[Feser] Stuck in the Map-Territory Gap

As promised yesterday, this is the kickoff of my analysis of and questions about Edward Feser’s The Last Superstition: A Refutation of the New Atheism.  Yesterday, in the index post, I pasted in Feser’s summary of Aristotle’s Four Causes which you may want to refer back to.  I’m interested in Feser’s book because he’s making the pitch that if you believe in moral order and some kind of telos for people, Christianity will follow inexorably.  Since I fit the first part of his modus ponens, I want to take a look at the rest of his proof.

But before I can get into any of the nitty-gritty philosophy, I have a broader epistemological question.  I like the four causes and I find them to be a useful way to clarify some questions, but I’m worried there’s a “map is not the territory” trap here.  Just because Aristotle’s schema is a good framework for conversations doesn’t mean it is an accurate portrait of the actual structure of the world.  It may just be a good shorthand for whatever we already believe, and it might not add anything to our understanding of the world.

Feser seems to think that Aristotle’s map is isomorphic to the territory, so we can expand our understanding of the world by studying these causes in depth and trying to figure out what you have to presuppose to make them work.  Feser is essentially bootstrapping his way up to metaphysics and theology by formalizing what he already knows and then accepting whatever is logically required to support his formalization.  I can follow a lot of the arguments that happen at this stage, but, even if I find them to be coherent, I don’t necessarily believe that they’re true.

What’s the epistemological hurdle a theory should clear to be considered a reasonably accurate map of the world, not just a summary of my previously held beliefs?  My usual standard is positive predictive value, but it’s hard to set up metaphysical experiments, and I’m reluctant to just give up and decide I can’t say anything substantive about these questions.  How do you strike a balance?

About Leah Libresco

Leah Anthony Libresco graduated from Yale in 2011. She works as an Editorial Assistant at The American Conservative by day, and by night writes for Patheos about theology, philosophy, and math at www.patheos.com/blogs/unequallyyoked. She was received into the Catholic Church in November 2012.

  • Pingback: Feser’s The Last Superstition [Index Post] | Unequally Yoked

  • Touchstone

    It would be relevant here to consider Sarah’s thesis about Aristotle’s Rhetoric. To be brief (and please, Sarah, correct me if I misrepresent you), her claim is that the Rhetoric lays out three types of rhetoric, and also makes it clear that the book itself is to be read as rhetoric. Yet when you ask yourself which of the three categories the book falls into, it turns out that it falls into none of them. This seems to suggest, she argues, that Aristotle understands rhetoric to involve laying out a schema of understanding that is not sound and complete, but nonetheless persuasive and useful, and through which understanding can be reached.

    Translating Sarah into the language of this blog, it means that Aristotle understands (and is interested in) the fact that the map is not the territory.

    Whether or not you agree with this, though, there’s no denying that Aristotle repeatedly argues in the Rhetoric that one should begin from commonly-held understandings when arguing. He follows his own advice (which is why so many sophomoric students of Aristotle think his works are all just common sense). This, too, I think, means that we might at least suspect the Four Causes of being maps rather than territory. He’s looking at, and formalizing (as you point out), four ways of understanding the world.

    Now, the fact that Aristotle arguably thought the Four Causes were maps rather than territory by no means necessarily implies that they aren’t also the territory (or isomorphic to it). But that’s another comment, and one I’ll need to think more about before I post.

  • Tom

    Three comments:

    (1) I wonder what you mean by “formalizing” here. If to formalize some theory T is simply to state it in formal terms (in a way that accurately preserves the content of T), then if some set P of propositions is logically required for a formalism of T, P is required for T itself. So if Feser can show that P is logically required for the formalism of T (and thereby for T simpliciter), and Feser “already knows” (as you say) that T, so that T is true, then it would follow that P is true, since whatever is logically required for a true proposition is itself true, right?

    (2) More generally (and more an answer to your question), most philosophers would say there are some propositions justified a priori and thus independently of empirical predictions. Some philosophers think these are only ‘analytic’: definitions or logically following from definitions, or something related. But most would say that definitions per se aren’t really about “the world,” but instead just about our language. So maybe you want something stronger, such as following from the laws of logic, or something stronger still, such as it being impossible for you to conceive of some P being false. Some metaphysical theories are that way for some cognizers.

    (3) More generally still–now, as I take it, about the entire exchange–I’ve never really had a handle on the question of whether atheists can believe that life has an ultimate purpose or telos. Atheists can certainly believe that human beings ought to do various things, or that a human life that contains such-and-such is a good human life. These are just brute ethical facts, or follow from other brute ethical facts. The theist, of course, imports different, more complex brute facts in order to explain those other facts.

  • http://prodigalnomore.wordpress.com The Ubiquitous

    I would use this image: Asimov’s robots have their positronic brains wired so deeply with the Three Laws of Robotics that one human character, wondering aloud why we just don’t change the laws for a particular robot, is suitably rebuked. Naturally, those were laws which were entirely man-made and hardly infallible, but from the perspective of the robots, it is of course their Wirer who devised them this way. (I wonder if we can tie in here Ratzinger’s argument from intelligibility?)

    As the Laws are for Robots, so are the Causes for us. Feser repeatedly argues that the Four Causes are such common sense that, every time someone objects to them formally, by the very next sentence the Denier of Causes belies his own thesis with some other statement that presupposes the Four Causes.

    The point here is not that the Four Causes are right or true. It is that we believe them, that leaving open the question of the Wirer we are wired to believe them, that only with great calculated effort and practice can we say things which do not sound like they invoke the Four Causes, and for only brief periods of time, and, damningly, that if we really did succeed in this we may not even be able to communicate the insight. The question is not so much axiomatic as moot.

    If we, believing the above, are to discuss anything at all, and additionally if we reject for practical reasons extreme skepticism toward our ability to discern any truth at all, we must accept the schema and move on with our lives.

  • Patrick

    Yeah… that’s why a lot of us don’t respect Feser very much.

    Once you’ve adopted that outlook, when faced with someone who disagrees with you, you have very few options. You can lecture them, on the assumption that they just don’t understand what you’re saying, and will agree with you once they understand it. Or you can accuse them of operating in bad faith. There are no other options.

    His position literally requires that it be impossible for anyone to reasonably disagree with him. He cannot conclude that any disputant is both educated and behaving in good faith, because the mere existence of such a person would mean that his theories were wrong.

    It’s an interesting parallel to other religious perspectives that claim self evidence as their support, and suggests that the more things change, the more they stay the same.

    • http://prodigalnomore.wordpress.com The Ubiquitous

      If I remember Feser correctly, he was atheist prior to reading Aquinas, theist afterward. Or maybe I’m thinking of someone else.

    • http://last-conformer.net/ Gilbert

      So do you think it is perfectly reasonable to hold both positions on any imaginable binary question even if fully informed?

      If yes, how do you decide among reasonable positions and what is the use of a reason that doesn’t rule anything unreasonable?

      And if no, how again is this a special problem with Edward Feser?

  • keddaw

    Good luck wedding the four causes to virtual particles or radioactivity, let alone wave-particle duality. Or have I completely misread the intent here?

    • http://prodigalnomore.wordpress.com The Ubiquitous

      We might as well explain blueness. Such a refutation of the Four Causes reminds me of typical refutations of the Five Ways: we miss the point by arguing over a gloss left over from a third level of translation, and we forget that breaking the symbol of a thing does not break the thing.

      Those of us with serious questions about the Four Causes would do well to seek out at least someone as competent as Mike Flynn.

      • http://prodigalnomore.wordpress.com The Ubiquitous

        (As in TOF Spot, not the other guys.)

      • Ray

        I assume this is what you are talking about:
        http://tofspot.blogspot.com/2011/11/is-this-answer.html#more

        Talks a good game, but if you actually know the physics there are big problems.

        On radioactive decay:
        First the big, obvious problem — it’s all well and good to say the weak force causes the radioactive decay of a uranium nucleus, but then what caused the uranium nucleus not to decay 5 minutes before that? There are also more subtle issues: Does it really make sense to talk about the potential for the weak interaction as part of a radioactive nucleus? Can you coherently posit a uranium nucleus that can’t decay? Maybe not. Such a nucleus would have to be different in a lot of other ways if you want to make the theory consistent (e.g. it would no longer be identical to the other uranium nuclei, so the Pauli exclusion principle would no longer apply. Such a nucleus could then occupy the same quantum state as an existing uranium nucleus, resulting in a stable nucleus twice as big.)
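        The “five minutes before” question has a sharp statistical face: exponential decay is memoryless, so a nucleus that has survived is statistically indistinguishable from a fresh one, and its history offers no cause for the timing. A toy simulation (my own sketch; the half-life and sample size are arbitrary choices) makes this concrete:

```python
import math
import random

# Illustrative sketch: sample a decay time for each simulated nucleus
# from the exponential distribution, then compare the unconditional
# survival probability with the survival probability of "old" nuclei.
random.seed(0)
HALF_LIFE = 10.0                    # arbitrary time units
rate = math.log(2) / HALF_LIFE      # decay constant
N = 200_000

times = [random.expovariate(rate) for _ in range(N)]

# Unconditional probability of surviving past t = 5 ...
p_survive_5 = sum(t > 5 for t in times) / N

# ... vs. probability of surviving 5 MORE units, given survival to t = 5.
survivors = [t for t in times if t > 5]
p_survive_10_given_5 = sum(t > 10 for t in survivors) / len(survivors)

# Both come out near 2**-0.5 ~ 0.707: survivors are not "due" to decay.
print(p_survive_5, p_survive_10_given_5)
```

        Both printed probabilities agree to within sampling noise, which is the memorylessness in question.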

        On common vs proper sensibilities:

        I think this misunderstands why physics uses frequency instead of color. Color can in fact be quite well understood through CIE color charts, the neuroscience of the temporal lobe etc. It’s just that frequency is a more useful concept for solving physics problems (where it matters a lot whether the yellow light came from a sodium lamp vs a 5500 K blackbody vs a mixture of ionized strontium and barium.) I’d further note that if you want to predict what a given person will claim about his or her own proper sensibilities, you do much better studying the person’s brain and visual system than studying the world outside of the person’s head, which at least strongly indicates that proper sensation really is in a person’s head rather than out in the world. Of course the contents of a person’s head are, near as we can tell, just as real and just as physical as the rest of the world.

        On Quantum Theory:
        This analogy I think understates how fundamentally different the quantum world is from the classical world. The quantum world is described by a well-defined mathematical formalism in which we can often make extremely precise measurements. However, concepts such as waves and particles from classical mechanics don’t scale down well. It’s not just that you can’t simultaneously measure position and momentum in the same direction; it’s that the two quantities cannot be simultaneously well defined in the (well-tested, mind you) formalism of quantum mechanics. The fact that the photons you use to observe quantum objects disturb them significantly isn’t the full explanation of why this is true (if it were, there would always be the possibility of finding a smaller, less intrusive version of the photon.) It is merely an explanation for how the uncertainty principle can be true in the first place and not be disproved by experiment.

        One other comment: Is it that theists have never heard of the many-worlds interpretation of quantum mechanics, or are they just so attached to the idea that quantum mechanics features an “observer effect” which somehow proves mind-body dualism that they ignore MWI?

        • Ray

          Oh, by the way, I probably should have used something that undergoes beta decay rather than alpha decay as an example. (There are uranium isotopes that do this, but not the common ones.) So pretend I said promethium or something.

          Or not. Alpha decay happens via the strong force, and it makes even less sense to say the nucleus decayed because of the strong force. Nuclei (other than 1H) couldn’t even exist without the strong force.

        • http://last-conformer.net/ Gilbert

          I’ll start on a physics note: The Helium 4 nucleus (2 protons, 2 neutrons) has whole-integer spin (in fact 0). Helium 4 nuclei are thus not subject to the Pauli principle. They are also stable. Beryllium 8 nuclei (4p, 4n) aren’t. Oxygen 16 (8p, 8n) is again stable. That is because the fact that two nuclei can “occupy the same quantum state” has absolutely nothing to do with “resulting in a stable nucleus twice as big”. To be blunt, entertaining the idea that this would follow is inconsistent with having understood the quantum mechanical concept of a state. I’m uncharitable enough to point this out because you came riding in on the high horse of us stupid theists not getting quantum mechanics. The fact is you yourself “talk a good game, but if you actually know the physics there are big problems.” So please knock it off, don’t try to intimidate us with half-understood side issues, and then we can talk as equals.

          On to the philosophical questions:
          On the five-minutes-earlier question, Aristotle (who came up with the four causes) didn’t have a problem with randomness. I don’t know if Aquinas believed in randomness, but if he didn’t, that disbelief is at least not relevant to his arguments for God. Pressing this into the conceptual boxes probably means we have to classify unstable nuclei as self-movers (not the same as an unmoved mover), and this might falsify an assumption of Aristotle and/or Aquinas if they thought all self-movers must be alive, which I am not quite sure of. But even then, the existence of even more self-movers doesn’t invalidate an argument featuring the existence of (fewer) self-movers.

          That a nucleus couldn’t exist without the various forces doesn’t invalidate their being in it. The Aristotelian theory already accounts for things existing in objects that don’t exist without them; they are called propria.

          So basically, as is almost always the case, quantum mechanics has far fewer philosophical consequences than one might think.

          • Ray

            On the physics:
            Your objection is incorrect for two reasons
            1) Bose statistics is different from the statistics obeyed by collections of distinguishable particles. (identical bosons are MORE likely to be found in the same quantum state than distinguishable particles.)
            2) Treating compound particles with integer spin as if they obey Bose statistics is an approximation that only works at length scales significantly larger than the internal structure of the compound particle. This means that if you’re going to compute things like nuclear stability, you need to do your calculation on the level of individual nucleons, which means the Pauli exclusion principle still holds.

            The standard way of doing this is using the nuclear shell model: http://en.wikipedia.org/wiki/Nuclear_shell_model .

            At the risk of tl;dr, I’ll give some examples of the framework in action just to show that I’m not blowing smoke. The basic idea is that you treat the strong interaction as a potential well at the center of mass of the collection of nucleons you are analyzing. Helium 4 is especially stable because all four nucleons are in the ground state of this potential well (you can fit four because they are distinguished by whether they are protons or neutrons and whether they are spin up or spin down.) The next nucleon you add, however, will be at a higher energy level. This is why there is no stable nucleus with 5 nucleons. As the well gets deeper, however, the added binding energy makes up for the fact that the outer shell of nucleons is at a higher level. This is also why beryllium 8 is unstable — the first four nucleons are in an s orbital (like in helium 4,) but the next four are in a p orbital. In the case of beryllium 8, it is energetically unfavorable to kick off a single nucleon, but it is energetically favorable to kick off a collection of four tightly bound nucleons (i.e. a helium 4 nucleus.) Beryllium 9 is stable again since the next nucleon can still be added to a p orbital, so there’s no extra kinetic energy, but there’s no way to split the 9 nucleons among nuclei that are as tightly bound as helium 4.
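            The shell bookkeeping here can be sketched in a few lines (a toy model of my own: I assume bare 3D-oscillator level degeneracies and ignore the spin-orbit coupling a real shell model needs for the higher magic numbers):

```python
# Toy shell-filling sketch. Assumed simplification: 3D harmonic
# oscillator levels with (n+1)(n+2)//2 spatial states each, times 2
# spin states, times 2 for proton/neutron. Even this crude picture
# shows why a fifth nucleon is forced into a higher shell.

def shell_capacity(n):
    spatial = (n + 1) * (n + 2) // 2
    return spatial * 2 * 2          # spin up/down x proton/neutron

def fill(nucleons):
    """Pauli-respecting filling: returns [(level, occupancy), ...]."""
    levels, n = [], 0
    while nucleons > 0:
        occ = min(shell_capacity(n), nucleons)
        levels.append((n, occ))
        nucleons -= occ
        n += 1
    return levels

print(fill(4))  # helium-4: [(0, 4)], a closed s shell
print(fill(5))  # [(0, 4), (1, 1)]: the fifth nucleon sits in the p shell
```

            The closed shell at four nucleons is the helium-4 stability above; the forced promotion at five is why there is no stable mass-5 nucleus.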

          • http://last-conformer.net/ Gilbert

            This is generating more heat than light, but I also don’t like to bow out of this kind of thing without prior notice. So I will make this last comment on the physics but then none more on your possible reply.

            Your point 1 doesn’t seem to have any relevance to the question at hand. Yes, Bose statistics differ from Boltzmann statistics, but you were talking specifically about the Pauli principle, and for that not to hold distinguishable particles aren’t required; bosons are quite sufficient.

            Your point 2 has a true part. Obviously, if you want to know whether the nucleons will aggregate into nuclei, you will have to talk about the state of nucleons. But even at that level of analysis you don’t have nuclei obeying the Pauli principle, because you simply have no nuclei at all. Now once we know there are, say, two nuclei with whole-integer spin, treating them as bosons is not strictly an approximation but simply exploiting prior knowledge. Because in that case the nucleus exchange operator is a product of an even number of nucleon exchange operators, and the state vector being antisymmetric in the nucleons mathematically implies its being symmetric in the nuclei. It would quite simply be self-contradictory for a state vector to be antisymmetric in the nucleons while not being symmetric in an even number thereof. So as long as we already know there are separate nucleons, limiting ourselves to the nucleus-symmetric sub-space of state space doesn’t impose anything on reality, because we already know the real state vector to be in that sub-space. So at one level of analysis you have no nuclei at all, and at another level of analysis you have nuclei not bound by the Pauli principle, but there is no level of analysis at which whole-integer spin nuclei both exist and obey the Pauli principle. And you talked specifically about the nucleus being “no longer be identical to the other uranium nuclei, so the Pauli exclusion principle would no longer apply”. And that is wrong, because as long as the nuclei exist they don’t obey the Pauli principle even now.
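            The exchange-operator parity claim is easy to check mechanically. A small sketch (the labels and the four-nucleon composite size are my own illustrative choices):

```python
# Sketch of the parity argument: exchanging two composite particles of
# k fermions each is a product of k constituent transpositions, so its
# sign is (-1)**k -- even k gives +1, i.e. a state antisymmetric in the
# constituents is automatically symmetric in the composites.

def perm_sign(p):
    """Sign of a permutation given as a tuple of images of 0..n-1."""
    sign, seen = 1, set()
    for start in range(len(p)):
        if start in seen:
            continue
        length, j = 0, start        # walk this cycle
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        if length % 2 == 0:         # even-length cycle flips the sign
            sign = -sign
    return sign

# Two "nuclei" with 4 nucleons each, labeled 0-3 and 4-7.
# Swapping the nuclei maps i -> (i + 4) mod 8, i.e. four transpositions.
swap = tuple((i + 4) % 8 for i in range(8))
print(perm_sign(swap))  # 1: antisymmetry in nucleons => symmetry in nuclei
```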

            Now you might say you were concluding from the hypothetical non-decaying nucleus being a different kind of object that its constituent parts would be different kinds of objects from those of an ordinary one, so that the Pauli principle wouldn’t apply between them and the ordinary stuff. But taking that conclusion as valid would already presuppose that decayability is not a property of the nucleus, i.e. presuppose the falsity of the very possibility you were trying to reduce to absurdity. If, for example, the decaying and the non-decaying variant were different states of the same system separated by an insurmountable energy barrier, that would do the trick without affecting the constituent parts at all. I don’t believe such an alternative state exists, because of friar Occam’s razor, but there is certainly nothing logically contradictory about it.

          • Ray

            I’ll confess I haven’t been expressing myself as well as I could have. I’ll see if I can do somewhat better:

            1) The main point is that considering properties like “participation in the weak force” as properties of single particles rather than as applying to all the particles of a single species is problematic. Both Bose and Fermi statistics enumerate the ways in which particles of a given species may differ from one another, and “participation in the weak interaction” is not one of them. You can’t just pick a nucleus and say “this nucleus cannot decay via the weak interaction” without some unintended consequences.

            If you turn off the weak interaction nucleon by nucleon, or quark by quark, the result is that you can fit more nucleons in the low energy nuclear shells, because you have more species of low mass baryons. This changes the whole pattern of nuclear stability. If instead you are modifying the nucleus as a whole (which would not be the natural way to do it, but whatever) then you can only be guaranteed a change in behavior with respect to more subtle phenomena, like bose-einstein condensation.

            Finally, positing the two species as two states of the same particle will not save you. The distinction between two states of the same particle and two different species of particles is mostly one of convention. Thus it may be convenient to treat u and d quarks as “isospin” states of a single light quark, and treating the two polarizations of the photon as different species of particles for the purposes of the two slit experiment will get you the right interference pattern (if you use optics to ensure the photons going through the two slits are of opposite polarization, you won’t get interference fringes. Try it.)

          • Ray

            Oh yeah, there was one other thing: I’m not sure I quite properly described Bose statistics for compound particles. Clearly, if you’re not just making an approximation, you had better get the same answer treating things at the level of nuclei or at the level of nucleons. In the latter case you have the Pauli exclusion principle, so something with the same effect must be included if you’re doing calculations with the whole nuclei. I believe the way this is resolved is that helium nuclei DO obey Bose statistics, but it is the Bose statistics of particles that strongly repel one another at short distances. It is this repulsive potential which will do the same work as the Pauli exclusion principle.

  • Ray

    You don’t need quantum physics to demonstrate that the four causes cannot be isomorphic to the territory. Calculus will do just fine. It seems to me that the four causes framework (at least as used in the five proofs) assumes that if I ask “what is the efficient cause of X?” or “what is the final cause of Y?” I will get a unique answer. Needless to say, this doesn’t really work with continuous variables like we get in any calculus based physics. This is also closer to how things happened historically. Aristotle’s philosophy was already considered outmoded and deeply flawed by the end of the 17th century. It didn’t have to wait until the early 20th century when the quantum revolution happened.

    • http://last-conformer.net/ Gilbert

      First, the framework doesn’t assume final causes to be necessarily unique. For example, the mouth has two final causes, talking and eating.

      But more importantly, I don’t see the supposed conflict with continuous variables. While he of course didn’t know of the completeness axiom, Aristotle actually would have known there are irrational numbers, so he doesn’t seem to have seen a problem there either.

      • Ray

        The first problem with the multiplicity of causes is that it brings into doubt whether the full list of the causes of a thing can be unique and well defined. (e.g. someone like Dawkins would argue that the mouth, not being designed by an intelligent agent, only has the illusion of telos, leaving the list of final causes empty, while someone with a more permissive attitude toward sexuality than the average Thomist might add oral sex to the list of final causes of the mouth.) If one is to claim that Aristotle’s map is isomorphic to the territory, there had better be some feature on that map which is uncontroversially there and in one-to-one correspondence with each feature of reality.

        That said, the biggest problem with continuous variables is that they bring into doubt whether an infinite regress is really impossible. It seems one can validly find a cause for an event in the state of the world 1 second prior, half a second prior, a quarter of a second prior, and so on. Thus one could create an infinite regress of causes in the space of 2 seconds. Aristotle knew of this problem in the form of Zeno’s paradoxes, and his solution is to divide the causal chain into a finite collection of periods of time where nothing happening in each period exhibits any “change” in some philosophical sense. The problem is, this vision looks a lot more arbitrary when you regard physics as the solution to some complicated differential equation rather than as a series of discrete events separated by periods of uniform motion.
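        The two-second regress is just a convergent geometric series, which a few lines make concrete (a sketch; the truncation depth is an arbitrary choice for illustration):

```python
# Causes located 1, 1/2, 1/4, ... seconds before an event pile up
# without bound in number, yet the whole chain fits inside a finite
# two-second window, because the geometric series converges.
from fractions import Fraction

DEPTH = 50  # arbitrary truncation depth for the illustration
offsets = [Fraction(1, 2 ** k) for k in range(DEPTH)]
total = sum(offsets)

print(float(total))   # approaches 2 from below as DEPTH grows
print(total < 2)      # True: infinitely many causes, finitely much time
```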

        Now quantum mechanics does complicate this somewhat, and depending on interpretation you may be able to solve some of the problems, but I don’t think any interpretation solves all the problems with Aristotelian philosophy simultaneously. In any event, my favored interpretation (MWI) is straightforwardly continuous.

        • http://last-conformer.net/ Gilbert

          On the unknown multiplicity of causes I simply don’t see the problem.

          Your two example alternatives for the mouth’s final causes are actually very different kinds of objection. The “empty final causes list” argument would apply equally to any other natural object, i.e. it is just a rejection of final causes. Yeah, “someone like Dawkins” would deny classical metaphysics; I would have suspected that even if you hadn’t told me. That someone might reject an idea is not in itself an argument against it, for someone might reject every idea.

          As for the oral sex part, I think there are good reasons against that assumption, but right now I don’t want to jump down that kind of rabbit hole because the whole sexual morality question is not relevant to what we were talking about. So I’ll just say: “So what?” OK, people may disagree about the final causes of some object, and even their number, but how does that affect the concept of final causation? By analogy, people may also disagree about the number of peas in a pea-filled glass, but that disagreement doesn’t make them suggest that the glass might actually be empty.

          As for continuous variables and infinite regress, I don’t think Aristotle did what you claim he did. Rather, he introduced a distinction between actual and potential infinities, which of course opens a different can of worms.

          But more importantly, Aquinas didn’t have a problem with every infinite regress. He didn’t think it philosophically provable that the universe had a beginning, though he did believe it as a revealed fact. And in this he did allow for infinite causal series. By his example, the causal chain between fathers and sons could theoretically go on infinitely, because once the son is there and grown up, he can procreate by his own power without involving his father. If you compress all of that kind of chain into two seconds, that is certainly cute, but it doesn’t add anything new, because infinite causal lines are already sometimes allowed. In both cases you would have what a Thomist calls an accidentally ordered series. But there are also essentially ordered series. Those are series whose elements have no power of their own to affect later elements other than as instruments of the prior elements. And then you can’t explain the power by infinite regress. This thought is not at all foreign to modern maths. For example, the integers go infinitely far in both directions. So far, so good. But if you prove an induction step on the integers, that is A(n) ⇒ A(n+1), you can’t appeal to infinite regress to claim A(n) for all integers n. Aquinas is opposed not to infinite regress per se but to infinite regress in essentially ordered series, and that is simply not affected by a time continuum. In fact, the elements of an essentially ordered series are paradigmatically seen as simultaneous.
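          The induction point can be made concrete with a toy check over a finite window of integers (the predicate A here is my own illustrative choice):

```python
# Take A(n) := (n > 100). The induction step A(n) => A(n+1) holds for
# every integer, but with no base case -- and Z has no first element --
# the step alone does not make A(n) true for all n.
def A(n):
    return n > 100

window = range(-1000, 1000)
step_holds = all((not A(n)) or A(n + 1) for n in window)
holds_everywhere = all(A(n) for n in window)

print(step_holds)        # True: the step is valid everywhere checked
print(holds_everywhere)  # False: yet A fails for most integers
```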

          • Ray

            On Final Causes
            The main point is that the term “final cause” is ambiguous enough that it may denote several different things. This leaves arguments invoking final causes at serious risk of committing the equivocation fallacy.

            Dawkins is an excellent example of this, in fact. He does not deny purpose either in the sense of evolutionary adaptation or in the sense of human designs; however, he feels the two are sufficiently different concepts that they should be denoted by the terms “archaeo-purpose” and “neo-purpose”. My other example shows that purpose in the sense of “how a thing ought to be used” is also open to multiple interpretations, and unlike the case of the pea jar, where there is agreement about how to decide who’s right, there may be no such agreement, even in principle, on how to decide the appropriate level of sexual permissiveness.

            On Aristotle:
            He absolutely does what I claim he does
            http://classics.mit.edu/Aristotle/physics.6.vi.html (see Part 3) At the very least he denies the concept of instantaneous velocity.

            On essentially ordered and accidentally ordered series:
            Those of Aquinas’s proofs that invoke infinite regresses do so for efficient causation, where it is in no way obvious that one requires a chain of causation any less accidentally ordered than in the case of a sequence of fathers and sons.

            The more convincing examples seem to come from math (which I guess is formal causation.) However, even there, the old Greek dream of proving all true statements from a finite set of axioms has been definitively killed by Gödel. True does not always imply provable, nor does real always imply caused.

            Of course, now I am citing things that happened long after Aristotle’s philosophy had lost most of its currency. I suppose Aristotelianism died by a thousand papercuts. Perhaps there never was a definitive result that proved the more metaphysical side of Aristotle wrong or inconsistent. But many of Aristotle’s claims that were open to empirical refutation were refuted by Galileo, casting doubt upon the propositions that were not so easily tested. Also, better tools were developed for thinking about mathematics and the physical world. While Aristotle weaseled his way out of Zeno’s paradoxes with difficulty, calculus and measure theory solved them with an elegance and generality that Aristotle could only dream of.

            It is interesting to note that even knowing all of modern physics, one can still be a self-consistent geocentrist if one is willing to sacrifice enough in the way of simplicity and elegance. Likewise one can still be an Aristotelian or a Thomist. One wonders, however, what is the point?

  • Joe

    If you already believe in moral order and a telos for people, but aren’t sure whether Aristotle’s map fits the territory, then it’s up to you to come up with a better one. I’m not sure that’s possible.

  • Joe

    Forgive me if what I say is dumb. But perhaps the map is perfect but our empirical understanding of the territory is incomplete? Is it the job of metaphysics to keep up with physics or vice versa? Aren’t scientists always trying to find the cause of material phenomena?

  • Pingback: Three First Causes I Don’t Pray To | Unequally Yoked

  • http://last-conformer.net/ Gilbert

    If you believe yourself rational and are sufficiently interested, you presumably believe your beliefs to be a reasonably accurate map of the world. If you thought another set of beliefs offered a better map, you would presumably adopt it. And if you can’t think of a better map than the one you believe, then your beliefs are the best map you can think of, which is reasonably accurate. Now there might be large areas of reality where you didn’t bother to make the best map you can make (there certainly are for me), but that is not an inherent problem of your map-making techniques. So you shouldn’t entertain an insurmountable antagonism between a “reasonably accurate map of the world” and “a summary of [your] previously held beliefs”.

    Now there might be the additional problem that even the best map you can come up with lacks important details and that problem is real. The best strategy I can come up with is to expose your map to nit-pickers who really don’t like your conclusions and see what additional details they can come up with. But in any case, omitting things you already know of isn’t a solution, it will only make the map even less accurate. Short of omniscience you will always be using a map and deciding it’s too bad to rely on is just a decision to rely on an even worse blank map.

    Now if something just formalizes what you already believe you might ask what that adds. But asking that question is answering it: the formalization. With formalization you can then proceed to work out possible contradictions in your beliefs or between them and your experience and then see how you can refine them to avoid those contradictions. That is basically what philosophy does and it’s nothing to be ashamed of.

