Just Give Me One Good Reason Not to Change My Mind

You wanted bigger guns?

Eli of Rust Belt Philosophy thinks my LARP Your Way to Truth advice is a snare and a delusion. Let me post some excerpts, but you should pop over and read the short takedown piece in full for fairness:

To begin, I cannot begin to imagine why Libresco thinks that curiosity only happens in the context of suspended disbelief. Perhaps she herself has that sort of cognitive limitation – though I doubt it – but it’s hardly something that the rest of us have to deal with. I also have a really hard time believing that hypothetical scenarios are best evaluated by imagining (or attempting to imagine) the effects of those scenarios on one’s own life. I mean, let’s say that we’re talking about the potential existence of sentient alien species on other planets – is that really something that is at all analyzable in terms of its effect on my day-to-day existence? If so, I fail to see how. Most importantly, though, Libresco totally misses the fact that humans are optimized for aggressive, raised-hackles, high-stakes reasoning. Asking us to reason with a constructive curiousness – to reason, that is, in a state of suspended disbelief or make-believe – is tantamount to asking us not to reason at all.

[Eli cites the recent study on choice blindness and says...] These poor people inadvertently did more or less exactly what Libresco is asking for: they didn’t bother to “think about all the reasons that it can’t be that way” but instead “just accept[ed] it as a premise” and went from there.

Once more, in summary: human reasoning faculties are practically weaponized in their forms and functions. But because it’s unpleasant to be at the pointy end of the stick, we rarely turn these weapons on ourselves, which means that we rarely force ourselves to achieve a rational response. We do this least of all when we assume the truth of an idea and engage in mere low-stakes, just-go-with-it play-acting – when, for instance, we want to build a relationship, or “curiously” explore an imagined world, or defend an idea that we attribute to ourselves (even if, as in the experiment, that attribution is fictitious). In short, the evidence indicates overwhelmingly that we reason best when we reason competitively, and that competitive reasoning (just like competitive anything else) usually only takes place when there’s someone else to compete with.* Libresco’s approach would have us abandon the competitive approach in favor of something warmer and fuzzier, but that’s literally the exact opposite of what the evidence indicates we should do. Our warm-and-fuzzy reasoning is so embarrassingly bad that it allows us to switch positions in a matter of minutes, and that’s what she thinks we should use to “think honestly”? Seriously?

Let me do a quick clarification. I don’t think the LARPing strategy or the Leave a Line of Retreat exercise should be used to prove something to yourself.  Plenty of false worlds can be fun/thought-provoking to LARP in.  My first leap of faith was the result of thinking about the Young Wizards mythology and discovering that, if it was true, I was pretty sure I wanted to offer it my loyalty.  That realization meant I was willing to take the Wizard’s Oath; it didn’t blind me to the fact I wasn’t offered wizardry as a result.

The LARPing game is a prelude to the data gathering/evaluating process.  We don’t usually know our opponent’s arguments as well as we think, so imagining the argument/worldview from the inside sharpens our vision and helps us make sure that we’re addressing the actual strengths and weaknesses of our opponent.

This is intended as a way to “turn the weapon on ourselves” as Eli wishes, but it’s not the whole process.  Eli’s right that we tend to flinch away when we find a belief that it hurts to doubt.  The LARP exercise is my way of sidling up on my beliefs so I can notice a weak point on my side or a strength of the opponent before my defenses can engage.  Once I’ve spotted and acknowledged it, I’ll probably go back to testing it using less wacky ways of thinking.

It might be possible to overwhelm your ugh fields with a frontal assault, but I’m not very good at it.  I’d love to have Eli post, here as a guest or on his own blog, about how he trains that strength and applies it.  I’d also like to see the citations that show that the weaponized approach to thinking is the most effective.  I’m pretty sympathetic to the LessWrong caution against treating arguments like soldiers.

One problem is that when I engage my defensive strength, it’s easy to lapse into thinking I’m defending my current best estimate of truth instead of truth.  That makes it hard to notice I’ve erred and joyfully cede the field.  Another problem is that a weaponized defense is better at keeping me unmoved than at persuading others.  I don’t need to provide a strong argument for my own position; I just need to keep showing that the other side is weak.

And pretty much everyone is weaker than Vera

For example, when I read a study whose results I doubt or don’t like, I tend to flip to the methodology and look for anything that falls short of ideal procedures.  Then I’m safe, because the study isn’t trustworthy.  I bet that a really good study would have validated my position, and if that ideal study is logistically unfeasible, ah well.  This kind of thinking is what drives confirmation bias.  If Eli got to pick, I think he’d rather fight a Christian interlocutor who’s curious instead of just aggressive.

I’m fairer to my opponent and serve myself better if I get curious instead of angry.  Ok, so this study is flawed.  Is it flawed in a way that makes it useless, or can I think about which direction it’s biased in and by how much?  If I actually had to research this problem, instead of critiquing papers in seminar, what would the proper study look like?  If it’s unfeasible, how could I most accurately glean what I want to know?  Do I want to actually try to run some of this on Mechanical Turk?

Aggression is satisfied when I beat a person.  Curiosity is only satisfied when I beat myself and the whole world and know something that I didn’t know before.

About Leah Libresco

Leah Anthony Libresco graduated from Yale in 2011. She works as an Editorial Assistant at The American Conservative by day, and by night writes for Patheos about theology, philosophy, and math at www.patheos.com/blogs/unequallyyoked. She was received into the Catholic Church in November 2012.

  • http://www.ephesians4-15.blogspot.ca Randy

    I think Eli’s post shows an excellent example of the arrogant type of reasoning that simply looks for holes in another person’s position rather than trying to understand it. It is very common. We see it especially when the person has a strong emotional attachment. Religion and politics are the places to find this. I am not sure LARP is the right answer either. I think there needs to be a humanizing of the other side. Something that makes your default response one of interest rather than one of dismissal. Often it is the person you gain respect for first and the ideas second. LARP tries to invert that.

    • http://rustbeltphilosophy.blogspot.com Eli

      Did you even read the full post? Would you care to respond to any of its points in particular? Or are you so disturbed by the style that you didn’t bother to do that sort of stuff?

      • R.C.

        Eli, I gotta say, most of what Leah said in her clarification was exactly the assumptions I had when reading her suggestion of LARP-ing initially. Obviously the LARP-ing isn’t critical thinking in-and-of itself, but rather a way of overcoming a bias against uncomfortable changes to one’s worldview; this has to then be brought back within the sphere of critical thinking after-the-fact. From what I know of Leah, I anticipated that that was her approach, all along.

        When I read your critique, it seemed as if you had few of those assumptions. I’ll tell you flat-out: I did not analyze your post deeply, because the first impression I had was, “This guy went into reading Leah’s post itching to find reasons that she was a dishonest or bad thinker, and, naturally enough, got his bias confirmed.” The tone was part of that, but the sheer distance between my own assumptions about how Leah was and was not proposing LARP-ing’s usefulness, and yours, made me conclude you’d given her original proposal a quite ungenerous reading: no willingness to give the benefit of the doubt.

        I mention this because the same criticisms could, I suspect, have been expressed by expressing concern that Leah might be using the LARP approach to undermine rationality rather than maximize it. The fact that you didn’t do that fell afoul of my “Ugh Field” and made it difficult for me to suppress my own reflexive irritation long enough to finish reading your post carefully. Part of me wanted to give you the benefit of the doubt, but the other part was thinking, “Why bother?”

        I’m just one guy reporting this reaction. One data point does not make a trend. But I’d be surprised if I was alone.

        • http://rustbeltphilosophy.blogspot.com Eli

          “Obviously the LARP-ing isn’t critical thinking in-and-of itself…”
          Sorry, but have you been reading any of these comments? The LARPing thing isn’t critical thinking – that much is certainly true and everyone seems to agree about that – but her original claim works only if it’s useful for truth seeking in and of itself, not merely as a stepping stone. (She continues to assert, for example, that LARPing is useful for spotting contradictions.) And, indeed, she has to say this, given the way she’s laid out her position; nothing short of that claim will give her the basis to say what she has said. But she’s wrong: it isn’t a truth-seeking thing in and of itself, and it can’t be.

          “From what I know of Leah, I anticipated that that was her approach, all along.”
          For the record, this is why I don’t do charity: you’re intentionally overlooking the worst parts of her argument because you know her and trust her. That’s not too great, reasoning-wise – all you’re doing is creating blind spots where you can’t check connections between ideas or the extent to which those ideas conform with reality.

          “When I read your critique, it seemed as if you had few of those assumptions. I’ll tell you flat-out: I did not analyze your post deeply”
          Well, now I know how much your commitments in the area of interpersonal reasoning are worth to you.

          “I mention this because the same criticisms could, I suspect, have been expressed by expressing concern that Leah might be using the LARP approach to undermine rationality rather than maximize it.”
          Yes and no. The same semantic content could have been conveyed in that way, sure. But the very fact that you have a different reaction depending on tone means that there’s a facet of my criticism that depends on harshness and bluntness; my harshness and bluntness are, accordingly, intentional. That facet, I find, is what makes the difference between

      • http://www.ephesians4-15.blogspot.ca Randy

        Actually I was mostly talking about the tone. But I do think tone indicates bias and bias affects analysis. So when you open with:

        Sometimes I just want to learn how to fish and move to some tiny island nation where I can’t speak the language. I’d just live in a hut, sustain myself on roasted poi (or whatever) and whatever I could garden, and never have to deal with this sort of nonsense ever again.

        That tells me you are not interacting charitably with your subject matter. You are assuming Leah’s idea is just stupid. Since she is not stupid it is likely you don’t know what you are talking about. So this tone sets up my bias and makes it harder for me to rationally analyze your post.

        Personally I think reasoning from a new angle is often profitable. That does not mean you discard critical analysis completely. But doing some thought experiments can be useful especially when you are reducing your emotional reactions to certain things by changing the picture in your mind. If you find out your issues with a certain view are more emotional than logical that would be interesting.

        Your devotion to “reason competitively” is OK but I think you have a particular thing in your mind when you say that. There is an assumption that religious people don’t do that. They do. It seems like there is more fear in your post. That somehow someone can get put under the spell of religion and lose their ability to think. I would say the opposite. That sin is what causes us to lose our ability to think. That people can get put under the spell of pride or lust or anger and become irrational. That is why placing too much faith in reason is a mistake.

        • http://rustbeltphilosophy.blogspot.com Eli

          “That tells me you are not interacting charitably with your subject matter.”
          And I don’t try to. Charity is a flawed heuristic and I reject it openly.

          “You are assuming Leah’s idea is just stupid.”
          Nope – I’ve concluded that the idea is stupid. Big difference. Also, I didn’t say that it’s *just* stupid. It may be stupid and other things, such as psychologically healthy – that’s certainly a possibility.

          “Since she is not stupid it is likely you don’t know what you are talking about.”
          Oh, come off it – this is a stalemate argument every time. I’m not stupid, either, so by your logic it’s just as likely that you don’t know what you’re talking about. Seriously, this particular line of attack ends the same way every time; it’s a movie I’ve seen before and it bores me.

          “Personally I think reasoning from a new angle is often profitable.”
          But it’s not reasoning, it’s imagining.

          “But doing some thought experiments can be useful especially when you are reducing your emotional reactions to certain things by changing the picture in your mind.”
          Then you agree with me that its value isn’t in its accuracy or its fidelity to the truth? Because that was a significant part of my point. We can get to the other part of my point later, but let’s iron this one out first.

          “Your devotion to “reason competitively” is OK but I think you have a particular thing in your mind when you say that. There is an assumption that religious people don’t do that.”
          Wha huh? What in blue blazes makes you think this? This is completely inaccurate.

          • Brandon B

            “But it’s not reasoning, it’s imagining.”

            Our ability to imagine helps us to reason. Before you can analyze something, you need to know what it is. When all we had to reason about was physical tools, like rocks and food and whatever, we could reason accurately about them with a very limited imagination. There is no way to grasp “Catholicism” physically, however (or “quantum physics”, or “democracy”, etc.), so we need to deliberately use our imaginations to be able to begin reasoning about them. Leah seems to be proposing that atheists use more imagination than they have previously, because it might give them a more accurate picture of Catholicism.

          • http://rustbeltphilosophy.blogspot.com Eli

            “Our ability to imagine helps us to reason. Before you can analyze something, you need to know what it is.”
            I think you and I are using the word “imagine” in very, very, very different ways.

            “When all we had to reason about was physical tools, like rocks and food and whatever, we could reason accurately about them with a very limited imagination.”
            Oh? Can you prove this? Can you even present evidence for it? Can you even, for that matter, present evidence that we had “a very limited imagination” at the time in question? Because I think this claim is nonsense.

          • http://www.ephesians4-15.blogspot.ca Randy

            And I don’t try to. Charity is a flawed heuristic and I reject it openly.

            That explains a lot.

            Nope – I’ve concluded that the idea is stupid. Big difference. Also, I didn’t say that it’s *just* stupid. It may be stupid and other things, such as psychologically healthy – that’s certainly a possibility.

            The point is you don’t really get where she is coming from. If you did you would be able to explain her thought process and where she differs from you. So “stupid” becomes a synonym for “I don’t know how she got here but I am sure she is wrong.”

            So then you agree with me that its value isn’t in its accuracy or its fidelity to the truth, then? Because that was a significant part of my point. We can get to the other part of my point later, but let’s iron this one out first.

            I think the assumption is you don’t already know the truth. So one value would be that your new pseudo-paradigm might actually be more accurate than your present one. But even if it is not it can offer value. Catholic teaching says you can always learn something about God by interacting with another person’s spirituality. From a pure logic point of view that seems right as well. We see paths of reasoning that might prove useful somewhere else.

            Wha huh? What in blue blazes makes you think this? This is completely inaccurate.

            Glad to hear it. Sorry if I jumped to a wrong conclusion.

          • Ian

            Not really related, but just (honestly) curious.

            Is “a flawed heuristic” somewhat redundant?

            I’m not well-versed in philosophy, but in my understanding of the term, flaws are implied in heuristics. Would “more flawed” or “very flawed” be more accurate?

          • http://rustbeltphilosophy.blogspot.com Eli

            @Ian – yeah, that’s basically a fair point. I’d go with “tremendously flawed,” in this case.

            @Randy
            “The point is you don’t really get where she is coming from. If you did you would be able to explain her thought process and where she differs from you.”
            So is that what matters to you – that I get where she’s coming from? Because I would have thought that the more important question is whether she’s right or not. Whether I get where she’s coming from or not has nothing to do with whether her ideas about LARPing are accurate, nor do they even affect the cogency of my objections. This is a standard philosophy professor’s move, this “get where they’re coming from” thing, only they tend to use the word “project” (as in, “Leah’s project in this post was to…”). I don’t buy that, either: I don’t have to care a whit about her project in order to see something wrong in her reasoning.

            “So one value would be that your new pseudo-paradigm might actually be more accurate than your present one.”
            Right, sure, but this is an awfully weak point in its favor. I could, conceivably, also throw a bunch of magnetic poetry up against the wall and have that deliver a more accurate paradigm than my present one – but does that make that a good epistemological method? I would tend to think not.

            “Catholic teaching says you can always learn something about God by interacting with another person’s spirituality. From a pure logic point of view that seems right as well.”
            Uhhhh what? How is “another person’s spirituality” (whatever that is) analogous to a hypothetical scenario in your own head? Also, again, if your point in favor of LARPing is that it “might prove useful somewhere else,” then I sort of have to think that I win. I mean, I guess I don’t disagree with that, but Leah has said that she thinks that LARPing is good (though not great) for reasoning about the thing you’re LARPing about. If your strongest defense of LARPing is that it might possibly be useful when it comes to thinking about a different subject, then it sure seems like you should disagree with her, too.

          • http://www.ephesians4-15.blogspot.ca Randy

            The charity is towards the person you are interacting with. It is not towards the idea. Someone else used the phrase “Don’t be an asshole.” That is similar. You might find that tremendously flawed. I would find that very sad if that was the case.

            … skipping a little …

            Leah has said that she thinks that LARPing is good (though not great) for reasoning about the thing you’re LARPing about.

            Sure it is. But if it is false, then understanding it better has limited value. But my world and life view is big. I can learn logic from atheists and contemplation from Buddhists and obedience from Muslims. If you have a narrow world view then maybe nothing you learn will matter at all.

          • http://rustbeltphilosophy.blogspot.com Eli

            “The charity is towards the person you are interacting with. It is not towards the idea.”
            Then what’s the point? I’m not attacking the person, I’m attacking the idea (or, more accurately, the system of ideas); the idea is what’s true or false, not the person.

            “I can learn logic from atheists and contemplation from Buddhists and obedience from Muslims. If you have a narrow world view then maybe nothing you learn will matter at all.”
            Oh, relax – we’ve been here already in this comment thread. To repeat myself, the “learning” that happens through art [or cultural imitation] would more accurately be described as maturation (or, attempting to remove the normative aspect, personal development). That is “learning” of a sort, but it’s not a philosophical sort.

            I thought we were discussing true and false – if you want to start talking about this other kind of learning, we’ll have to start from scratch.

  • Alex Godofsky

    Maybe I’m just misunderstanding this whole thing, but is the point of contention really just about what affect we should attach to counterfactuals when reasoning about them? Isn’t that answer going to vary a lot from person to person just based on personality? It probably even varies from time to time based on a person’s mood.

    Also, am I right that the snarkiness at the beginning of his post is that he thinks “assume Christianity” results in a logical contradiction a priori?

    • http://rustbeltphilosophy.blogspot.com Eli

      “is the point of contention really just about what affect we should attach to counterfactuals when reasoning about them?”

      Nope – at least, not from my side. My point of contention is that I don’t think that LARPing (or really any similar thing) is a good way to “accurately imagin[e] what the world would be like if a claim [such as Christianity] were true”; see my comment below.

      “Also, am I right that the snarkiness at the beginning of his post is that he thinks “assume Christianity” results in a logical contradiction a priori?”

      More or less. But okay – let’s say that for some reason you disagree about Christianity in particular. Even so, you have to admit that there is no way to “accurately imagine” a world that’s logically impossible. Yes?

      • Alex Godofsky

        re: LARPing, eh. It might help some people and it might not help others. I have no bone in this fight. I do think claims that a particular thinking aid is systematically flawed OR beneficial probably overstate themselves.

        re: contradictions, yes of course. I was just trying to figure out if that’s what you were saying because you were being coy.

        I think there are plenty of statements of Christianity that don’t deductively lead to a contradiction; (as an atheist) I don’t have a lot of difficulty imagining that counterfactual, though perhaps I just never perform enough deductive steps.

    • leahlibresco

      “is the point of contention really just about what affect we should attach to counterfactuals when reasoning about them?”

      Yep — at least, from my side. And re: Eli’s second point in his reply to you, I think we need to be able to imagine and poke around to spot that a world’s logically impossible. When you hit a contradiction, you need to discard one of the premises, or the rule that labels it a contradiction, and you need to relax and think about all the consequences to spot which. It’s almost always one of the premises that needs to go, but sometimes it’s our intuitions that are wonky (see most people’s reaction to quantum mechanics as paradoxical).

      • http://rustbeltphilosophy.blogspot.com Eli

        “And re: Eli’s second point in his reply to you, I think we need to be able to imagine and poke around to spot that a world’s logically impossible.”

        Yeah, see? So the LARPing thing *is* (at least very, very similar to) one of your tools that’s “used to prove something to yourself.”

        • Anonymous

          Constructive proof methods differ from methods for discovering contradictions. Oftentimes, the latter helps us figure out what we’re likely to be able to prove using the former. Most researchers would be lost without the latter. You can’t just, ya know, make up the theorems that you’d like to prove. You generally have to have an idea of what might be possible.

          • http://rustbeltphilosophy.blogspot.com Eli

            I don’t understand a single thing you’ve just said – at least, not in context.

            “Constructive proof methods differ from methods for discovering contradictions.”
            Not least of all in that constructive proofs demonstrate existence or compatibility, whereas contradictive proofs demonstrate nonexistence or incompatibility. But so what? Both are means of “proving something to yourself”; both are epistemological tools; both are used to supply evidence (in a broad sense of the term). Also, I have to point out, Leah originally billed LARPing as a way to “build” something, not as a way to find contradictions. And, last but not least, neither of these is “just about what affect we should attach to counterfactuals when reasoning about them.” So…what’s your point, exactly?

            “Oftentimes, the latter helps us figure out what we’re likely to be able to prove using the former.”
            Sure, and?

            “Most researchers would be lost without the latter.”
            I guess? I’m not sure that this is true, though.

            “You can’t just, ya know, make up the theorems that you’d like to prove.”
            Actually, you can. I don’t know why you think you can’t.

            “You generally have to have an idea of what might be possible.”
            Which you obtain by…imagining things? Sorry, but that’s not how proofs work. Never has been, never will be.

          • Anonymous

            Both are means of “proving something to yourself”; both are epistemological tools; both are used to supply evidence (in a broad sense of the term).

            Yep. You’ve figured it out.

            Also, I have to point out, Leah originally billed LARPing as a way to “build” something, not as a way to find contradictions.

            Finding contradictions is often a useful tool when you’re on your way to building something.

            And, last but not least, neither of these is “just about what affect we should attach to counterfactuals when reasoning about them.”

            …how is learning about the implications of ‘Not A’ not potentially useful when proceeding to prove something involving A? It may not directly lead to the conclusions you want, but it could easily be helpful.

            So…what’s your point, exactly?

            You’re completely right. It’s a tool for learning things… things which could be useful in proofs.

            “You can’t just, ya know, make up the theorems that you’d like to prove.”
            Actually, you can. I don’t know why you think you can’t.

            Think of any theorem. Seriously, pick any one. Now prove it. I’m betting you’re going to spend some time thinking about the ingredients involved. What each piece implies… what the negation of each piece would imply. You’re going to live in those worlds for a little while, and then you’re going to come up with a method of proof. Now, make up your own conjecture. How do you think people have come up with the major open conjectures? They lived in a different world for a while, and realized that something may be provable.

            I don’t understand a single thing you’ve just said – at least, not in context.

            Add the context that Leah likes to talk about moral theory in terms of axioms/theorems, and now you understand everything!

          • http://rustbeltphilosophy.blogspot.com Eli

            “Finding contradictions is often a useful tool when you’re on your way to building something.”
            But it isn’t *part of* building something, any more than filling up your bike tires is a way of biking from one place to another. Try to keep that straight; it’s a fairly basic sort of distinction.

            “And, last but not least, neither of these is “just about what affect we should attach to counterfactuals when reasoning about them.”

            …how is learning about the implications of ‘Not A’ not potentially useful when proceeding to prove something involving A?”
            It depends – if the “implication” is, “Not A makes me sad,” then that’s not potentially useful (mostly because it’s not a real implication). Remember, you made a claim about affect, not effect. (Unless, of course, that was a typo? In which case we can scuttle this line of debate.)

            “It’s a tool for learning things… things which could be useful in proofs.”
            If you can tell me how “Not A makes me sad” is a useful premise in an informative proof for the truth of A, I’ll be super-duper impressed. Otherwise, maybe you should retract this claim.

            “Think of any theorem. Seriously, pick any one. Now prove it…”
            Uhm, this has nothing to do with what you said. You said that you couldn’t just *make up* a theorem, which has nothing to do with proving a theorem.

            “How do you think people have come up with the major open conjectures? They lived in a different world for a while, and realized that something may be provable.”
            At most this proves that you can’t just make up a major open conjecture, not that you can’t just make up any old theorem at all.

          • Anonymous

            But is isn’t *part of* building something, any more than filling up your bike tires is a way of biking from one place to another. Try to keep that straight, it’s a fairly basic sort of distinction.

            Again, we agree. They are not the same thing… but filling up your bike tires sure as heck is helpful, isn’t it? …soo, maybe LARPing could be helpful… without being the same thing as a constructive proof… which is actually exactly what was claimed.

            Remember, you made a claim about affect, not effect.

            Alex made the first statement, I copy/pasted and ignored what I thought was a typo. How about we scuttle it, because I don’t think you’re fighting what you think you’re fighting.

            If you can tell me how “Not A makes me sad” is a useful premise in an informative proof for the truth of A, I’ll be super-duper impressed. Otherwise, maybe you should retract this claim.

            Lolz. I think you’ll want to scuttle this too, unless you really think you can demonstrate that anyone is hinging anything on “something makes me sad”.

            “Think of any theorem. Seriously, pick any one. Now prove it…”
            Uhm, this has nothing to do with what you said. You said that you couldn’t just *make up* a theory, which has nothing to do with proving a theory.

            “How do you think people have come up with the major open conjectures? They lived in a different world for a while, and realized that something may be provable.”
            At most this proves that you can’t just make up a major open conjecture, not that you can’t just make up any old theory at all.

            Perhaps my writing was not clear. My point is that you are horribly unlikely to just guess a conjecture which is provable and interesting… unless you’ve imagined living in the relevant worlds for a while. That is what I meant by saying that you “can’t just make up a theorem that you’d like to prove”. Shooting for the end result without putting in the legwork simply doesn’t work (unless, of course, you’re the luckiest man alive). The tools we use when doing the legwork don’t show up in the concise, published paper. They aren’t even necessary for the proof… but they’re the reason our mind was able to find the proof (and possibly, even the conjecture). Can you distinguish these things… and still accept that the chronologically former can be helpful for accomplishing the latter?

          • http://rustbeltphilosophy.blogspot.com Eli

            “They are not the same thing… but filling up your bike tires sure as heck is helpful, isn’t it? …soo, maybe LARPing could be helpful…”
            facepalm…

            I said that finding a contradiction was like filling up your tires, not that finding a contradiction was something that was likely to happen while LARPing. The whole point here is to figure out what good LARPing is, not to name a good and then assign it to LARPing without really thinking about it.

            “Lolz. I think you’ll want to scuttle this too, unless you really think you can demonstrate that anyone is hinging anything on “something makes me sad”.”
            This is an example of what “affect” means, dude – it’s not my fault that you intentionally copy/pasted something that didn’t say what you meant. I can only respond to what I see on the screen, right?

            “Perhaps my writing was not clear. My point is that you are horribly unlikely to just guess a conjecture which is provable and interesting… unless you’ve imagined living in the relevant worlds for a while.”
            Aha – so now you’re weakening your claim from “you can’t just make up a theory” to “you can’t just make up a provable, interesting theory.” Noted – although, I suppose that I was supposed to just figure that out, too, despite the fact that it’s not what you said?

            At any rate, isn’t this in large part a matter of how long other people have, to use your phrase, “imagined living in the relevant worlds”? What’s “interesting” is not just a function of the nature of the field itself, after all, but also a function of how much progress has already been made in the field. So I might be able to get behind this idea for fields that are old or lines of reasoning that have been well-worn, but even then I would fail to see why this applies universally. A new area of inquiry should have lots of provable, interesting theories that you can just think up – indeed, it seems like it must, or else no new area of inquiry would ever be able to advance.

          • Anonymous

            The whole point here is to figure out what good LARPing is, not to name a good and then assign it to LARPing without really thinking about it.

            I’ll let the original post handle this:
            “The LARP exercise is my way of sidling up on my beliefs so I can notice a weak point [a contradiction perhaps?] on my side or a strength of the opponent before my defenses can engage.”

            This is the good that LARPing is.

            “We don’t usually know our opponent’s arguments as well as we think, so imagining the argument/worldview from the inside sharpens our vision and helps us make sure that we’re addressing the actual strengths and weaknesses of our opponent.”

            This is where we find contradictions. This shapes our future conjectures… and thus, our future proofs and the resulting theorems. Quit ignoring things.

            I can only respond to what I see on the screen, right?

            Sure, but your insistence on a particular meaning of affect is odd. It can be interpreted both ways… even without assuming there is a typo. Regardless, “something makes me sad” wasn’t found on the screen until you put it there. Quit making things up.

            Aha – so now you’re weakening your claim from “you can’t just make up a theory” to “you can’t just make up a provable, interesting theory.”

            Awesome. I hope you love your nonprovable and noninteresting conjectures. I think it goes without saying that we’re assuming that we’re interested in provable and interesting (lolz) conjectures… that’s how we get to having theorems that matter. Of course, conjecture comes before proof even if the conjecture is unprovable or uninteresting. We can conjecture all we want, but if we don’t LARP, we’re probably going to end up with a lot more unprovable conjectures… and never get around to the proof stage… which means we won’t get around to the theorem stage. So my statement holds, even when we break it down into pieces.

            I suppose that I was supposed to just figure that out, too, despite the fact that it’s not what you said?

            I assumed that you understood that conjecture comes before proof when creating a theorem. Thank you.

            I like your statements about old/new fields. As a researcher, it’s really nice to see a new area of investigation where there is a lot of low-hanging fruit available. That being said, even when grabbing the low-hanging fruit, it’s helpful to think about what might be higher in the tree. You can better position yourself to continue climbing.

            Of course, this doesn’t contradict the conjecture-proof chronology. This happens regardless of whether the conjecture is new, old, interesting, or ridiculous. LARPing helps at the conjecture stage, and may aid in developing the proof stage. I’ve said repeatedly that it’s not the same as proof (obviously), but it’s often as useful as airing up the tires.

            Finally, most lines of reasoning in philosophy of religion are old and well-worn… so my screen says that you could probably get behind the idea of LARPing in this case.

          • http://rustbeltphilosophy.blogspot.com Eli

            “I’ll let the original post handle this:
            “The LARP exercise is my way of sidling up on my beliefs so I can notice a weak point [a contradiction perhaps?] on my side or a strength of the opponent before my defenses can engage.””
            This is fantastic! All you’ve done here is repeat Leah’s claim about what she believes LARPing is good for. Maybe this is news to you, but I disagree with that claim. Simply parroting it back to me is not helpful. And you accuse me of ignoring things! The gall of some people…

            “Sure, but your insistence on a particular meaning of affect is odd. It can be interpreted both ways…”
            The fuck? Who are you, Shawn Spencer? “I’ve seen it spelled both ways” is mean to be a joke on that show – you know that, right? There’s only one noun definition for “affect” and it’s the one that I used. Stop being such a clown.

            “Awesome. I hope you love your nonprovable and noninteresting conjectures. I think it goes without saying that…”
            Uh, but obviously it doesn’t go without saying: you had to say it in order for it to go, so it did not go without saying. I think the kids call this “moving the goalposts.”

            “We can conjecture all we want, but if we don’t LARP, we’re probably going to end up with a lot more unprovable conjectures…”
            You keep repeating this as though just repeating it is going to be enough to win me over. Why?

            “I like your statements about old/new fields. As a researcher, it’s really nice to see a new area of investigation where there is a lot of low-hanging fruit available. That being said, even when grabbing the low-hanging fruit, it’s helpful to think about what might be higher in the tree. You can better position yourself to continue climbing.”
            So, again, this is at best a pre-reasoning sort of activity. You don’t find contradictions that way – indeed, you don’t come to any particular findings that way. It’s just inspiration. Yes? If so, I’m perfectly fine with that – but it’s not what Leah said. For the nth time, if this is your strongest argument, you should disagree with her, too.

            “I’ve said repeatedly that it’s not the same as proof (obviously)”
            Okay! So then will you finally admit that you disagree with Leah when she says stuff like: “It can be a bit hard to figure out if you’re accurately imagining what the world would be like if a claim were true. [So] Imagine Christianity (or whatever religion you’re thinking about) is a theological system in a work of fiction and you’re planning to LARP in that setting.” This is a proving sort of thing, because it asks a true/false question (namely, “what the world would be like if a claim were true”). She thinks LARPing can be used to develop accurate pictures of hypothetical worlds, and accuracy is not just a matter of inspiration or imagination. Right?

            “Finally, most lines of reasoning in philosophy of religion are old and well-worn… so my screen says that you could probably get behind the idea of LARPing in this case.”
            See, and this is why moving the goalposts is bad. You’ve now narrowed your scope to interesting and provable hypotheses, yes? But then that eliminates “old and well-worn” hypotheses like ones about “what the world would be like if Christianity were true,” because those are no longer interesting and haven’t been interesting for a long, long time.

          • http://rustbeltphilosophy.blogspot.com Eli

            (Because I can see your response coming: that should be, …meant to be a joke…)

          • http://last-conformer.net/ Gilbert

            Anonymous, your comments are always excellent, but at this point you’re just feeding the troll.

          • Anonymous

            I’ve come to the same conclusion as you, Gilbert… it’s just taken me a bit longer. At this point, it seems like he’s just disagreeing with whether it’s possible to discover contradictions or strengths/weaknesses by LARPing. It’s as if I said, “I sometimes find that trying a Taylor series expansion is useful to see what the dominant terms look like,” and had someone else say, “I disagree.” I’m not sure that he can be convinced until he tries an expansion and actually comes up with an idea because of it (of course, not a proof… but an idea that might help when going on to construct a proof… and for a particular problem, the method could be totally useless or even counterproductive). He can certainly ignore it when other people say that they’ve found it useful, but for the claim he wants to make, he’d need to provide a constructive proof for why the strategy cannot help, despite what seem like pretty useful results by others.

            Even saying, “Look, the results from this Taylor series expansion can be misleading,” doesn’t do the job… because this method of gaining ideas does not replace proofs, as repeatedly said by all parties involved.
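            The Taylor-series analogy can be made concrete with a quick sketch (the choice of sin(x), the sample point, and all the code below are my own illustration, not anything the commenters specified): expanding a function and printing its terms shows at a glance which ones dominate, without proving anything about the function.

```python
import math

def sin_taylor_terms(x, n_terms=4):
    # Maclaurin terms of sin(x): x - x^3/3! + x^5/5! - x^7/7! + ...
    return [(-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
            for k in range(n_terms)]

x = 0.1
terms = sin_taylor_terms(x)

# For small x the first term dominates; each successive term shrinks
# by roughly a factor of x^2, which is what "seeing the dominant
# terms" means in practice.
for t in terms:
    print(f"{t:+.3e}")

# The partial sum is already an excellent approximation, but the
# expansion itself proves nothing; it only suggests where to look.
print(sum(terms), math.sin(x))
```

            As with the bike-tire analogy earlier in the thread, the expansion is scaffolding: it points at the dominant behavior, and any actual claim still needs its own proof.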

  • deiseach

    You’re a lot politer about this than I am. I am so sick of the competitive, ‘who’s got the bigger manhood?’ model of ‘reasoning’ that Eli may (or may not) be advocating here. Boys, what about those of us who can’t whip it out and measure it with a ruler? Does that mean we don’t get to play?

    Sorry if this is too vulgar for the topic, but I really don’t see why hard thinking requires ‘in your face! see my big pointy sharp thing! my brain is sooooo much bigger than yours!’ confrontational posturing.

    Also, to address the point about “what difference would it make to our lives if we had evidence of sentient life on other worlds?”, some people think it would make a fair bit of difference, as being a slap in the face for claims of religion about the special place of humanity in the cosmos (I’m not saying that it would, I’m just saying that some people think it would). Now, maybe this would only apply if one had a faith in God/gods to be shaken by such a discovery, but since a great many of the people on this world do have such a faith, it would be a bit more than just “Ho-hum, I see the daGama probe has made contact with the ant-people of Regulus C – never mind that, is there anything good on telly tonight?”

    • http://moralmindfield.wordpress.com Brian Green

      Just because I know a paper on the discovery of ETI and religious responses, I have to share it!
      You are right that “some people think it would” shake religious belief – the big discovery of the research is that non-religious people (nearly 70%) THINK that religious people couldn’t handle the discovery, while religious people don’t think it would be that big a deal. (Some agree it will shake their faith, but less than 10%.)
      http://www.counterbalance.org/etsurv/PetersETISurveyRep.pdf

      • R.C.

        Yes, I’ve often wondered why the discovery of extraterrestrial intelligences would ever be supposed to be relevant to whether a person found it easy to believe in Christianity, Judaism, Hinduism, or whatever.

        Certainly any religion that has angels in it already believes in extraterrestrial (albeit non-biological) intelligences. So, now, there’d be some biological ones, which is to say: Ones more like us. Yawn.

        Perhaps the concern is that the human imagination can only absorb one “Sky Man” category at a time? But then that’s always been a faulty view of God. I don’t even think three year olds saying bedtime prayers think of God in quite that way.

        Anyway, C. S. Lewis pretty well exhausted all the options in his essay “Religion and Rocketry”:

        http://scientificintegrity.blogspot.com/2010/04/religion-and-rocketry-by-cs-lewis.html

    • http://rustbeltphilosophy.blogspot.com Eli

      “You’re a lot politer about this than I am. I am so sick of the competitive, ‘who’s got the bigger manhood?’ model of ‘reasoning’”

      ??

      Are you, perhaps, confusing my prose with my argumentation? If so, I have to suggest that you perhaps take a step back from your WASPy cultural presumptions and try to look at what I write from a different perspective.

      “Also, to address the point about “what difference would it make to our lives if we had evidence of sentient life on other worlds?”, some people think it would make a fair bit of difference, as being a slap in the face for claims of religion about the special place of humanity in the cosmos”

      This doesn’t really answer my question, though. First of all, I didn’t say that it would make no difference TO KNOW ABOUT or TO HAVE EVIDENCE OF alien life; that is a totally separate claim that has nothing to do with my argument (which you might have noticed if you weren’t too busy casting aspersions on my genitalia). What I said – and I’m quoting myself here, since you apparently are incapable of even that basic action – was: “let’s say that we’re talking about THE POTENTIAL EXISTENCE OF sentient alien species on other planets – is that really something that is at all analyzable in terms of its effect on my day-to-day existence?” (caps added for emphasis) See? Not the same thing.

      So, second, I don’t see how you’ve provided any evidence at all that the existence of alien species would change our day-to-day lives. Think about it (if you can): for all we know, there are alien species out there already, right? We don’t know that there are, but we also don’t know that there aren’t. But religious people are still managing to believe in human exceptionalism just fine. Ergo, for all we know, the existence of alien species has literally no effect on us.

      • R.C.

        Eli:

        In what sense do religious people believe in “human exceptionalism” that would be in any way challenged by the existence of more humans?

        I don’t mean that I think other life forms would have the same physical attributes as us. But presumably they’re either more like animal intelligences, in which case, what’s the big deal? Just more animals. Or, they’re as-or-more intelligent than us, in which case: They’re just more “humans,” however oddly shaped, so what’s the big deal?

        I suppose we could ask, do they stink as badly as we do when it comes to behaving rightly according to whatever moral codes they have? If so, then we’re unexceptional, morally. If not, then we’re exceptional only in a negative sense: Exceptionally unreliable at being good. Both possibilities are pretty plausible.

        But that’s old news for “religious people” if by “religious people” you mean Christians and Jews, at least. One must remember that the medieval cosmology placing earth at the center of the universe was viewed not in a positive way, but the converse: Earth was at the center because it was, so to speak, the a**hole of the universe. Its occupants were not regarded as being of any higher rank in the Order of the Universe than a family of mice whose mouse-hole happened to be in the scullery of a royal palace, in a capital city, of a vast country. (Certain disparaging references to the medieval geocentric cosmology by modern writers show utter ignorance of how the medievals actually felt about that model; I guess they never read Dante.)

        Given all of that, which is ho-hum to religious folk and always has been, I don’t know quite what kind of human exceptionalism you have in mind.

        • http://rustbeltphilosophy.blogspot.com Eli

          “In what sense do religious people believe in “human exceptionalism” that would be in any way challenged by the existence of [aliens]?”

          I don’t know, ask a religious person who believes this sort of thing. (Maybe more to the point, ask the commenter who actually brought this up!) I’m not one myself, and to be honest I find the idea incredibly silly, but I trust religious people when they say that they feel this way.

      • deiseach

        Not a WASP – you got the W part right, but I’m actually WCRC – White Celtic Roman Catholic.

        :-)

      • deiseach

        Why I am not impressed by talk of wisdom as gladiatorial combat or a boxing match: wisdom is meant to be delight, not bashing people around the head with your weaponized, hackles-raised reasoning technique.

        Proverbs 8: 22-31 (Wisdom speaks; emphasis mine):

        22 The Lord possessed me in the beginning of his ways, before he made any thing from the beginning.
        23 I was set up from eternity, and of old before the earth was made.
        24 The depths were not as yet, and I was already conceived, neither had the fountains of waters as yet sprung out:
        25 The mountains with their huge bulk had not as yet been established: before the hills I was brought forth:
        26 He had not yet made the earth, nor the rivers, nor the poles of the world.
        27 When he prepared the heavens, I was present: when with a certain law and compass he enclosed the depths:
        28 When he established the sky above, and poised the fountains of waters:
        29 When he compassed the sea with its bounds, and set a law to the waters that they should not pass their limits: when he balanced the foundations of the earth;
        30 I was with him forming all things: and was delighted every day, playing before him at all times;
        31 Playing in the world: and my delights were to be with the children of men.

        • http://rustbeltphilosophy.blogspot.com Eli

          Aaaaaaaaand the bible is not evidence.

    • http://Geeklady.wordpress.com GeekLady

      My favorite euphemism for this behavior is “comparing DVD collections”. But “who has the bigger pickup truck” works too.

      Re: contacting sentient aliens
      This is where Catholicism really shines. They take questions like this entirely seriously and work out the implications without fuss or posturing. And probably five hundred or a thousand years later, the Catholic response to the question of sentient alien life will be as roundly mocked as the angels on the head of a pin business is today.

      • deiseach

        Speaking of size matters, courtesy of the webcomic Scandinavia and the World, a link to a song about who has the bigger… alphabet.

        Warnings for: swearing, vulgarity, America-bashing, and really cheesy Euro-pop :-)

  • http://rustbeltphilosophy.blogspot.com Eli

    “It might be possible to overwhelm your ugh fields with a frontal assault, but I’m not very good at it. I’d love to have Eli post, here as a guest or on his own blog, about how he trains that strength and applies it.”

    Mkay – I can go into more detail about the weaponized thing later, and in fact I plan on doing so today (albeit in a different context), but as for the strength-training part it would help to have some examples to work with instead of having to make up some of my own off the top of my head. I guess don’t worry too much if you can’t come up with some – it’ll just take me a little more time to write the post – but yeah, I always find that it’s more helpful to start from specific cases.

    “If Eli got to pick, I think he’d rather fight a Christian interlocutor who’s curious instead of just aggressive.”

    Yes – but I would (and indeed do!) prefer one who’s curious *and* aggressive to one who’s just curious or one who’s just aggressive.

    • http://rustbeltphilosophy.blogspot.com Eli

      Ah hell – forgot to add:

      “I don’t think the LARPing strategy or the Leave a Line of Retreat exercise should be used to prove something to yourself.”

      Sorry, but I can’t see how this jibes with what you said about it in that post:

      “It can be a bit hard to figure out if you’re accurately imagining what the world would be like if a claim were true. [So] Imagine Christianity (or whatever religion you’re thinking about) is a theological system in a work of fiction and you’re planning to LARP in that setting.”

      That seems to me like a truth-seeking, evidence-gathering kind of claim, the thing about “accurately imagining what the world would be like if a claim [such as Christianity] were true.” Granted, it isn’t THE question you’re trying to answer, but you are trying to answer A (and indeed a very important) question.

    • leahlibresco

      Great! I’m looking forward to it. You’re welcome to cross-post your guide to bulking up your rationality/argumentative prowess here if you like. I don’t know how much I can help with specific cases for you to explain, since I’m not exactly sure which skills you’re trying to practice.

  • Ted Seeber

    Actually, LARPing homosexuality and being female were two ways I verified Church teaching on sexuality in my 20s. Previous to that, despite being a Cradle Catholic (or maybe because of it, given the incredibly bad catechesis of the 1970s Charismatic Movement combined with the common American misunderstanding of the difference between Eros and Caritas), I was almost an autistic heterosexual predator. Realizing, after reading Evangelium Vitae, that sexuality had a purpose in evolution was a HUGE thing for me. LARPing that as female and homosexual characters helped me internalize the understanding that certain immoral actions were evolutionary dead ends for my DNA.

    • Alan

      Yeah, given some of your warped opinions on sexuality you might need to revisit that fantasy world and try it again.

      • Ted Seeber

        Been there, done that. What you see as “warped opinions on sexuality” I call “freedom from slavery to orgasm”

        • Alan

          Yep, and that is why you might want to revisit your fantasy world. Unless you were actually diagnosed as a sex addict, viewing orgasms as enslaving is quite an unhealthy psychological issue that probably contributes to your deranged beliefs regarding contraception as rape. You might want to get that checked out by professionals.

  • http://last-conformer.net/ Gilbert

    I think some hair-splitting might be helpful here.

    On one hand Eli claims that aggressive/weaponizing debate with high stakes for losing will lead to the most rational evaluation of the claims. That part is of course bullshit; it is empirically clearly established that we are very bad at assimilating arguments for painful conclusions.

    But under that, there is a valid point: it actually does look like our rationality is fundamentally relational, so we are much better at making arguments for our position than against it. And so far the fixes for the less effective side aren’t looking all that effective themselves. Given that, the best evaluation procedure would look more like friendly arguments, where every side is represented by people actually on it but the stakes of changing sides are kept as low as possible and doing so doesn’t feel like losing. So the Less Wrong-y view of rationality as a fundamentally individual project seems misguided too.

    • http://rustbeltphilosophy.blogspot.com Eli

      “I think some hair-splitting might be helpful here.”
      This is pretty comical, given what comes next.

      “On one hand Eli claims that aggressive/weaponizing debate with high stakes for losing will lead to the most rational evaluation of the claims. That part is of course bullshit; it is empirically clearly established that we are very bad at assimilating arguments for painful conclusions.”
      Aggression and painfulness aren’t connected in this way – but keep on splitting those hairs!

      “Given that, the best evaluation procedure would look more like friendly arguments, where every side is represented by people actually on it but the stakes of changing sides are kept as low as possible and doing so doesn’t feel like losing.”
      Do you have an example? Cause I think that professional philosophy sort of works this way, except it is absolutely not “the best evaluation procedure.” It is, in fact, a fairly terrible evaluation procedure.

      • http://branemrys.blogspot.com Brandon Watson

        I take it you don’t know many professional philosophers outside the classroom.

        • http://rustbeltphilosophy.blogspot.com Eli

          Sorry, why would that be relevant?

          • http://branemrys.blogspot.com Brandon Watson

            I agree it can be confusing, so I will explain. There are these speech acts that normal people call ‘jokes’, which are ritual social interactions used for various social functions, including (among many others) relieving tensions, providing temporary respite in serious discussion, or making it easier to interact with unpleasant people; and these ‘jokes’ come in many different forms. One such ‘joke’ pattern is, given someone saying “X usually does this”, to respond, “Apparently you don’t know X under such-and-such conditions.” A ‘joke’ is generally called a ‘good joke’ if it typically elicits a positive immediate reaction; and it is generally called a ‘bad joke’ if it is not the kind of speech act that can usually be expected to elicit a positive immediate reaction. The ritual social interaction is completed by what is usually known as “the response to the joke”. There are a number of “responses to the joke” that are taken to be legitimate completions of the ritual, including laughing, groaning, rolling one’s eyes, or, when the responding party judges it to be a ‘bad joke’, ignoring it completely. There are other possible “responses to the joke”, although in general “Why is that relevant?” is not considered an appropriate ritual response; I believe anthropologists have found that among certain tribal cultures it is taken as permission to conclude that the person giving this ritual response is taking the immediate social situation more seriously than is reasonable.

          • http://rustbeltphilosophy.blogspot.com Eli

            “A ‘joke’ is generally called a ‘good joke’ if it typically elicits a positive immediate reaction; and it is generally called a ‘bad joke’ if it is not the kind of speech act that can usually be expected to elicit a positive immediate reaction.”
            Then it seems you have some fine-tuning to do with respect to your sense of humor…

      • http://last-conformer.net/ Gilbert

        Yes, they are. If you seriously want to deny that the pain you would get from falling from your high horse would be greater than that of friendlier people correcting themselves, well, at that point I think the audience can estimate the plausibility of that for itself and further talk won’t help.

        I don’t think this is the problem with modern professional philosophy. Natural science works that way to about the same extent philosophy does and is generally seen as an excellent evaluation procedure.

        • http://rustbeltphilosophy.blogspot.com Eli

          “If you seriously want to deny that the pain you would get from falling from your high horse would be greater than that of friendlier people correcting themselves, well at that point I think the audience can estimate the plausibility of that for itself and further talk won’t help.”

          This is simply amazing! You’re saying that you honestly think that you can psychoanalyze me over the internet without ever having met me and without knowing any significant information about me. Moreover, you think that you can do this so well that you will literally know me better than I know myself. And you think that *I’m* the one with chutzpah?!

          “I don’t think this is the problem with modern professional philosophy. Natural science works that way to about the same extent philosophy does”
          HA! Sure – cause, y’know, all those empirical, controlled, repeatable philosophy experiments are really driving the field forward like they do in the sciences. Don’t be ridiculous.

          • Brandon B

            Gilbert is not psychoanalyzing you over the internet, because he’s describing something that is basically true of everyone. If your discussion is adversarial, you are more likely to have a strong emotional attachment to your position, which would lead to being irrational. If, instead, you have a friendly discussion, or are already friends with the person you’re talking to, you’re more likely to take their position seriously and give it an honest appraisal.

            On a related note, please stop peppering your responses with insults and dismissive comments. I personally find it annoying, and it can’t be good for your attempts to be rational.

          • http://rustbeltphilosophy.blogspot.com Eli

            “Gilbert is not psychoanalyzing you over the internet, because he’s describing something that is basically true of everyone.”
            But how does he know that I’m strongly emotionally attached to my position? He has no idea whether that’s true or not – and neither, by the way, do you (not least of all because it’s not true).

            “If, instead, you have a friendly discussion, or are already friends with the person you’re talking to, you’re more likely to take their position seriously and give it an honest appraisal.”
            Or, alternatively, you’re more likely to take their position too lightly and give it more credit than it deserves, as we’ve already seen in this comment thread.

            “On a related note, please stop peppering your responses with insults and dismissive comments. I personally find it annoying, and it can’t be good for your attempts to be rational.”
            I’ll make you a deal: I’ll stop berating you people when you stop trying to read my mind over the internet. Fair? It’s not that I care what you find annoying – I find you annoying, but I’m still here trying to take you seriously; the least you could do is return the favor. But I have no way of disabusing you people of your misconceptions about me personally: you’re dead-set on ignoring all the relevant evidence there because it fails to comply with your predecided psychological paradigm. So let’s try to focus on areas where the relevant evidence is mutually available, shall we?

          • Brandon B

            I am currently in law school, and so I am preparing to spend an entire career arguing with people professionally. I just got done listening to veteran lawyers tell us their “war stories” from years of practicing law, and they unanimously advised us, “Don’t be an asshole.” When the trial is over, we still have to deal with each other in our professional lives, and you never gain anything by pissing people off. Even something as relevant as calling the other lawyer a liar is just too unprofessional for a serious lawyer to consider. I realize that arguing to convince a third party is a somewhat different activity than arguing to convince the person you disagree with, but the argument is almost beside the point. The way that you are talking to me strikes me as very immature. The fact that you don’t care about whether you annoy me is a failure to respect me as a human being.

            I care that I annoy you, and I wish I didn’t annoy you, and I wish this were a more amicable conversation. The primary reason that I want our discussion to be friendly is that I generally would rather be friends with people than enemies, regardless of whether we disagree about philosophical matters.

          • http://rustbeltphilosophy.blogspot.com Eli

            “When the trial is over, we still have to deal with each other in our professional lives, and you never gain anything by pissing people off…”
            Pot/kettle etc. More helpfully, though, you and I very clearly have different priorities – I’m willing to sacrifice group cohesion for my access to the truth, and you apparently aren’t. That’s not a judgment in and of itself – although I’ll bet you can guess how I feel about it! – but it’s also not surprising, at least on my end, so it’s not likely to sway me one way or the other. Or, to paraphrase the immortal words of Michael Stipe: try to tell me something I don’t know.

            “The fact that you don’t care about whether you annoy me is a failure to respect me as a human being.”
            Yeah, and the fact that you aren’t willing to take me seriously unless my prose is like cotton candy indicates that you don’t respect me as a conversation partner. Again, I’m not trying to get into a conversation about which (if either) is worse, all I’m saying is that I already have reasons that are analogous to your reasons, so this is going to end in a stalemate.

            “The primary reason that I want our discussion to be friendly is that I generally would rather be friends with people than enemies, regardless of whether we disagree about philosophical matters.”
            Great – I’m not opposed in principle to us being friends. (I mean, what – you think I don’t disagree with any of my friends about anything?) But I somehow doubt that our potential friendship is going to get off to a good start if you insist on correcting me about how I feel about this or that issue.

          • http://branemrys.blogspot.com Brandon Watson

            So let’s try to focus on areas where the relevant evidence is mutually available, shall we?

            This is only a reasonable action if both parties are arguing in good faith and making a genuine effort to see the evidence the other side is presenting. If either party is arguing in bad faith, or being willfully stupid, this is an obviously irrational strategy, since it holds the discussion hostage to the person who can be most intellectually dishonest and unreasonable. If we accept your argument so far, we can’t assume that you are either reasonable or intellectually honest, since this would require us either to assume a generous principle of charity or to draw upon our pre-existing psychological paradigm; so the rational strategy is to ignore what you would consider mutually available rational evidence, and simply focus on the evidence we have available. And what is actually available here is not as charitable to you as you are claiming.

            (1) We have repeated instances of you responding not with actual argument but personal attacks.
            (2) We have an obvious overreaction to Gilbert’s statement, which you’ve obviously misinterpreted, as Brandon B pointed out. Gilbert didn’t say anything that can be interpreted as psychoanalyzing you (coming to that conclusion would require reading Gilbert’s mind over the internet). Rather, he was framing an abstract scenario and just so happened to have the misfortune (which no reasonable person could have foreseen) of framing it in second person. English allows this, of course; if you say to me that you think aggressive behavior towards one’s friends is often justified and I say, “Well if you are an anti-social person who wants to alienate his friends, that makes sense,” it would be an irrational response to claim that I accused you of being anti-social.

            Oh, wait; there I framed an abstract scenario in second person again. Whoops! I await your overreaction and a long list of accusations.

            (3) Then you did exactly the same thing to Brandon B’s attempt to explain that you had misunderstood Gilbert’s point, and to his relatively polite warning that your responses are part of a pattern that usually suggests either irrationality or bad faith: misinterpretation, overreaction, and accusations. And that’s not even counting the same pattern of misinterpretation, overreaction, and accusations in other parts of this comments thread.

            Of course, now that I’ve come along and am deliberately mocking you for repeatedly acting in ways that are typically irrational, and for pretending that this gives anyone any evidence that you are taking the question seriously as you claim to be, I have saved you a great deal of time; instead of misinterpreting me first, you can simply jump to the overreaction and accusations.

            However, in the interest of discussion, let me try an experiment: I’ll reiterate both Gilbert’s and Brandon B’s points in a different way, and see where it gets us.
            “If you seriously want to deny that the pain you would get from falling from your high horse would be greater than that of friendlier people correcting themselves, well at that point I think the audience can estimate the plausibility of that for itself and further talk won’t help. ”
            = (1) If some person, let’s call them person A, wishes to deny that the detriment of acting on the assumption that other people, let’s call them collectively B, are reasonable and honest until definitely proven otherwise, let’s call that ‘arguing in a friendly way’, is greater than the detriment of people who exhibit ‘arguing in a friendly way’ engaging in the activity of self-correction, then at that point other people, let’s call them collectively C, can reasonably assess this as evidence that A is not acting in a way suggestive of rationality and that further rational discussion is not possible.

            (Then Eli jumps in, saying, “How dare you psychoanalyze A!” At which point we get Brandon B.)

            “Gilbert is not psychoanalyzing you over the internet, because he’s describing something that is basically true of everyone. If your discussion is adversarial, you are more likely to have a strong emotional attachment to your position, which would lead to being irrational. If, instead, you have a friendly discussion, or are already friends with the person you’re talking to, you’re more likely to take their position seriously and give it an honest appraisal.”
            = (2) The statement indicated by (1) is not an example of psychoanalyzing A, but is instead the statement of a principle that generally applies. If A fails to exhibit ‘arguing in a friendly way’ this is typically evidence that A has a strong emotional attachment to A’s position, and is not being rational. If, on the other hand, A exhibits ‘arguing in a friendly way’, or actually has an emotional attachment to B as positive and worthy of certain kinds of respect-behavior, which we will call ‘being friends with B’, this is typically evidence of at least the attempt to argue rationally and in good faith.

            (And then Eli jumps in and says, “But how do either of you know anything about A?” At which point I, of course, jump in and start mocking Eli for engaging in irrational behavior in public where everyone can see the evidence of it, thus handily saving Eli any need to misinterpret arguments as attempts to mock him, by going straight for the mocking. What can I say? I feel like being an enabler today; if someone is in such a hurry to get to overreaction and accusation that he has to misinterpret arguments that most people can follow perfectly well, well then, why not save him the trouble and give him the same arguments but in a form which he can correctly interpret as mocking him?)

          • http://rustbeltphilosophy.blogspot.com Eli

            “(1) We have repeated instances of you responding not with actual argument but personal attacks.”
            The f you do. You have repeated instances of me responding with BOTH actual argumentation AND personal attacks, but you don’t have even one example of me opting for the latter to the exclusion of the former.

            I’ll skip (2) to get to (3), since (2) is only meaningful if you’re right about (3).
            “(3) Then you did exactly the same thing to Brandon B’s attempt”
            How so? I dunno if you noticed this or not, but Brandon seems to have figured out what’s going on, which indicates to me that I was successful in communicating something to him. Moreover, I don’t see an argument here to the effect that I was wrong. So, your point is…?

            What’s worst about all of this, though, is that you continue to think in the charity paradigm even after you say that you’ve left it behind. To wit:
            “If we accept your argument so far, we can’t assume that you are either reasonable or intellectually honest, since this would require us either to assume a generous principle of charity or to draw upon our pre-existing psychological paradigm…And what is actually available here is not as charitable to you as you are claiming.”

            In other words, “If we reject the principle of charity as a good rule for debate, then we can’t trust you in a debate because you’re not acting charitably.” I hate to tell you this, Brandon W, but this is not exactly a persuasive argument. I know that I lose the charity/get-along/be-a-nice-guy game, you don’t have to remind me of that. What I want to know is whether I win the reasoning game, and so far I’m pretty sure I do.

          • http://branemrys.blogspot.com Brandon Watson

            (1) How so? I dunno if you noticed this or not, but Brandon seems to have figured out what’s going on, which indicates to me that I was successful in communicating something to him.

            I specifically noted that that was false by translating Brandon B’s original argument into different words; from which the reasonable conclusion arises that you have no actual ability to follow a course of reasoning unless spoonfed.

            (2) What’s worst about all of this, though, is that you continue to think in the charity paradigm even after you say that you’ve left it behind.

            I did not at any point indicate that I’ve “left it behind,” so we’ve got yet another evidence of your misreading. Actually, I have two paradigms: one for people who preliminary evidence suggests are reasonable, which is the category “Reasonable People”, and one for people who preliminary evidence suggests are incapable of following basic arguments, which is the category “Irrational Kooks”. Given certain ethical considerations, people for whom I lack sufficient evidence to make a preliminary judgment are treated as putative citizens of the Reasonable People category; which is a principle of charity. I never leave behind this principle of charity; everyone starts in the Reasonable Person box and only evidence accessible to the public moves anyone to the Irrational Kook box. I was simply signaling to everyone else — signaling is a thing primates do in general social interactions — that I judged that the evidence was sufficient for moving you from the Reasonable Person box to the Irrational Kook box, so that if there are any other primates who are considering doing so, we can do so together — social cooperation being a standard primate thing.

            And, since you’ve shown that you have some problems interpreting complicated speech acts previously, I will simplify things by making it all unambiguous: Yes, I am mocking you for acting like an irrational kook.

            The evidence for this was provided in the previous comment, not in the first part, in which I was merely mocking, but in the second part, in which I identified your two cycles of misinterpretation, overreaction, and accusation, and then actually laid out, in one possible form, correct interpretations of the passages you had obviously misinterpreted. And, quite noticeably, you have avoided talking about that altogether in your comment, which consists mostly of just re-stating your obvious misinterpretation, showing yet again that you can’t even grasp when an argument is being given, misinterpreting mockery as an attempt to make a persuasive argument rather than, I don’t know, mocking you, and then ending, of course, with your boast:

            (3) “What I want to know is whether I win the reasoning game, and so far I’m pretty sure I do.”

            Yes, this is the standard assessment irrational people give of their own reasoning. Rational people typically check with other people before they make this assessment, to prevent the assessment from being overly influenced by bias; the irrational kook, operating on an unshakeable presumption of their own rationality, always interprets the evidence as evidence of “winning the reasoning game”. Indeed, talking about “winning” in the context of reasoning is a common symptom of irrationality; there is no winning or losing in reasoning. Or, if one must talk about winning, people win simply by doing it and lose by not doing it. This has been the usual way of thinking about it since Plato first suggested it in a dialogue about a man named Socrates, arguing, against certain groups of people called Sophists and Rhetors who argued in order to win the game, that, to the limited extent there was any such winning in argument, being proven right is winning and being proven wrong is just a different form of winning, and that the only failure is not to engage in argument in order to come to a better understanding. Thus one wins in reasoning by actually reasoning; for instance, when someone obviously misinterprets someone else, by showing that there is another obvious interpretation than the one they gave. And one loses in reasoning only by refusing to reason; for instance, by simply restating their position and declaring themselves the winner.

            And, again, to avoid the trouble that comes with your repeated misinterpretations, I will be clear: (1) I am still claiming that there is objectively available evidence that you are behaving irrationally, which I previously pointed out; and (2) I have drawn from your comment here yet another evidence of it, which is that you have yet again failed to engage with the actual argument given, preferring simply to restate your original misinterpretation of Brandon B by psychoanalyzing him and reading his mind over the internet, and that the position on reasoning you expressed is one which has long been recognized as a rational fugue state, that is, a state in which irrational people guarantee that they will remain irrational by making it impossible for them to reason themselves out of their irrationality; and (3) I am still mocking you for it, because no one is such an obvious clown as the person who thinks that the goal of reasoning is to win, and achieves this ‘goal’ by declaring themselves the winner. Carry on, Polus, young colt, genius of the internet, giant of our age, Winner of the Reasoning Game (if you give me a mailing address and three cereal box proofs-of-purchase, I’ll send you your trophy, so you can show it as further proof); let’s hear about how much more rational you are than everyone else while you pick and choose phrases to respond to rather than address the arguments given.

          • http://rustbeltphilosophy.blogspot.com Eli

            …yeah, I rest my case with you, Brandon W. If you’re not the living reductio ad absurdum of this whole sordid enterprise, I don’t know who would be.

  • Gordon

    When I left religion it was partly because I realised that I’d been LARPing the whole time.

  • V

    So… Eli? If your approach is entirely based on reason, what is your basis? Where do your axioms come from? You can’t derive axioms from reason alone; they have to come from somewhere.

    Most people (i.e. the Natural Man, that is, one untrained in reason and logic) get them from their assumptions or spur-of-the-moment gut feelings, then use a rude logic as a means to support them. Axioms are by nature things you can’t prove or disprove, so logic is a dull tool for analysis of the same.

    Christians get their axioms from their religion. So they are right out there in the open to be speared like ripe fruit. Philosophers get them from whatever philosophy they prefer. Thought experiments, which have had use in science (see ref Physics, Geometry, etc.) are a system developed by philosophers. Reason is untrustworthy if it is based on faulty axioms. If you have a superior method, I’m eager to hear about it.

    If you don’t have a way to vet axioms, and no set of principles, then you may move your belief-goal posts any time it is convenient for the argument. This is called sophistry. It is a great deal of furious cleverness with no substance. Our modern term for this is Critical Theory, which is simply a new name for an old way to destroy meaning.

    Did you know that soldiers use role playing for tactics exercises? So do emergency medical technicians and other rescue workers to plan for natural disasters and to fight terrorism. Not everybody who sits around a gaming table swigs beer and plays D&D. (Not that there is anything wrong with either!) If you set up your world correctly in the first place, you can learn a lot about various approaches to the world. Indeed, you can be pretty freeform and get some good things to test out later.

    Using a LARP as a testbed is like combining imagination and social interaction (thus you have controls for behavior of other humans) in a complex way to get feedback and enjoyment. Some people take the LARP world more seriously than others, so YMMV. But it is just another way of using the imagination to create a facsimile of some world or other. You seem to suggest that because this is an imaginative endeavor it is untrustworthy.

    If using the imagination as a tool is eminently untrustworthy, then you are saying that all thought and human endeavor is untrustworthy. Anything you learn from stories, novels and the like is untrustworthy.

    This assumption seems to lead to a dull and sterile thought process. The human mind is rigged to learn from stories.

    LARP combines story and actual experience to a high degree. It may be a fake world, but real people are still interacting with each other, and they wouldn’t be doing it if it weren’t somewhere based on some real experience. Perhaps not everybody in the group has an accurate picture of reality in their minds, but they each contribute a realistic corner of their personal lives into the game, which adds to its vitality and accuracy. Just like acting is much more satisfying when you base it on similar emotional experiences you had in real life.

    If you can’t use the imagination, where do new ideas come from? Testbeds don’t necessarily have to reflect reality. They just have to spur the thoughts to new and interesting directions where they can be vetted later for accuracy. (And generally vetted by thought experiments, which use imagination!) Oh, and you can have fun along the way as well.

    So fine, call LARPing silly if you want. But be careful when it comes to narrowing the tools of the mind.

    And, to be fair, anybody and everybody is capable of self-delusion, no matter their beliefs, philosophy or scientific training. I find your assessment that LARPing is especially prone to self-delusion unconvincing. It is a tool like anything else. You can even use mathematical equations to support falsehoods.

    • http://rustbeltphilosophy.blogspot.com Eli

      “So… Eli? If your approach is entirely based on reason, what is your basis? Where do your axioms come from?”
      It depends on the axiom, right? Some of them are fairly hard to do without (either p or not-p), others are determined conventionally (a basket scored within the 3-point line is worth 2 points; one scored outside it is worth 3), still others are ones that I pick up from trusted sources (some hard science that’s over my head falls into this category), and there are probably other options. If you can be more specific, I can probably get you a more satisfying answer.

      “Axioms are by nature things you can’t prove or disprove, so logic is a dull tool for analysis of the same.”
      Uh, no? An axiom is something that’s taken for granted at the start of a reasoning process and that appears to be fundamental to that reasoning process, but that doesn’t mean that it’s impossible to disprove or dislodge.

      “Thought experiments, which have had use in science (see ref Physics, Geometry, etc.) are a system developed by philosophers. Reason is untrustworthy if it is based on faulty axioms. If you have a superior method, I’m eager to hear about it.”
      Please don’t conflate thought experiments with reason in the realm of philosophy.

      “Did you know that soldiers use role playing for tactics exercises?”
      Yeah, but that’s because they’ve already established some reliable boundaries for what makes a training exercise useful or not and are only interested in producing the most effective training possible. Again, this is precisely what’s missing in the case of philosophical LARPing: we come in knowing neither the proper boundaries nor the proper end result of the “training.” It’s apples and oranges.

      “Using a LARP as a testbed is like combining imagination and social interaction (thus you have controls for behavior of other humans) in a complex way to get feedback and enjoyment.”
      Right, and? I’m not against LARPing per se, if that’s what you’re getting at. It’s just that feedback and enjoyment don’t track the truth particularly well.

      “You seem to suggest that because this is an imaginative endeavor it is untrustworthy.”
      But the issue of trustworthiness never comes up in a traditional RPG setting! It would, I guess, be accurate to complain that D&D doesn’t give you a realistic picture of the world, but such a criticism would be inapt of D&D. It would not, however, be a similarly inapt criticism to make of epistemological LARPing.

      “If using the imagination as a tool is eminently untrustworthy, then you are saying that all thought and human endeavor is untrustworthy.”
      Oh goodness gracious, you can’t possibly mean this. Either that or you’re a cartoon character.

      “Anything you learn from stories, novels and the like is untrustworthy.”
      As knowledge, yeah. The “learning” that happens through art would more accurately be described as maturation (or, attempting to remove the normative aspect, personal development). That is “learning” of a sort, but it’s not a philosophical sort.

      “Perhaps not everybody in the group has an accurate picture of reality in their minds”
      Gee, really? People who pretend to hunt dragons and slay animated skeletons might not have an accurate picture of reality in their minds?

      “…but they each contribute a realistic corner of their personal lives into the game which adds to it’s vitality and accuracy. Just like acting is much more satisfying when you base it on similar emotional experiences you had in real life.”
      Man, I dunno who you roleplay with, but either they are outstanding storytellers or you have very low standards.

      “If you can’t use the imagination, where do new ideas come from?”
      Who said that you couldn’t use your imagination to come up with new ideas? What I’m saying is that your imagination isn’t a good way to *test* ideas.

      “They just have to spur the thoughts to new and interesting directions where they can be vetted later for accuracy. (And generally vetted by thought experiments, which use imagination!)”
      Again, please stop it with the thought experiment thing. They don’t work in philosophy, and to the extent that they work in other areas they only work as the prelude to a real epistemological method.

      “So fine, call LARPing silly if you want. But be careful when it comes to narrowing the tools of the mind.”
      If you want to defend this as a “tool of the mind,” why do you keep talking about how fun it is? Fun is not an epistemological result.

      “I find your assessment that LARPing is especially prone to self-delusion unconvincing. It is a tool like anything else. You can even use mathematical equations to support falsehoods.”
      Your logic here is screwy: the argument isn’t about what can be used to support falsehoods, it’s about what can be used to support truths. The fact that math can go wrong is, therefore, not relevant.

  • http://thoughtfulatheist.blogspot.com/ Jake

    It seems like we have pretty good anecdotal evidence that Eli’s approach of aggressive argument coupled with personal attacks and insults isn’t working, at least on commenters here. If his goal is to convince other people, it seems like he would be wise to re-evaluate his tactics- but if his goal is simply to arrive at the truth for himself, then ridiculing ideas he doesn’t agree with seems like fair game. That said, aggression is not equivalent to insult. We have plenty of examples of aggressive debaters who don’t take blatant potshots at their conversation partners (see Leah of this blog, or Dan from Camels with Hammers).

    I’m not a huge fan of this aggressive style, but I’ll roll with it:

    human reasoning faculties are practically weaponized in their forms and functions.

    Um… evidence? I’m not sure what it means for reasoning to be “weaponized”. If he means reason is necessarily aggressive, then he’s either misunderstanding the common vernacular of “reason” or of “aggressive”. Reason is inherently emotionless. If you introduce emotion to the equation, you’re getting farther from finding out true facts about reality, not closer. At best, you’ll discover true facts about yourself; at worst, you’ll pigeonhole yourself into being committed to your argument on principle because of the investment you’ve made arguing for it- vis-à-vis the choice blindness study Eli himself cited.

    “But because it’s unpleasant to be at the pointy end of the stick, we rarely turn these weapons on ourselves, which means that we rarely force ourselves to achieve a rational response.”

    What on earth prompts Eli to claim that turning these “weapons” (whatever that means) on ourselves forces us to achieve a rational response? The evidence is pretty clear- when challenged by facts that contradict our preconceived notions, humans do everything in their power to avoid considering the facts. When challenged by aggressive and blatantly insulting statements, human beings tend to tune out the interlocutor doing the insulting. More to the point, if you need intellectual bullying to win an argument, then you’re doing reason wrong. It’s pretty transparent to claim you’re just aggressively pursuing truth while simultaneously making personal attacks designed to demean and discredit your debate partner. You’re pulling a reverse argument-from-authority: since you’ve so thoroughly discredited your opponent, nothing he says could possibly be worth listening to (I’m sure I’ve heard of a term/cognitive bias for this, just can’t seem to find it right now).

    We do this least of all when we assume the truth of an idea and engage in mere low-stakes, just-go-with-it play-acting – when, for instance, we want to build a relationship, or “curiously” explore an imagined world, or defend an idea that we attribute to ourselves (even if, as in the experiment, that attribution is fictitious).

    Really? Is he really claiming that we’re better at high-stakes reasoning than low-stakes reasoning? I would love to see some evidence for this, because that would pretty much flip my understanding of human psychology on its head. Let me put this as clearly as I can: for the purposes of rational argument, emotional attachment is a bad thing. Making something high-stakes doesn’t make it easier to suss out the implications, it makes it harder. The point of leaving yourself a line of retreat- or of LARPing the hypothetical world in which X is true- is to consider the conclusions of a given belief without all the emotional baggage we have attached to it.

    In short, the evidence indicates overwhelmingly that we reason best when we reason competitively, and that competitive reasoning (just like competitive anything else) usually only takes place when there’s someone else to compete with.

    Well it really depends what he means by “competitively”. If he means “take into account multiple viewpoints and decide which has the most merit”, then absolutely- you need to be exposed to the correct theory before you can realize it’s the correct theory, unless you want to go about re-deriving every fact about reality that humanity has come up with. But if he means “character assassination, belittling an opponent personally, and being dismissive of any theory that isn’t identical to the one you currently hold”, then he’s full of it- the evidence shows the exact opposite.

    Frankly, I’m still trying to parse out why exactly Eli has such a problem with thinking about the “hypothetical world in which X is true”. He’s correct that if X is some absurd or nonsensical belief, then spending any real time on this exercise would be a waste of effort. But most religions, while they might be wrong, aren’t absurd and nonsensical- at least not on the surface, not on the level that they can be dismissed without actually thinking about them.

    If Eli’s gripe is that he legitimately thinks that any system that posits a supernatural reality is rejectable as a basic premise, then I suppose that’s fine for him, so long as he recognizes that most people don’t agree with him. Again, he could quibble with those people (and give actual evidence as to why all religion can be rejected out-of-hand), but given that they don’t share his view, it’s bizarre for him to claim that they shouldn’t think about what the hypothetical world in which religion X was true would look like. How else would you even conceivably go about testing which world you live in?

    This is exactly how science works- we think about the hypothetical world in which theory A is true, and the hypothetical world in which theory B is true, and then we go do an experiment to see which world we actually live in. That theory B happened to be wrong- and therefore hypothetical world B doesn’t actually exist- doesn’t mean we could have rejected theory B before we did the experiment. Thinking we could is called hindsight bias, and it’s a great way to convince yourself that your philosophical system explains everything perfectly, even when it doesn’t.

    • http://rustbeltphilosophy.blogspot.com Eli

      “It seems like we have pretty good anecdotal evidence that Eli’s approach of aggressive argument coupled with personal attacks and insults isn’t working, at least on commenters here. If his goal is to convince other people, it seems like he would be wise to re-evaluate his tactics- but if his goal is simply to arrive at the truth for himself, then ridiculing ideas he doesn’t agree with seems like fair game.”
      Quite.

      “Reason is inherently emotionless.”
      But I didn’t say “reason,” did I? I said “human reasoning faculties.” You appreciate the difference between the two, yes?

      “What on earth prompts Eli to claim that turning these “weapons” (whatever that means) on ourselves forces us to achieve a rational response? The evidence is pretty clear- when challenged by facts that contradict our preconceived notions, humans do everything in their power to avoid considering the facts.”
      So I claim that self-criticism is an effective reasoning tool, and your rebuttal is, “External criticism is not an effective reasoning tool”? Do you see why I might not be convinced by this?

      “Really? Is he really claiming that we’re better at high-stakes reasoning than low-stakes reasoning?”
      I’m claiming that the ceiling is higher. The floor may be lower – and so if your reasoning capacities are below a certain threshold, it may be in your best reasoning interests not to follow my advice – but the ceiling is higher, and if you think that you’re serious about this then that should be the only thing that matters.

      “Well it really depends what he means by “competitively”. If he means “take into account multiple viewpoints and decide which has the most merit”, then absolutely”
      Glad we agree.

      “Frankly, I’m still trying to parse out why exactly Eli has such a problem with thinking about the “hypothetical world in which X is true”. He’s correct that if X is some absurd or nonsensical belief, then spending any real time on this exercise would be a waste of effort. But most religions, while they might be wrong, aren’t absurd and nonsensical- at least not on the surface”
      But how is “the surface” defined? It is, after all, not a fixed or static thing. Scientology is absurd on its face; why is Christianity not? Is it just this: “most people don’t agree with him”? If so, I’m not impressed.

      “it’s bizarre for him to claim that they shouldn’t think about what the hypothetical world in which religion X was true would look like. How else would you even conceivably go about testing which world you live in?

      This is exactly how science works”
      Except, y’know, for the whole rest of the scientific method – y’know, the controlled, empirical, repeatable test part, which is designed to attempt to falsify a well-phrased hypothesis. But yeah, other than that? It’s “exactly” the same.

      • http://thoughtfulatheist.blogspot.com/ Jake

        But I didn’t say “reason,” did I? I said “human reasoning faculties.” You appreciate the difference between the two, yes?

        Indeed. But I’m saying that emotion corrupts the human reasoning faculties rather than enabling them. I’m not arguing for some platonic form that “human reason” is supposed to take- I’m saying cognitive biases are real things, and they’re made worse when we try to “weaponize” reason. Human reason is in fact diluted by emotional attachment.

        So I claim that self-criticism is an effective reasoning tool, and your rebuttal is, “External criticism is not an effective reasoning tool”? Do you see why I might not be convinced by this?

        Yes, I can see why you may consider it bad form to conflate the two, if that’s what you think I’m doing. But I’m not at all claiming “criticism is not an effective reasoning tool”- in fact, it’s one of our only good reasoning tools. If we’re not allowed to look for weak spots period, we don’t have much hope of finding the truth.

        But you seem to be applying the axiom “we need to be critical of beliefs” and arriving at the conclusion “…so I can basically say whatever I want in the course of an argument, and as long as I’m aggressively critical enough, reason will win out in the end”. I suppose if you’re just applying this to yourself (and as I now reread it, it seems like that’s what you’re advocating for, if not what you’re actually doing) then I suppose you should be as aggressive as you can handle. The problem is that if you ever do it outside your own head (as we’ve seen in this comment thread), it prevents you from interacting profitably with basically everyone else on the planet. You might be comfortable with this style of ultra-aggressive argument, but most people aren’t. Perhaps you are smart enough that you will not learn anything by interacting with other smart people anyway- but I am not.

        I’m claiming that the ceiling is higher. The floor may be lower – and so if your reasoning capacities are below a certain threshold, it may be in your best reasoning interests not to follow my advice – but the ceiling is higher, and if you think that you’re serious about this then that should be the only thing that matters.

        Sorry, what? What’s this about reasoning ceilings and floors? Are there walls too? And where can I sign up for a reasoning trampoline?

        You seem to be saying that, assuming we meet some threshold of intelligence, then we’re better at reasoning when we’re more invested in the outcome. So far as any cognitive science I know of goes, that’s just flat out wrong. We might be more attentive if the stakes are higher, but we’re decidedly worse at reasoning. The more invested we become in an idea, the more our identity is tied up in being a “Christian” or “Liberal” or “American”, the more we’re willing to do to protect a bad belief. Do you actually disagree with this sentiment, or am I wildly misunderstanding you?

        But how is “the surface” defined? It is, after all, not a fixed or static thing. Scientology is absurd on its face, why is Christianity not? Is it just this: “most people don’t agree with him”? If so, I’m not impressed.

        So I take it this is your real complaint, that Christianity is so absurd that even thinking about the way the world would look if it were true is so nonsensical as to make your reasoning explode? Lots of true things are absurd on their face- quantum mechanics being the tried and true example. I’m not saying we should give every absurd-sounding theory equal standing, but I am saying that the fact that billions of people believe something is evidence. It’s a fact about reality. It doesn’t make any of it true, but it sure seems like it makes it worth figuring out what part of it exactly you reject, rather than saying “haha, silly religious people. Gods are for non-feasible universes!” But then, I’m more of an empiricist than a rationalist anyway, so perhaps this is a point we’ll simply have to disagree on.

        Except, y’know, for the whole rest of the scientific method – y’know, the controlled, empirical, repeatable test part, which is designed to attempt to falsify a well-phrased hypothesis. But yeah, other than that? It’s “exactly” the same.

        Yep. You caught me. I didn’t condense all of scientific theory into a single paragraph. I just stuck to the part that was relevant to the point I was making.

        And I doubt it’s a point you’ll disagree with- this is how science works. Certainly it’s not all of science- there’s the stuff you mentioned plus lots more about methodology, statistical significance, and which scientific journals you can manage to get published in. But I’m sticking to my guns here- at its heart, science is about making a prediction based on a hypothetical world in which a theory is true, and testing whether or not that prediction comes true. Science could not work unless we had competing hypotheses- competing hypothetical worlds. I’m definitely not claiming that LARPing is science; I’m saying it’s a form of empiricism. “Try it out and see what happens” is the purest form of truth-seeking there is (again: empiricist).

        For the record, I’m on board with your point about controlled, empirical, repeatable tests. This is where religion fails for me- but not before. I don’t reject religion because it doesn’t deserve consideration as a theory about reality, I reject it because I’ve looked at it and I find it to be a bad theory, a theory that simply doesn’t fit the evidence.

        • http://rustbeltphilosophy.blogspot.com Eli

          “Indeed. But I’m saying that emotion corrupts the human reasoning faculties rather than enabling them.”
          Sorry, but you don’t have a choice: you can either use your emotions to help you reason, try to ignore them, or let them get in the way, but what you cannot do is excise them from your reasoning altogether.

          “I’m saying cognitive biases are real things, and they’re made worse when we try to “weaponize” reason. Human reason is in fact diluted by emotional attachment.”
          You don’t try to weaponize them. They are weaponized already; they come that way out of the box. The swords-to-plowshares thing is a lovely idea, but it only works if you understand that you start off with a sword.

          “But you seem to be applying the axiom “we need to be critical of beliefs” and arriving at the conclusion “…so I can basically say whatever I want in the course of an argument, and as long as I’m aggressively critical enough, reason will win out in the end”.”
          Bzzt – wrong. My axiom is that people who don’t take this stuff seriously aren’t worth my time.

          “The problem is that if you ever do it outside your own head (as we’ve seen in this comment thread), it prevents you from interacting profitably with basically everyone else on the planet.”
          See, and this is how I know that your theory is full of it: this is an empirical prediction that you’ve just made, and I’ve already falsified it many times over. (We’re falsifying it right now, for crying out loud!) Everybody and their mother tries to warn me about this, but you’re just crying wolf.

          “What’s this about reasoning ceilings and floors? Are there walls too? And where can I sign up for a reasoning trampoline?”
          Ho ho! That’s funny, cause it’s an intentional attempt to avoid my point. Let me try again, with an analogy that’s worse but that is more familiar to Leah’s readers: the local maximum rationality in a friendly reasoning system is lower than the local maximum in a competitive reasoning system (which I take to be the global maximum), but the local minimum in the former is less low than the local minimum in the latter.

          “You seem to be saying that, assuming we meet some threshold of intelligence…”
          Nope: intelligence, in the sense I think you mean, is talent; I’m talking about skill.

          “The more invested we become in an idea, the more our identity is tied up in being a “Christian” or “Liberal” or “American”, the more we’re willing to do to protect a bad belief. Do you actually disagree with this sentiment, or am I wildly misunderstanding you?”
          I do actually disagree with this sentiment – and you wildly misunderstand me. Don’t feel bad, though, apparently everyone else does, too. The idea that I’m invested in, first and foremost, is logic. Not logic in the colloquial sense, logic in the sense of formal, deductive, rule-based symbolic logic. I am not first and foremost an atheist or a liberal or a Spurs fan; first and foremost, I’m a logician. If that opens me up to bias, there are only two possibilities: either I’m biased in favor of something that’s logical, or I’m misunderstanding logic. Every so often the latter does happen, but I think it’s fairly easy to see that this is a much better situation than the one I’d have if I were primarily a liberal (or whatever).

          “So I take it this is your real complaint, that Christianity is so absurd that even thinking about the way the world would look if it were true is so nonsensical as to make your reasoning explode?”
          No, actually, that’s almost the inverse of my point. I don’t think that there’s anything that’s nonsensical enough to make my reasoning explode – my reasoning is designed to make sense of things, not to make nonsense of things. What I’m saying is that it’s so nonsensical that I can’t possibly make sense of it without sacrificing what I try my hardest to be, i.e., a logician.

          “I’m not saying we should give every absurd-sounding theory equal standing, but I am saying that the fact that billions of people believe something is evidence. It’s a fact about reality. It doesn’t make any of it true, but it sure seems like it makes it worth figuring out what part of it exactly you reject”
          Okay, see, now this is just insulting. What makes you think that I haven’t done this? What I’m saying is that I, as an individual, don’t have an obligation to keep on doing this every time the subject comes up – and that, for the same reasons, we as a species don’t have an obligation to keep on rehashing the same arguments over and over again forever. This, again, is what happens in professional philosophy, where some people are still trying to beat the Euthyphro dilemma or bridge the is/ought gap. That does not work; it cannot work. But do you know how I figured out the parts I disagreed with? Not by playing pretend, that’s for sure.

          “Yep. You caught me. I didn’t condense all of scientific theory into a single paragraph. I just stuck to the part that was relevant to the point I was making.”
          Um, yes, that’s precisely the problem. Your argument by analogy was, “These two things are exactly the same, except for this hugely important part of the second one.” Just because you remembered to point out the flaw in your own argument doesn’t make it not a flaw, somehow.

          “I’m sticking to my guns here- at its heart, science is about making a prediction based on a hypothetical world in which a theory is true, and testing whether or not that prediction comes true. Science could not work unless we had competing hypotheses- competing hypothetical worlds.”
          Okay, but again, this is not what I disagree with, and it never has been. What I am saying about epistemology – all I have ever been saying about epistemology, in this thread – is that the hypothetical world itself is not and cannot be evidence. Do you disagree with that, or not? Because if you don’t, then you are on my side and you should be asking Leah why she thinks that thought experiments alone are enough to generate useful epistemological results.

          “For the record, I’m on board with your point about controlled, empirical, repeatable tests. This is where religion fails for me- but not before. I don’t reject religion because it doesn’t deserve consideration as a theory about reality, I reject it because I’ve looked at it and I find it to be a bad theory, a theory that simply doesn’t fit the evidence.”
          Seriously? So there’s no level of incoherence or self-contradiction at all that would get you to reject a theory before trying to test it empirically?

  • http://thoughtfulatheist.blogspot.com/ Jake

    Apologies for the length. In the future I shall try to pick my fights more carefully.

    Sorry, but you don’t have a choice: you can either use your emotions to help you reason, try to ignore them, or let them get in the way, but what you cannot do is excise them from your reasoning altogether
    …er? I’m claiming that you flat out cannot “use your emotions to help you reason”. Your emotions are detrimental to reason, full stop. Our goal should be to excise them from our reasoning altogether. Just because we’re unlikely to ever accomplish it fully doesn’t mean we should throw up our hands and embrace emotionally charged pseudo-rationalism, just like the fact that we will probably never end poverty completely doesn’t mean we should throw up our hands and embrace income inequality, poorly designed tax codes, etc etc. Emotion is a bug in our reasoning capabilities, not a feature, and we should do our best to mitigate it, not embrace it.

    You don’t try to weaponize them. They are weaponized already; they come that way out of the box. The swords-to-plowshares thing is a lovely idea, but it only works if you understand that you start off with a sword.

    You don’t seem to be arguing for swords-to-plowshares at all. You’re arguing for “swords, more swords, sharper swords, bigger swords!” I agree with you that humans’ reasoning capabilities come out of the box malformed (I’m still not sure I’m reading “weaponized” the way you’re meaning it to come across), but the solution to this is not proliferation, it’s disarmament. We don’t need more people running around with bigger weapons to bludgeon each other with, we need more people recognizing cognitive biases and setting them aside for calm, reasoned discussion. I’ve said it before and I’ll say it again- elevating the level of aggressiveness in a debate just locks you further into your position. If you’re incapable of ever breaking on the floor, even when you’re wrong, then you’re not actually doing reasoning, you’re doing rationalization.

    Bzzt – wrong. My axiom is that people who don’t take this stuff seriously aren’t worth my time.

    You’re using quite a wonky definition of “seriously”. Your contention is that anyone who doesn’t come at philosophy from the exact same angle as you isn’t taking it seriously? Or even more extreme, that anyone who isn’t comfortable with personal attacks as part of debate is so far below you as to not be worth your time?

    See, and this is how I know that your theory is full of it: this is an empirical prediction that you’ve just made, and I’ve already falsified it many times over. (We’re falsifying it right now, for crying out loud!) Everybody and their mother tries to warn me about this, but you’re just crying wolf.

    Ah, so clever- take the fact that someone is actually engaging with you to tell you your standards of engaging people are bad to show that “look! People are willing to engage with me!” I suppose by the strict definition, you’re right, I am engaging with you. But I’m only engaging because the topic is your method of debate. I’ve avoided engaging with you in the past (you wrote a takedown piece of one of my posts as well, IIRC), and I plan to continue to do so in the future, because it’s just not profitable to argue with people like you. Same reason I’m never again commenting on Standing on my Head.

    Seriously, what’s your success rate here? 1/20? 1/50? You seem to be saying (n-1)/n people just aren’t worth your time. But you have to pick one and stick with it- are you claiming that you’re alienating almost everybody and you just don’t care, or that you’re not alienating almost everybody? You can’t have it both ways.

    Ho ho! That’s funny, cause it’s an intentional attempt to avoid my point. Let me try again, with an analogy that’s worse but that is more familiar to Leah’s readers: the local maximum rationality in a friendly reasoning system is lower than the local maximum in a competitive reasoning system (which I take to be the global maximum), but the local minimum in the former is less low than the local minimum in the latter.

    Again, evidence please! This goes against everything I’ve ever learned or observed about human psychology. Becoming more aggressive and more attached to our ideas makes us less likely to find the truth, not more likely.

    Funnily enough, local maxima was going to be my next point. If we view rationality as a 3-dimensional surface for which we’re trying to find the global maximum, LARPing is the mutation perturbation in the genetic algorithm. If we don’t send out values beyond the local maximum we’ve already established, we have no chance of finding a better maximum. Even if we do mutate our values a little bit- perhaps you take seriously people who think in a very similar but not exactly identical way to you- if we don’t send out values that are far enough away from our current local maximum, then it’s really easy to get trapped in a suboptimal location. LARPing is a method of sending a generation into the abyss and seeing what happens. Likely they will die- but it’s possible that they will discover a new local maximum better than our current one.
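The point above can be sketched in a few lines (my own toy example, not from the thread: a one-dimensional fitness landscape with a mediocre local peak and a better global peak, climbed by accept-if-better mutation).

```python
import random

# Hypothetical fitness landscape: a small local peak at x=1 (height 1)
# and a better global peak at x=4 (height 2), separated by a flat valley.
def fitness(x):
    return max(0.0, 1 - (x - 1) ** 2) + max(0.0, 2 - (x - 4) ** 2)

def climb(start, step, generations=2000, seed=0):
    """Hill-climb by repeatedly mutating the current best value."""
    rng = random.Random(seed)
    best = start
    for _ in range(generations):
        candidate = best + rng.uniform(-step, step)  # the "mutation" step
        if fitness(candidate) > fitness(best):       # keep only improvements
            best = candidate
    return best

# Tiny mutations never leave the nearby local peak...
print(round(climb(1.0, step=0.1), 1))  # prints 1.0
# ...while large perturbations can jump the valley to the global peak (~4.0).
print(round(climb(1.0, step=3.0), 1))
```

The small-step climber is stuck forever at the local peak; only a mutation big enough to cross the valley can find the better one, which is the role the comment assigns to LARPing.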

    The idea that I’m invested in, first and foremost, is logic… I am not first and foremost an atheist or a liberal or a Spurs fan; first and foremost, I’m a logician. If that opens me up to bias, there are only two possibilities: either I’m biased in favor of something that’s logical, or I’m misunderstanding logic. Every so often the latter does happen, but I think it’s fairly easy to see that this is a much better situation than the one I’d have if I were primarily a liberal (or whatever).

    Come now, as if humans can only belong to one group at a time? Being a “Logician” doesn’t in any way guarantee that you won’t also be a “Liberal”, and doesn’t afford you any protection from the biases that come with being a liberal. I’m sure you’ve read the same studies I have- humans are natural group-formers. Claiming that you belong to one and only one group, and will experience cognitive biases for only that group, is naive at best, and its own cognitive bias at worst.

    Also: go Lakers.

    What I’m saying is that it’s so nonsensical that I can’t possibly make sense of it without sacrificing what I try my hardest to be, i.e., a logician.

    Praying for 2 minutes a day to a God that you’re 99.99…% sure isn’t there will cause a seg fault in your logician mechanism? Give me a break. No one is asking you to sacrifice your logician- bring him with you! (I’m using the royal you, of course; I’m not asking you to participate in the experiment). If your logician looks around this pretend universe and says “wait a minute, this doesn’t work”, that’s evidence- but I don’t see why you’re so adamant that you shouldn’t even bother looking around in the first place. If you pray for 2 minutes a day for a month and at the end of it, your logician is still looking at you like you’re nuts, then congratulations: you were right, and you’ve got good reason to lower your estimate of how likely Christianity is to be true.

    Okay, see, now this is just insulting. What makes you think that I haven’t done this?
    Um… maybe because you’re telling other people not to do it?

    What I’m saying is that I, as an individual, don’t have an obligation to keep on doing this every time the subject comes up

    That is a totally fair point. I don’t think you have an obligation to keep investigating something ad infinitum either. But this is not what I saw Leah advocating for in the OP.

    and that, for the same reasons, we as a species don’t have an obligation to keep on rehashing the same arguments over and over again forever.

    You really think we as a species have come to a conclusion on religion and discarded it? You must live on /r/atheism.

    But do you know how I figured out the parts I disagreed with? Not by playing pretend, that’s for sure.

    What you call “pretending”, I call an experiment. Most of the claims of Christianity (God answers prayer/God wants a personal relationship with you/God can save you from your sins) are only experientially verifiable. I fail to see why trying it out and realizing “hey, this doesn’t work” is such a great evil.

    Um, yes, that’s precisely the problem. Your argument by analogy was, “These two things are exactly the same, except for this hugely important part of the second one.” Just because you remembered to point out the flaw in your own argument doesn’t make it not a flaw, somehow.

    When I say “Combustion is exactly how a car engine works!” that doesn’t mean I’m saying fire and car engines are the same thing. I’m saying combustion is the mechanism by which a car engine accomplishes its function. Likewise, positing hypothetical worlds and testing to figure out which world we live in is the mechanism by which science accomplishes its function- namely, finding out facts about reality. If your only complaint is that I used the word “exactly”, I’m happy to redact it if it brings us to agreement.

    the hypothetical world itself is not and cannot be evidence. Do you disagree with that, or not? Because if you don’t, then you are on my side and you should be asking Leah why she thinks that thought experiments alone are enough to generate useful epistemological results.

    Yes, I disagree with that, assuming we mean the same thing by “evidence”. I think thought experiments are useful for arriving at conclusions that are not intuitively obvious from the axioms of a system. Thought experiments don’t generate new reality, but they help us figure out the implications of the things we already know (or think we know). Look at it this way- I’m not smart enough/observant enough to find every single contradiction between my beliefs and reality- and I simply have too many beliefs to brute force the problem. Thought experiments are a tool to help me find those contradictions.

    Seriously? So there’s no level of incoherence or self-contradiction at all that would get you to reject a theory before trying to test it empirically?

    Of course there is. Or, more accurately, I’ve got some threshold of Bayesian belief (say 1%) under which I won’t devote any effort to testing theories, because I’m already really sure they’re wrong. The fact that several billion people believe in Buddhism, Islam, and Christianity pushes my Bayesian estimate for those three higher than that threshold, so I’m willing to investigate them in a way that I’m not willing to investigate the thousands of other religions.

    • http://thoughtfulatheist.blogspot.com/ Jake

      erg! Comment hierarchy, you’ve failed me!

    • http://rustbeltphilosophy.blogspot.com Eli

      “Also: go Lakers.”
      AHA! This explains everything.

      “…er? I’m claiming that you flat out cannot “use your emotions to help you reason”. Your emotions are detrimental to reason, full stop. Our goal should be to excise them from our reasoning altogether. Just because we’re unlikely to ever accomplish it fully doesn’t mean we should throw up our hands and embrace emotionally charged pseudo-rationalism”
      I completely disagree. You seem to think that I mean that we should use emotion as reasoning – hence the “pseudo-rationalism” thing, right? But that’s not what I mean. What I mean is that we – or, at least, some of us – should use emotion to motivate reasoning, albeit only in a very specific way (namely, the way that e.g. athletes use emotion to motivate their training and competition).

      “You don’t seem to be arguing for swords-to-plowshares at all.”
      Well, no, but you sort of do, was my point.

      “the solution to this is not proliferation, it’s disarmament”
      Whereas I think it’s training.

      “We don’t need more people running around with bigger weapons to bludgeon each other with, we need more people recognizing cognitive biases and setting them aside for calm, reasoned discussion.”
      Part of this is that you’re missing the game theory – how well do civil rights causes work in the absence of cognitive bludgeons? – but part of it is, again, that you’re not responding to my actual position. Which, granted, this whole comment was made before I corrected you about that, but still.

      “If you’re incapable of ever breaking on the floor…”
      Again, you’re making a false assumption here, which does no credit to your hypothesis about me. As an empiricist, you should really be taking this to heart more.

      “You’re using quite a wonky definition of “seriously”.”
      Actually, I think you all are. I’m using the same definition that works in cases like “serious chess player,” whereas you’re using the same definition that works in cases like…what, exactly?

      “Your contention is that anyone who doesn’t come at philosophy from the exact same angle as you isn’t taking it seriously? Or even more extreme, that anyone who isn’t comfortable with personal attacks as part of debate is so far below you as to not be worth your time?”
      Nnnnnno. My contention is that anybody who threatens to walk away when I call their stupidity “stupidity” is not worth my time.

      “I suppose by the strict definition, you’re right, I am engaging with you.”
      Moreover, you were only my case in point. This sort of thing happens surprisingly often – I mean, surprisingly even to me – so it’s not really convincing to say that it doesn’t happen or doesn’t happen in general. What’s also not convincing is you trying to explain away your participation: your empirical theory didn’t make allowances for that, so obey your (alleged) empiricist principles and bite the bullet on this one.

      “Seriously, what’s your success rate here? 1/20? 1/50? You seem to be saying n-1/n people just aren’t worth your time. But you have to pick one and stick with it- are you claiming that you’re alienating almost everybody and you just don’t care, or that you’re not alienating almost everybody?”
      How the fuck would I know? Do you know your “success rate”? (Seriously: do you?) Furthermore, how would I know whether people aren’t responding to me specifically because I’ve alienated them? The whole thing about people not responding is that I don’t get any information about them. I do happen to think that most people don’t have anything to offer on a lot of questions – not all questions, just a lot of questions – but I really don’t have the slightest clue how many people I alienate. All I know is that I manage to have constructive conversations about almost every question anyway, and so it’s not remotely plausible to say that I alienate basically everyone – because, after all, the point is not how many people don’t talk to me, it’s how many do and how challenging those conversations are.

      “Again, evidence please! This goes against everything I’ve ever learned or observed about human psychology.”
      Okay – go ask any high-level chess player whether they’re emotionless machines or if their emotions were part of what allowed them to become excellent.

      “LARPing is the mutation perturbation in the genetic algorithm”
      Yeah, anon up above seems to have made a similar point. I’ve conceded this in the past at my blog and I conceded it again here. But, um, that’s not all that Leah was saying about it: she was also saying that it’s a good way of obtaining some kind of accuracy about a theory, and that I don’t agree with. The one does not imply the other.

      “Come now, as if humans can only belong to one group at a time?”
      …which would be why I made sure to specify that I’m FIRST AND FOREMOST (or PRIMARILY) a logician. As you’re well aware, “first and foremost” =/= “only”. And I’m not saying that there’s no cross-contamination, but I am saying that the logician’s dilemma is of a different sort than the others, in a way that causes problems for what you were saying.

      “Praying for 2 minutes a day to a God that you’re 99.99…% sure isn’t there will cause a seg fault in your logician mechanism?”
      Mine, personally? Or mine, as a general means of speaking? Because I guess that this particular thing wouldn’t affect me too much, but it sure as shit took its toll on Leah; more generally, I rather suspect that everybody has some issues on which they’d be wrongly swayed if they went in for this sort of thing. But okay – if you don’t think that this is a problem, here’s my challenge to you: every day, spend 30 minutes trying to see the upside of racism and pretending to be racist. Then, in a year, tell me if you don’t notice some worryingly racist tendencies beginning to creep into your reasoning and behavior. I mean, it shouldn’t be a problem, right? You can bring your whatever along with you, and then at the end you can say that you’ve really defeated racism. Wouldn’t that be nice?

      “I don’t see why you’re so adament that you shouldn’t even bother looking around in the first place. If you pray…”
      But praying isn’t “looking around.” It’s praying. Anyway, when was the last time you did this with Scientology? Or creationism? Or whatever?

      “You really think we as a species have come to a conclusion on religion and discarded it?”
      Well, like I said (to someone or other), the Euthyphro argument is still going strong even though it was resolved at the time it was written. So…yeah, kinda.

      “Most of the claims of Christianity (God answers prayer/God wants a personal relationship with you/God can save you from your sins) are only experientially verifiable.”
      What?! How in the name of all that is fuzzy in this world are “God wants a personal relationship with you” or “God can save you from your sins” experientially verifiable? The prayer thing I’ll give you, cause either it’s incoherent or it’s already been falsified, but that’s hardly a falsification of Christianity as such.

      “When I say “Combustion is exactly how a car engine works!” that doesn’t mean I’m saying fire and car engines are the same thing. I’m saying combustion is the mechanism by which a car engine accomplishes its function. Likewise, positing hypothetical worlds and testing to figure out which world we live in is the mechanism by which science accomplishes its function- namely, finding out facts about reality.”
      So, once again, your argument by analogy was: “These two things are exactly the same – meaning, they have the same functional mechanism – except for one has testing and the other doesn’t.” Still not convincing.

      “I think thought experiments are useful for arriving at conclusions that are not intuitively obvious from the axioms of a system.”
      Okay – so, just for the sake of argument, give me an example. Because, again, this is not how thought experiments work in scientific methodology: there, thought experiments are not used to arrive at conclusions.

      “The fact that several billion people believe in Buddhism, Islam, and Christianity pushes my Bayesian estimate for those three higher than that threshold”
      On what basis? What’s the series of steps that gets you from A to B? Also, again, could any idea whatsoever merit further attention just by being sufficiently popular? (Like, have you checked out ghosts? Apparently lots and lots of people believe in ghosts.)
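      Eli’s question here is essentially a demand for the missing likelihood term in a Bayesian update. Purely as an illustration – every number below is a made-up placeholder, not a value either commenter endorsed – the “series of steps” would have to look something like this:

```python
# Hypothetical Bayes-rule sketch of "popularity raises my credence."
# All probabilities are illustrative placeholders. Eli's objection is
# precisely that the likelihood terms (how probable mass popularity is
# given that an idea is true vs. false) have not been justified.

prior_true = 0.001           # prior credence that the idea is true
p_popular_if_true = 0.8      # chance a true idea attracts billions of adherents
p_popular_if_false = 0.4     # chance a false idea does (cf. belief in ghosts)

# Bayes' rule: P(true | popular)
evidence = p_popular_if_true * prior_true + p_popular_if_false * (1 - prior_true)
posterior = p_popular_if_true * prior_true / evidence

# The posterior only rises much above the prior if popularity is far
# likelier for true ideas than for false ones -- the very step in dispute.
print(round(posterior, 5))
```

      With these placeholder numbers the posterior barely moves (from 0.001 to roughly 0.002), which is the point of Eli’s ghosts example: if false ideas get popular nearly as easily as true ones, popularity carries almost no evidential weight.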

      • http://thoughtfulatheist.blogspot.com/ Jake

        It seems like our argument has pretty much run its course. I’m going to respond to any of the things I thought were questions (plus one more fight I had to pick), and call it a day.

        how well do civil rights causes work in the absence of cognitive bludgeons?

        The prescription of “calm, rational discussion” works a heck of a lot better than trying to yell louder than the other guy. Sure, there are cases where the other side isn’t willing to be calm and rational, and you have to decide the optimal way to respond (which may or may not include aggressive behavior), but I’m saying that playing the role of the guy not willing to be calm and rational from the get go is the wrong way to do it- by which I mean it’s a sub-optimal strategy for getting what you want.

        Actually, I think you all are. I’m using the same definition that works in cases like “serious chess player,” whereas you’re using the same definition that works in cases like…what, exactly?

        I’m using it as a proxy for “people who legitimately care about the result”. You can play chess seriously without being a “serious chess player”. But I think this is a bad analogy to what you’re doing- what you’re doing seems more akin to only listening to people from the Classical school of chess, while ignoring anything people from the Russian Dynamism school have to say (yes, I just googled those. I have no idea what they mean). There are people who are VERY serious about philosophy, who have dedicated their whole lives to it, who you’re disqualifying because they’re coming from a different school from you.

        This sort of thing happens surprisingly often – I mean, surprisingly even to me – so it’s not really convincing to say that it doesn’t happen or doesn’t happen in general. What’s also not convincing is you trying to explain away your participation: your empirical theory didn’t make allowances for that, so obey your (alleged) empiricist principles and bite the bullet on this one.

        Perhaps your theory works more often than I think it does- you certainly know how your interactions go a lot better than I do. But judging purely from the content of the surrounding comment threads, I’m not convinced of your claim. As for me explaining away my participation: as I said before, your approach has stopped me from interacting with you in the past, and I very much expect it to stop me from interacting with you in the future. That’s empirical evidence too. My theory doesn’t predict that everyone will always ignore everything you say- trolling works, after all, and those people are actively trying to piss other people off. My theory says you will get substantially fewer useful conversations out of your model than I will get out of mine, and I haven’t seen anything in this comment thread to dissuade me from that.

        How … would I know? Do you know your “success rate”? (Seriously: do you?)

        I was referring specifically to this comment thread. I just didn’t want to go through and count up positive and negative responses. Based on my previous reading of the comments here, I would estimate your success rate ((people who take the time to try and have a discussion with you)/(people who just comment to tell you you’re an ass)) at around 1/15, though I admittedly haven’t read anyone else’s comments for the last two days.

        if you don’t think that this is a problem, here’s my challenge to you: every day, spend 30 minutes trying to see the upside of racism and pretending to be racist. Then, in a year, tell me if you don’t notice some worryingly racist tendencies beginning to creep into your reasoning and behavior. I mean, it shouldn’t be a problem, right?

        Now THAT’S an interesting question. I kind of wish you’d led with that- I think you would have gotten a lot better responses from other people. I would quibble over the 30 mins/year vs. 2 mins/month thing, but ultimately I suspect my answer here would be similar to the one you’re giving to Christianity: I don’t see any risk at all that racism is a more accurate view of reality than non-racism, and the risk of getting a false positive far outweighs the risk of believing !racism instead of racism. I don’t think the same applies to religion- but I suspect that you do.

        Anyway, when was the last time you did this with Scientology? Or creationism? Or whatever?

        Quite recently, actually. Not Scientology specifically, but Creationism, Catholicism, and Buddhism. That’s probably more a function of my present circumstances than my ideology though.

        What?! How in the name of all that is fuzzy in this world are “God wants a personal relationship with you” or “God can save you from your sins” experientially verifiable?

        Same way any other relationship is. Don’t get me wrong, I think they fail verification. But they only fail once you try them (how could you possibly falsify someone else having a personal relationship with an invisible, intangible being?)

        Okay – so, just for the sake of argument, give me an example. Because, again, this is not how thought experiments work in scientific methodology: there, thought experiments are not used to arrive at conclusions.

        Sure- Schrödinger’s cat is the first one that jumps to mind. But looking around wikipedia’s thought experiment page, there are lots of ways legitimate science uses thought experiments. You’re correct that they’re not used as proofs about reality, but they can absolutely be used to identify flaws in theories (like, say, Christianity).

        On what basis? What’s the series of steps that gets you from A to B?

        Because if there is a God, then he either wants to be known or he doesn’t. If he wants to be known, then it is much more likely that one of the religions that has a large number of followers would be true than a religion that has a very small number of followers.

        More specifically: I assign a non-zero probability for God being real. I assign a fairly high probability for that hypothetical God wanting to be known (if he doesn’t, then the universe is basically indistinguishable from the hypothetical universe with no God anyway). I assign a fairly high probability for that hypothetical God that wants to be known to be one of the few major religions in the world. Partly this is pragmatism- I don’t have enough time in my life to go through every religion- and partly it’s natural selection. Whichever religion is actually right about God should conform closer to reality than any other religion, and should therefore accrue more followers.
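        Jake’s chain of estimates can be written out as a simple product. Every number below is a hypothetical placeholder – neither commenter actually supplies values – so this sketch only shows the structure of the calculation, not a result anyone has committed to:

```python
# Hypothetical sketch of Jake's chain of estimates. Every probability
# here is a made-up placeholder, not a number from the discussion.

p_god = 0.01                  # prior: some God exists
p_known_given_god = 0.9       # that God wants to be known
p_major_given_known = 0.8     # a knowable God backs one of the major religions

# Chaining the conditionals gives the credence that one of the major
# religions is true, on Jake's stated structure:
p_major_religion_true = p_god * p_known_given_god * p_major_given_known
print(round(p_major_religion_true, 4))  # ~0.0072 with these placeholders
```

        Note that the whole product is capped by the prior `p_god`, so everything turns on how that first number is set – which is where Eli pushes back in the reply below.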

      • http://rustbeltphilosophy.blogspot.com Eli

        “Sure, there are cases where the other side isn’t willing to be calm and rational, and you have to decide the optimal way to respond (which may or may not include aggressive behavior), but I’m saying that playing the role of the guy not willing to be calm and rational from the get go is the wrong way to do it- by which I mean it’s a sub-optimal strategy for getting what you want.”
        Okay, but I wasn’t that guy from the get-go and I don’t think that very many people are. Just because I’m that way when you first see me doesn’t mean that I’ve been that way all along.

        “I think this is a bad analogy to what you’re doing- what you’re doing seems more akin to only listening to people from the Classical school of chess, while ignoring anything people from the Russian Dynamism school have to say (yes, I just googled those. I have no idea what they mean). There are people who are VERY serious about philosophy, who have dedicated their whole lives to it, who you’re disqualifying because they’re coming from a different school from you.”
        Like who? And, remember, I’m not the one who’s disqualifying them: I’m willing to have a conversation with just about anyone, I just won’t pursue conversations with just about everyone. There’s a difference.

        “But judging purely from the content of the surrounding comment threads…”
        Sampling error…

        “As for me explaining away my participation: as I said before, your approach has stopped me from interacting with you in the past, and I very much expect it to stop me from interacting with you in the future. That’s empirical evidence too.”
        Okay, first of all, you’ve had – what – ten specific opportunities to respond to something I’ve said, attempting to err on the side of a large number? I mean, aside from times when you could’ve chosen to seek me out of your own volition. So I’m one out of ten with you? That’s really not that bad. And, again, you are still only a sample size of one; I don’t know what kind of rigorous empiricist generalizes from what is, in essence, a single data point.

        “My theory says you will get substantially fewer useful conversations out of your model than I will get out of mine, and I haven’t seen anything in this comment thread to dissuade me from that.”
        Ah – but see, now you’re equivocating on “useful.” You’re not as good a reasoner as I am – sorry! – so what’s useful for you is not necessarily useful for me. Don’t compare apples to oranges.

        “Based on my previous reading of the comments here, I would estimate your success rate ((people who take the time to try and have a discussion with you)/(people who just comment to tell you you’re an ass)) at around 1/15″
        Yeah, see? That’s really not that terrible, especially since your earlier estimation (allegation?) went all the way up to 1/50.

        “I would quibble over the 30 mins/year vs. 2 mins/month thing”
        Whoa whoa whoa – why? Hasn’t your position been that your rational self (or whatever) will overcome all obstacles? Isn’t this just you agreeing with everything I’ve been saying: that LARPing (which could not possibly happen in just 2 minutes per day!) is not reasoning and that, moreover, it will screw with your ability to reason if you really dive into it?

        “I suspect my answer here would be similar to the one you’re giving to Christianity: I don’t see any risk at all that racism is a more accurate view of reality than non-racism, and the risk of getting a false positive far outweighs the risk of believing !racism instead of racism. I don’t think the same applies to religion- but I suspect that you do.”
        Ta dah! So your position isn’t any more principled than mine is, really – you just pick different places to deprioritize your rationalist principles, is all. Even better, you now seem to be in a position to admit something that I’ve been saying for a few months now: we can’t figure out what to reason about just by asking how to reason. In practice – i.e., as a matter of applied rationality – we must take some things to be more or less beyond the pale. Mostly I think that people would take your approach and claim this exemption only for broadly moral issues. But I see no reason why it shouldn’t apply to other things as well (not least of all because we all, tacitly or otherwise, must apply it to other things as well).

        “Quite recently, actually. Not scientology specifically, but Creationism, Catholicism, and Buddhism. That’s probably more a function of my present circumstances than my ideology though.”
        Again, this is exactly where I was going with the question: it’s all well and good to say that your theory of rationality justifies doing X or Y, but in practice doing X or Y is more a function of circumstance than thoroughgoing intention. This ties in rather nicely with the previous point – and, I’m guessing before I even read it, the thing at the end of your comment.

        “Same way any other relationship is.”
        Er, no? Your claim was that “God wants to have a relationship with you” is a testable claim, not that “God has a relationship with you” is testable.

        “Sure- Schrodinger’s cat is the first one that jumps to mind. But looking around wikipedia’s thought experiment page there’s lot’s of ways legitimate science uses thought experiments. You’re correct that they’re not used as proofs about reality, but they can absolutely be used identify flaws in theories.”
        How can you say both of those things at once? If it’s not a proof, it hasn’t identified anything. They suggest flaws, maybe, but they don’t identify them; this is why Schrödinger’s cat was originally meant to be a reductio ad absurdum but has been embraced as a reasonable image for understanding the idea in question.

        “I assign a non-zero probability for God being real. I assign a fairly high probability for that hypothetical God wanting to be known”
        Okay, Mr. Bayesian, run the calculation for me. What are each of these numbers, and, given those numbers, what’s your subjective conditional probability that god exists and wants to be known? I want to see if you’re applying the same level of credulity to everything with that score.

        “(if he doesn’t, then the universe is basically indistinguishable from the hypothetical universe with no God anyway)”
        But that’s not a Bayesian consideration, is it? You don’t just get to say, “Well, I might as well investigate this belief over here because the alternative is boring.”

        “Partly this is pragmatism- I don’t have enough time in my life to go through every religion…”
        Again, it’s not just pragmatism, but whatever – more importantly, the ones you’ve chosen aren’t necessarily the ones that you should choose. I mean, for crying out loud – Buddhism is basically an atheistic religion, yet you’re using it in your reasoning about how God would want to be known? How does that make sense? Worse yet, history has shown that religious membership changes drastically over time; Christianity, to name a most excellent example of this, didn’t even exist a few thousand years ago. So what makes you think that God would want to be known now – i.e., at this particular moment in history? Given your stated premises, you should be trying to extrapolate the future course of religious history and seek guidance from whichever religion(s) thrive the most over time (or in the era of the largest human population, or in the era of greatest human knowledge, or something). But you haven’t done that, have you? Instead, you’ve allowed your circumstances to dictate your “reasoning.”

        “…and partly it’s natural selection. Whichever religion is actually right about God should conform closer to reality than any other religion, and should therefore accrue more followers.”
        Suuuuure – cause that which conforms more closely to reality is that which accrues more followers. Riiiiiight.

  • Pingback: Get in the Game

