Get in the Game

Eli of Rust Belt Philosophy has expanded his critique of the LARPing exercise, and I found it really helpful. The game is meant to help you guard against refusing to take an idea seriously or to work out its consequences and predictions, and now I understand why Eli doesn’t think that exercise properly falls into the category of good epistemology. I’ve excerpted below, but you should nip over and read the whole thing.

On its face, this sort of thing [ugh fields] looks as though it’s relevant to epistemology or reasoning: it identifies a flaw in human cognition, supports the proposed flaw with (allusions to) fairly solid cognitive psychology, and then proceeds to offer solutions. In reality, however, the problem is not one of reasoning as such and the solutions aren’t at all epistemological in nature. To wit: although “ugh fields” can prevent us from reasoning well, they aren’t reasoning failures in and of themselves. Listing them alongside cognitive biases is therefore a little misleading: the fundamental attribution error (e.g.) is a bad mode of reasoning, but “ugh fields” only contribute to bad modes of reasoning. As such, the concept of an ugh field is something suited to some kind of meta-epistemological theory. Or, to say that in plain English, it’s something that’s relevant to producing a good reasoning environment, reviewing a reasoning process, or some such thing, not something that’s relevant to reasoning itself. As ever, we can even see this kind of distinction in basketball: maintaining a healthy diet is not a basketball value in and of itself, but it does have a sort of meta-value for basketball players. When you’re playing against someone, you don’t get any bonus points for eating well and they don’t lose any points for eating badly – but it definitely is prudent to eat well if you plan on playing basketball, especially if you plan on doing so for a living.

Appropriately to this sort of problem, the proposed solutions to “ugh fields” are not good reasoning methods but are, instead, solutions that are intended to lead to good reasoning later. Those solutions include self-monitoring of symptoms, positive visualization, the use of affirmations – which, as you’ll hopefully recognize, are therapeutic solutions and not rational or philosophical ones. (I mean this literally: those are all very common techniques in the field of psychotherapy.) They may get you to the point where you can begin to reason or train yourself in reasoning, just as eating well may get you to the point where you can practice well or get the most out of your practices. But you can’t settle for disposing of your “ugh fields” any more than Tim Duncan can settle for eating only whole-wheat bread. Just like you can’t develop a jump shot with a fork and a knife, you can’t develop or critique a theory with positive visualization and affirmations…

When we approach something from a statistical angle, we usually have to admit – at least tacitly – that we’re doing so only because we don’t have enough information to approach it any other way. Gambling is a perfect example of this: if you knew which cards your opponent held you wouldn’t have to play the odds, but you don’t know that and are therefore forced to use statistics to help you out. Applying this to the subject at hand, all we can say about “ugh fields” is that they sometimes act as precursors to bad reasoning. This is a start, but it’s not much of one – we don’t know how often they do this (it could even happen only in a minority of cases), in which circumstances they do this, why they sometimes fail to do this, and so on. As such, we’re in a rather awkward position when it comes to “ugh fields”: when we notice that one is operating we’d like to be able to say with certainty that there’s bad reasoning going on, but we can’t. Apparently good reasoning that happens within an “ugh field” is actually bad sometimes, yes, but there are other times when it really is good. Despite this, lots and lots of LessWrong-ites apparently feel like they don’t even have to check the reasoning itself but can, instead, determine whether an argument is good or bad simply by examining the meta-reasoning. Instead of making substantive (i.e., reason-based, epistemological) objections, these people tend to ask questions like: was this argument charitable? does it seem like the author was angry while writing it? is the author behaving pro-socially or anti-socially? and so on. Those questions aren’t irrelevant – they’re only meta-epistemological – but they are, in the final accounting, beside the point. Especially because we don’t have specific probability distributions to use in our statistical meta-epistemological analyses, it’s naive to fixate on the meta-argument to the total exclusion of the argument itself. Statistics is the right approach to use here, but statistics just doesn’t work this way.

I agree with Eli that if people focus on what he terms meta-epistemology to the point where it eclipses epistemology, it’s a problem. I think of these techniques and warning signs as being like glasses. You want to make sure you’ve got them on, to correct for some of your built-in flaws, but then you’ve got to look.

About Leah Libresco

Leah Anthony Libresco graduated from Yale in 2011. She works as an Editorial Assistant at The American Conservative by day, and by night writes for Patheos about theology, philosophy, and math at www.patheos.com/blogs/unequallyyoked. She was received into the Catholic Church in November 2012.

  • grok87

    I love the idea of ugh fields. Makes perfect sense. The constant refrain of the Old Testament is that “the fear of the Lord is the beginning of wisdom.” See today’s psalm, for example:
    http://www.usccb.org/bible/readings/100712.cfm
    “Blessed are you who fear the LORD, who walk in his ways!
    For you shall eat the fruit of your handiwork; blessed shall you be, and favored.”

    We spend so much of our time being afraid of the wrong things and hence we can’t think straight. For example being afraid of someone at work, or of not doing well on some test or evaluation or something. It’s good to be reminded that the “Fear of the Lord” is the only “right-oriented” fear. And if we have that orientation, that frame of mind, wisdom and blessings follow…
    cheers,
    grok

  • Aaron

    This may be too far into the rhetorical weeds, but I would love to see a definition of “reason” or “reasoning” that includes built-ins like ugh fields and other biological and psychological states. Where does “reason” start? Is it separable from the environment that engages in it? Does “reason” have a telos or is it just the name for a process that humans engage in? Is reason empirically validated by a formal proof? Should it be? Don’t we actually make emotional decisions first and then rationalize about them after with our air-tight hermetically sealed reason-chambers?

    “But you can’t settle for disposing of your “ugh fields” any more than Tim Duncan can settle for eating only whole-wheat bread. Just like you can’t develop a jump shot with a fork and a knife, you can’t develop or critique a theory with positive visualization and affirmations…”

    I don’t believe that Leah or anyone else is saying that you can, but you can’t develop a jump shot if you’ve just eaten 3 lbs of ice cream or shotgunned a gallon of Cristal either. There’s a huge amount of variance in what constitutes “enough good food to physically perform a task,” but the variance doesn’t eliminate the requirement. Tim won’t even be able to make a free throw if he’s been starved for several days. So, do basketball coaches yell at their players to eat right? Yep. Why? Because functionally it’s inseparable from good form. A good basketball game depends on a healthy body, a healthy mind, and good form, and a basketball coach would define “good” as “contributing to scoring 30 points a game.”

    To use a different analogy, let’s replace “reasoning” with “drag-racing” (!!). I learn to drag-race by knowing how to use my vehicle to achieve maximum speed, but I can’t even get onto the track and move forward at all without some preconditions. I need to know where the brakes are, where the gas pedal is, and where the steering wheel is and how it should be rotated in order to turn the front wheels (you can call these ‘ugh fields’). To claim that general drag-racing technique and theory are independent of these things is false, because every theory will assume that the driver is familiar with them. It’s not stated in the theory because it’s implicit. Do you still need to know them? Yep.

    Computer Science doesn’t spend a lot of time talking about the electrical resistance of various materials and how they would affect processing. We know that some materials (like copper) are good for circuits and others (like ducks) are poor choices for circuits. A theory of computer science that said “yes, well, that materials science stuff isn’t important, that’s meta-engineering” is great when you’re carving up a syllabus for allocation of university resources but not super helpful to anyone who has to build a computer and use it to do useful stuff. And religion/philosophy is all about building a computer and using it to do useful stuff.

    • http://rustbeltphilosophy.blogspot.com Eli

      “I don’t believe that Leah or anyone else is saying that you can, but you can’t develop a jump shot if you’ve just eaten 3 lbs of ice cream or shotgunned a gallon of Cristal either.”

      Just for the record, Vin Baker was a professional basketball player for years despite being addicted to alcohol, and Lamar Odom apparently eats his own body weight in candy. (One of these players is better than the other, granted.) Of course there are limits, but the limits are often surprisingly broad compared to what we come up with when we attempt to guess them in the abstract.

      • Aaron

        Oh for sure, and perhaps I wasn’t broad enough with those examples…but the fact that there are limits is what I was getting at. Our physicality (which includes blood sugar levels and blood-alcohol content as well as the levels of dopamine or serotonin in our brains) has an awful lot to do with the kinds of minds we have.

        Put another way, if my epistemology for knowing what color an object is doesn’t include my colorblindness, then my epistemology is flawed. We might normally say that it’s a terrible epistemological method to ask the people around me what color something is and privilege their answer when it conflicts with my own sensory data…but for a colorblind person that makes complete sense.

  • Alex Godofsky

    I still think y’all are both massively overcomplicating the issue:

    Shorter Leah: “when thinking about counterfactuals you should affect positive emotions towards them, not negative ones. This helps you avoid certain cognitive traps.”
    Shorter Aaron: “Leah’s suggestion doesn’t always work and may even be counterproductive sometimes.”

    • Alex Godofsky

      Eli, not Aaron, sorry.

