Get in the Game

October 6, 2012

Eli of Rust Belt Philosophy has expanded his critique of the LARPing exercise, and I found it really helpful. The game is meant to help you guard against not taking an idea seriously or not working out its consequences and predictions, and now I understand why Eli doesn't think that kind of safeguard properly falls into the category of good epistemology. I've excerpted below, but you should nip over and read the whole thing.

On its face, this sort of thing [ugh fields] looks as though it’s relevant to epistemology or reasoning: it identifies a flaw in human cognition, supports the proposed flaw with (allusions to) fairly solid cognitive psychology, and then proceeds to offer solutions. In reality, however, the problem is not one of reasoning as such and the solutions aren’t at all epistemological in nature. To wit: although “ugh fields” can prevent us from reasoning well, they aren’t reasoning failures in and of themselves. Listing them alongside cognitive biases is therefore a little misleading: the fundamental attribution error (e.g.) is a bad mode of reasoning, but “ugh fields” only contribute to bad modes of reasoning. As such, the concept of an ugh field is something suited to some kind of meta-epistemological theory. Or, to say that in plain English, it’s something that’s relevant to producing a good reasoning environment, reviewing a reasoning process, or some such thing, not something that’s relevant to reasoning itself. As ever, we can even see this kind of distinction in basketball: maintaining a healthy diet is not a basketball value in and of itself, but it does have a sort of meta-value for basketball players. When you’re playing against someone, you don’t get any bonus points for eating well and they don’t lose any points for eating badly – but it definitely is prudent to eat well if you plan on playing basketball, especially if you plan on doing so for a living.

Appropriately to this sort of problem, the proposed solutions to “ugh fields” are not good reasoning methods but are, instead, solutions that are intended to lead to good reasoning later. Those solutions include self-monitoring of symptoms, positive visualization, the use of affirmations – which, as you’ll hopefully recognize, are therapeutic solutions and not rational or philosophical ones. (I mean this literally: those are all very common techniques in the field of psychotherapy.) They may get you to the point where you can begin to reason or train yourself in reasoning, just as eating well may get you to the point where you can practice well or get the most out of your practices. But you can’t settle for disposing of your “ugh fields” any more than Tim Duncan can settle for eating only whole-wheat bread. Just like you can’t develop a jump shot with a fork and a knife, you can’t develop or critique a theory with positive visualization and affirmations…

When we approach something from a statistical angle, we usually have to admit – at least tacitly – that we're doing so only because we don't have enough information to approach it any other way. Gambling is a perfect example of this: if you knew which cards your opponent held you wouldn't have to play the odds, but you don't know that and are therefore forced to use statistics to help you out. Applying this to the subject at hand, all we can say about "ugh fields" is that they sometimes act as precursors to bad reasoning. This is a start, but it's not much of one – we don't know how often they do this (it could even happen only in a minority of cases), in which circumstances they do this, why they sometimes fail to do this, and so on. As such, we're in a rather awkward position when it comes to "ugh fields": when we notice that one is operating we'd like to be able to say with certainty that there's bad reasoning going on, but we can't. Apparently good reasoning that happens within an "ugh field" is actually bad sometimes, yes, but there are other times when it really is good. Despite this, lots and lots of LessWrong-ites apparently feel like they don't even have to check the reasoning itself but can, instead, determine whether an argument is good or bad simply by examining the meta-reasoning. Instead of making substantive (i.e., reason-based, epistemological) objections, these people tend to ask questions like: was this argument charitable, does it seem like the author was angry while writing it, is the author behaving pro-socially or anti-socially, and so on. Those questions aren't irrelevant – they're only meta-epistemological, but they are still meta-epistemological – but they are, in the final accounting, beside the point. Especially because we don't have specific probability distributions to use in our statistical meta-epistemological analyses, it's naive to fixate on the meta-argument to the total exclusion of the argument itself. Statistics is the right approach to use here, but statistics just doesn't work this way.
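
Eli's statistical point can be made concrete with a quick Bayes'-rule sketch. The code below is a minimal illustration in Python with entirely made-up probabilities, since, as Eli says, nobody actually has these distributions: even granting that ugh fields make bad reasoning more likely, whether noticing one should settle your verdict on a particular argument depends on base rates we don't know.

```python
# A toy Bayes'-rule calculation with hypothetical numbers: how likely
# is bad reasoning, given that we've noticed an ugh field operating?

def p_bad_given_ugh(p_bad, p_ugh_given_bad, p_ugh_given_good):
    """P(bad reasoning | ugh field observed), via Bayes' rule."""
    p_good = 1 - p_bad
    p_ugh = p_ugh_given_bad * p_bad + p_ugh_given_good * p_good
    return p_ugh_given_bad * p_bad / p_ugh

# Two hypothetical worlds with the same "ugh fields sometimes precede
# bad reasoning" story, but very different verdicts:
print(p_bad_given_ugh(p_bad=0.5, p_ugh_given_bad=0.8, p_ugh_given_good=0.1))
# ~0.89: here the ugh field really is strong evidence of bad reasoning
print(p_bad_given_ugh(p_bad=0.1, p_ugh_given_bad=0.4, p_ugh_given_good=0.2))
# ~0.18: here the same observation barely moves the needle
```

Until someone actually measures those conditional probabilities, the meta-signal on its own can't carry the verdict – which is exactly why you still have to check the argument itself.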

I agree with Eli that if people focus on what he terms meta-epistemology to the point where it eclipses epistemology, it's a problem. I think of these techniques and warning signs as being like glasses. You want to make sure you've got them on, to correct for some of your built-in flaws, but then you've got to look.

