On some things that are not each other

Scott Alexander has a post up responding to Arthur Chu. Much of the post gives the impression that the argument between Scott and Arthur was mainly about Scott’s post on false rape accusations, whereas I think it’s pretty clear that Arthur has a much broader beef with Scott and the LessWrong community in general. In fact, when I first saw the post, my initial reaction was confusion, and I went to edit my own post endorsing Arthur to add a note saying Scott and I appear to have very different understandings of what Arthur said. But then I removed the note, because maybe Scott was just choosing to focus on the parts of Arthur’s position he most disagreed with.

Either way, though, some parts of Scott’s reply strike me as unfair to Arthur. It begins by claiming Arthur “criticizes me for my bold and controversial suggestion that maybe people should try to tell slightly fewer blatant hurtful lies.” I can’t see where Arthur actually said that. There’s also the part where Scott said, “for the love of God, if you like bullets so much, stop using them as a metaphor for ‘spreading false statistics’ and go buy a gun.” In the comments, Arthur rightly complained about this:

Equating my saying that sometimes you have to be an asshole to people who are assholes with saying that I want to buy a gun and shoot everybody is a crap argument. (For the record, I think the difference between when you should use words, including nasty words, and using armed resistance is a quantitative matter of degree of threat and not some absolute proscription — I’m not an absolute pacifist and most people aren’t.)

To which Scott replied:

I agree that you do not want to do this, I’m just saying you can’t not want to do this consistently.

…which strikes me as really odd. When I see these kinds of inconsistency charges, I’m always tempted to get snarky with formal logic: “How can I accept p while rejecting q, you ask? Easy: I assert ‘p & ~q’. Which I can do, because p is not the same thing as q, and the Law of Noncontradiction only prevents me from simultaneously asserting and denying the same proposition, such as if I were to assert ‘p & ~p’.”

This seems reflective of a broader problem within the LessWrong community – people who have no trouble parsing fine distinctions in philosophy or politics begin applying rigid false dichotomies when it comes to community norms or movement tactics. On LessWrong, I’ve encountered people who think you can neatly divide the world into people who are “defectors” and those who aren’t, or who equate a gay kid lying to homophobic parents who he’s financially dependent on with scamming people out of money.

The part of Scott’s post I found most worrisome, though, was this:

My natural instinct is to give some of the reasons why I think Arthur is wrong, starting with the history of the “noble lie” concept and moving on to some examples of why it didn’t work very well, and why it might be expected not to work so well in the future.

But in a way, that would be assuming the conclusion. I wouldn’t be showing respect for Arthur’s arguments. I wouldn’t be going halfway to meet them on their own terms.

The respectful way to rebut Arthur’s argument would be to spread malicious lies about Arthur to a couple of media outlets, fan the flames, and wait for them to destroy his reputation.

Then if the stress ends up bursting an aneurysm in his brain, I can dance on his grave, singing:

♪ ♬ I won this debate in a very effective manner. Now you can’t argue in favor of nasty debate tactics any more ♬ ♪

(I won’t get into particular strategies for exactly how the damage might be done, because the ease with which my brain comes up with them sort of scares me, and I don’t want to get a reputation as the sort of person who can easily generate plans to turn Hufflepuff bones into weapons. But come on. He just became nationally famous for winning lots of money by being slightly cleverer than everyone else. It wouldn’t exactly take a MsScribe-level intellect to convince the media that destroying him would be a mildly amusing use of their time.)

I am not going to do that, but if I did it’s unclear to me how Arthur could object. I mean, he thinks that sexism is detrimental to society, so spreading lies and destroying people is justified in order to stop it. I think that discourse based on mud-slinging and falsehoods is detrimental to society. Therefore…

Note that the “MsScribe” link is to a post by Scott telling the story of a figure in Harry Potter fandom who was infamous for, among other things, smearing an opposing faction as bigots. This is followed by Scott claiming MsScribe was a total amateur compared to some of the stuff he and his friends got up to before he went to medical school. Which lends some plausibility to Scott’s claim that he could totally destroy Arthur Chu’s reputation if he wanted to.

Now, probably Scott means it when he says he isn’t going to do that… but after reading the above quote, when I ask myself who sounds more likely to try to destroy someone’s reputation with malicious lies, Arthur (who’s been found guilty of endorsing metaphorical “war and fire”) or Scott (who talks about how he could totally destroy someone’s reputation with malicious lies if he wanted to and they’d have no grounds to object), my answer is “Scott”. If I suddenly start hearing nasty rumors about Arthur or any of Scott’s other critics, I’ll now be more likely to take those rumors with a grain of salt.

And there may lie a danger of extreme black-and-white thinking that hadn’t occurred to me before: it creates a tempting rationalization for all kinds of nasty behavior. If you divide the world into people who are perfectly honest all the time and act as rationalist purists in debates, and everyone who doesn’t follow that extreme code, it becomes tempting to think violations of the extreme code should be punished with any dirty tricks you can think up. Because it’s all the same, right? Well no, it isn’t—as anyone who rejects such black-and-white thinking can see.

  • Brutus

    Chu said:
    > That post [the one debunking false rape statistics] is exactly my problem with Scott. He seems to honestly think that it’s a worthwhile use of his time, energy and mental effort to download evil people’s evil worldviews into his mind and try to analytically debate them with statistics and cost-benefit analyses.

    From that, Scott concludes that Chu’s problem is that Scott is spending time trying to fight lies rather than using the most effective means to fight people who have different conclusions than he does.

    And the reason why I’m trying to communicate with you rather than destroy you is because you are not my enemy, nor Scott, nor Chu, nor the (possibly partly counterfactual) lying feminist who made up numbers with the intent of convincing people who don’t fact-check of whatever it is ze was trying to prove.

    My enemy, which I suspect might also be Scott’s enemy, is the meme that it is acceptable to use lies and blatant misrepresentation to make someone agree with you. It is not acceptable for me to lie and blatantly misrepresent facts to convince you that you also have that meme as an enemy. I won’t pretend to be a total pacifist and claim that I cannot conceive of a situation where I wouldn’t use such a rhetorical technique, because I can trivially do so. I also have never personally experienced such a situation.

  • Robby Bensinger

    Chris, I think you’re actually being less charitable to Scott than Scott was to Arthur. You’re accusing Scott of black-and-white thinking because Scott wants Arthur to endorse a faux-deontic rule like ‘don’t kill people even when your utilitarian calculus tells you to’. But you don’t need to think in black-and-white terms to think a faux-deontic rule like that is a good idea. You just need to think that humans are too bad at act-utilitarianism to be trusted to always follow its prescriptions in practice.

    There may be a continuum of people who follow that faux-deontic rule better or worse, even if it’s a generally good rule and even if Arthur’s words (and his deeds, which included using genuinely threatening and bullying language against Scott) suggest he under-utilizes it. ‘There are shades of grey’ does not entail ‘always follow the prescriptions of causal-decision-theoretic act-utilitarianism’, and it’s the latter that’s the (philosophical) target of Scott’s criticism.

    Your ‘you’re a black-and-white thinker, so you plausibly think it’s OK to punish people you dislike with dirty tricks’ criticism of Scott I think is particularly weird, and I say that as someone who remains sympathetic to some of Arthur’s points. You’re promoting here, in diluted form, Scott’s core thesis: ‘We shouldn’t attack bad people no matter how bad we think they are.’ That’s what his blog post is *about*. You’re using the paragraphs he wrote as a reductio of ‘be mean to the people you classify as Bad Guys’ to support the thesis that Scott thinks we should be mean to people we classify as Bad Guys. This in the midst of a whole post that’s an ode to ‘be unremittingly nice to people, no matter how much they suck’. That’s terrifically odd!

    Maybe Scott doesn’t live up to that ideal anywhere near as well as he ought to. A good argument for hypocrisy could be made here! Seriously, make that point! But ‘you aren’t 100% living up to the ideal of X-ism’ is quite a distance from ‘I find your endorsement of anti-X-ism deeply problematic’. It’s silly to take the fervor of Scott’s endorsement of the ‘let’s mostly be pacifists’ thesis as strong Bayesian evidence that he’s a violent and dangerous person who thinks it’s OK to violate faux-deontic norms to hurt Bad People…

    On your other criticism: Arthur’s response to “go buy a gun” is pretty weak. All Arthur says is ‘that’s a crap argument’ (why?) and, effectively, ‘my utilitarian calculus hasn’t yet told me that my comparative advantage is to shoot my enemies’ (oh dear). I think Scott’s “go buy a gun” point is good, as a reductio of the brand of extremely strict utilitarianism-in-practice Arthur appears to endorse.

    You object that Scott hasn’t shown a logical contradiction. But that’s not really a fair standard in an informal discussion, and words like ‘inconsistent’ and ‘unsound’ are often used in contexts outside of deductive logic.

    In any case, it’s not hard to see what the contradiction would be, if we formalized Scott’s point: ‘I should always do whatever I expect to produce the better outcome, even if I wouldn’t want others to do the same. There exists at least one person A such that my killing A produces the better outcome. But I shouldn’t kill A.’

    Or, in logic-speak, where ‘Kαβ’ is α killing β, ‘i’ is me, and ‘u’ is Joe Shmoe:

    1. ∀A∀x ( better(Aix) ⇒ ( obligatory(Aix) even if obligatory(¬Aux) ) )
    2. ∃y ( better(Kiy) )
    3. ¬∃z ( obligatory(Kiz) )

    That’s a formally inconsistent triad, and it’s the triad Scott is claiming Arthur is committed to. Arthur’s response is presumably that he thinks claim 2 is false, which Scott thinks is wishful thinking; Scott thinks Arthur should instead abandon claim 1, because we shouldn’t normally kill people even when we calculate that it will have a net positive impact.
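    Spelled out, the inconsistency can be checked mechanically; here is a minimal brute-force sketch, collapsing the quantifiers down to a single act k (an illustrative simplification of the formalization above):

```python
from itertools import product

# Toy propositional rendering of the triad, collapsing the quantifiers
# to a single act k ("killing A") -- an illustrative simplification:
#   P1: better(k) -> obligatory(k)
#   P2: better(k)
#   P3: not obligatory(k)

def satisfiable(constraints):
    """Brute-force check: does any truth assignment satisfy all constraints?"""
    return any(
        all(c(b, o) for c in constraints)
        for b, o in product([False, True], repeat=2)
    )

p1 = lambda b, o: (not b) or o   # better(k) -> obligatory(k)
p2 = lambda b, o: b              # better(k)
p3 = lambda b, o: not o          # not obligatory(k)

print(satisfiable([p1, p2]))      # True: any two claims are jointly satisfiable
print(satisfiable([p2, p3]))      # True
print(satisfiable([p1, p3]))      # True
print(satisfiable([p1, p2, p3]))  # False: the full triad is inconsistent
```

    Any two of the three claims can be held together; only the full triad is unsatisfiable, which is why each side escapes the contradiction by dropping a different claim.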

  • http://skepticsplay.blogspot.com/ trivialknot

    At one point, Scott Alexander quotes Arthur Chu as saying:

    That post [the one debunking false rape statistics] is exactly my problem with Scott. He seems to honestly think that it’s a worthwhile use of his time, energy and mental effort to download evil people’s evil worldviews into his mind and try to analytically debate them with statistics and cost-benefit analyses.

    It’s pretty clear in context that “that post” is not the one debunking false rape statistics, but rather “Social Justice for the Highly Demanding of Rigor”. Arthur’s comment was responding to someone who had linked to it.

    So it’s true that Scott thinks Arthur mainly has a beef with the post on false rape statistics, when Arthur’s complaint is actually more general.

    • Brutus

      Object-level: If Alexander is honestly mistaken about Chu’s intentions and you are correct, Chu has a problem with effective communications with ‘evil people’, since that is exactly what the resources spent in getting their worldviews into our minds is doing.

      Meta-Level: If Alexander is not mistaken about Chu’s intentions and is intentionally misrepresenting Chu in an effort to discredit him, then Alexander is using Chu’s suggested methodology. If that seems evil, it’s because it would be; but Chu cannot complain about that technique, because Chu has advocated using it in every circumstance.

      There were three linked posts in the lead-in to that statement:

      http://slatestarcodex.com/2013/04/20/social-justice-for-the-highly-demanding-of-rigor/

      http://slatestarcodex.com/2013/05/18/against-bravery-debates/

      http://slatestarcodex.com/2013/08/29/fake-euthanasia-statistics/

      None of them appear to be the one about manufacturing rape statistics, but one of them is about manufacturing murder statistics. On what basis do you believe the first one to be unambiguously the one that Chu referenced?

      Where is your dog in this fight? Mine is deflecting what I observe is an attack on the policy of promoting nonaggression.

      • http://skepticsplay.blogspot.com/ trivialknot

        You know, your LW terminology might be super precise, but it is not actually an effective way to communicate with people who don’t want to look up the relevant LW post and read a thousand words. See, I don’t actually care about your opinion so much that I want to bother.

        I felt the social justice link was the most likely referent (because it’s the one about “reactionaries”), but you are free to hypothesize differently based on what he said.

        I didn’t see nonaggression as the issue at hand. The issues were: when “unfair” rhetorical tactics are allowable, whether LW-style thought is an effective way to create beneficial social change, and whether we should engage with groups like neo-reactionaries and MRAs. I think these are interesting issues that I am undecided on.

        • Brutus

          Oddly enough, I think that this is the relevant sequence. I assumed that anyone following would have at least enough background to know the basic terminology.

          And the issue at hand for me is that telling lies (and having them believed) to convince others to implement policies that I favor is a net negative. The short reason why is that my beliefs are subject to change when new evidence appears. If people agree with me on the basis of truth, they will probably react to new evidence in the same way that I will, and remain my allies. If, however, they support the same policies I support for reasons based on falsehood, then they will probably not support the policies I support after significant new evidence arises.

          • http://skepticsplay.blogspot.com/ trivialknot

            No see I didn’t know what “object level” vs “meta level” meant. If you think yet another sequence of thousands of words is also relevant, well thanks.

            It’s not obvious to me that Arthur was advocating telling lies any more than he was advocating bullets. Rather, he seems to think that imprecision is acceptable.

            I’m inclined to agree, at least to the extent that precision isn’t worth the utter inaccessibility of the LW community. I’m a blogger who writes in too many words myself, but LW has always struck me as particularly excessive in that respect. I appreciate the value of extremely verbose analysis, but I would not say that the value of it is effective social change. And when it’s in-depth analysis of “neo-reactionaries”? Well, that’s your prerogative.

            I don’t know if my complaints are similar or dissimilar to Arthur’s. But I think if Scott had been more charitable to Arthur, he would have hit upon issues that were more ambiguous and more interesting.

          • Brutus

            I think Chu explicitly advocated using both metaphorical and literal bullets!

            > I am in favor of my side using bullets as best they can to destroy the enemy’s ability to use bullets.

          • http://skepticsplay.blogspot.com/ trivialknot

            I think this argument has ceased being interesting. Thanks for your time.

          • Brutus

            Sorry that the discussion you entered wasn’t the discussion that you found interesting. I’ll be around when a subject comes up that you’re interested in, such as whether engaging with people who disagree with you as equals is effective.

  • Luke Breuer

    On LessWrong, I’ve encountered people who think you can neatly divide the world into people who are “defectors” and those who aren’t, or who equate a gay kid lying to homophobic parents who he’s financially dependent on with scamming people out of money.

    Ahh, fundamentalism among rational people!

    • Brutus

      Neither of those positions is particularly fundamentalist. The point of the former relies on a slightly esoteric definition of “defector”, and the point of the latter is that a general prohibition of lying has bad results, as does a general permissiveness of lying; in other words, “Misleading someone in order to get money” is not universally wrong.

      • Luke Breuer

        You are ignoring the “neatly divide the world [into us vs. them]”. I like to say that if anyone claims that everyone is either for him or against him, that person is claiming to either be (a) Jesus, or (b) Satan. Everyone else has mixed motives, not pure motives. And thus, my judgment of good vs. evil (or good vs. bad if you’re allergic to the word ‘evil’) will not be perfect. If I try to separate the world into those I judge to be good and those I judge to be evil, I will create a shitty world. A fundamentalist world. An us vs. them world.

        • Brutus

          The context of “Defector” that makes sense there is the Prisoner’s Dilemma. There are two categories that include almost everyone: People who choose the matrix [3,0; 1,1], and people who choose the matrix [2,2; 0,3]. The former are “defectors”, while the latter are “cooperators”.

          • Luke Breuer

            Ok? This still views people as a 1 or a 0.

          • Brutus

            “Almost everyone”. There remain people who aren’t deterministic on such choices, but within the game there is no third option.

  • Patrick

    Scott’s moral reasoning is atrocious. Imagine Germany announcing that they intend to invade Poland because they want more land. England says it will retaliate because they believe it is wrong to invade a country for that reason. Italy responds that England has no right to say that because obviously they think it is ok to resolve national differences by invasion, as evidenced by their threat.

    That’s obviously a silly response. Doing X in order to achieve Y is not the same act as doing X to achieve Z.

    What would be a valid response is for Germany to say that they will not accept any arguments against their invasion plans that are based on the inherent wrongness of invading countries. And they might note that since they do not intend to abandon their goal of getting more land, England now has fewer neutral arguments available to it that Germany might accept.

    But whether England should care is, at best, a factually and contextually specific question.

    • alexander stanislaw

      I’m not sure what invading a country maps onto in this debate. Is it lying and silencing your opponents? And in that case would you say that it is okay to lie and silence people if your side is right and your opponents are horrible?

      • Patrick

        I am saying that Scott’s argument is like telling someone who supports the death penalty that the “respectful” way to respond to their position would be to shoot them in the head. It relies on insisting on the immateriality of the one moral question that his opponent believes to be most worthwhile, namely, when and why one pulls a trigger. To the extent that it is even worth calling it an argument, it assumes its conclusion within its premises.

        If you don’t like that one, try “Scott’s argument is like telling someone who supported the end of colonialism that the respectful way to respond to their position would be to starve yourself to death unless they reinstitute the brutal oppression of the Indian subcontinent.”
        This isn’t reasoning. It isn’t anything. It is, at most, a sort of terrible effort at recreating Kantianism through insisting on higher order levels of analysis of utilitarian questions where most of the relevant utilitarian distinctions can no longer be seen. Missing the trees for the sake of an arm-chair theorized forest, as it were.
        It’s… kind of what Scott points out that the neoreactionaries do when they discuss government types, actually… just noticed that.

        • alexander stanislaw

          Sorry for being difficult but you didn’t answer my question.

          In your analogous cases the reasoning is something like “it’s only okay to shoot someone if they are a horrific criminal”, or “it’s only okay to use a hunger strike if your cause is correct and sufficiently important”. So I’d like to know if you would endorse something like “It’s only okay to spread lies and use censorship if your opponent is a horrible far-rightist or has some other extreme political view, and your cause is just”.

          • Patrick

            That’s because your question is off topic. I was trying to explain why.
            John: “X is morally justified in context Y. I believe that we are in context Y.”
            Jane: “I believe X is never morally justified. But rather than discussing that, I think the respectful response would be to just do X to you over and over. LET’S SEE HOW YOU LIKE IT.”
            Jane is an idiot. It doesn’t matter whether X is justified. It doesn’t even matter what X is. Maybe John believes that taxation is morally justified in certain contexts, and Jane is retorting that if he thinks that she’ll take all his money at gunpoint. Maybe John believes that REFUSING to pay taxes is sometimes justified, and Jane is retorting that if he thinks that maybe she should refuse to pay taxes that support something John likes. What X is doesn’t matter.
            The point is that this is not an argument. John voiced conditional support for X when certain facts he believes to be morally salient are true. Whether those facts are true is a very important issue for John. Jane doesn’t care about those facts, and is trying to use that to sort of weirdly emotionally blackmail John. She’s arguing that John wouldn’t like it if people supported X when the purportedly morally salient facts were NOT true, so John is somehow foolish for not adopting a blanket objection to X. Well, that’s dumb. There isn’t much else to say about this argument except that it is dumb. Ethics doesn’t operate that way, ethics CAN’T operate that way, this sort of reasoning is a waste of everyone’s time.

          • alexander stanislaw

            It’s very easy to rephrase someone’s argument in a logically fallacious way and then say “Aha! YOU’VE MADE A FALLACY, YOU’RE SUCH A WASTE OF TIME”. But in doing this you miss out on nuances. Ed Feser wrote an excellent post on this here.

            Back on topic, I don’t see how my question is off topic. You presumably think that lying and using censorship is conditionally moral. There is no deontic rule against using them (and since neither I nor Scott are deontologists we of course agree). I’d like to know what you think the conditional is. Clearly saving the world is a good enough reason to do them. Saving a dozen lives is good enough.

            But where do we stop? What about using them because your opponents are dangerous? Probably still okay, but it starts to get hairy. Christians think that atheists are dangerous – what could be more dangerous than someone who might turn good Christians against Jesus and damn them to hell for all eternity! What about using those weapons because you really really hate your opponents? I draw the line before this.

          • Patrick

            I don’t have to have a bright line rule to point out that Scott’s reasoning is poor. Nor does pointing out that Scott’s reasoning is poor commit me to a particular position with respect to other aspects of the conversation. Ironically, both Scott and Less Wrong have great articles on that point.

            I have no interest in your efforts at changing the topic, particularly given that you are demanding a bright line rule of me while offering nothing of the sort yourself. Bright line rules are hard. Insisting that I provide one while you do not is insisting that I handicap myself before the conversation even starts.

            And I fail to see why I should do that given that, with respect to the actual point I’ve been making this entire time, your latest post entails some clear concessions of the issue. You have adopted John’s position in my post above. I need little more.

  • ThePrussian

    Er, getting it wrong. Scott was pointing out that _on the premises advanced by Arthur_, he would be justified in doing that. Because those are the logical consequences of Arthur’s comments.

  • f_galton

    Is he the Arthur Chu on Jeopardy?

    • alexander stanislaw

      Yes indeed.

  • alexander stanislaw

    Edit: Sorry – from your other post, I take it you are just in favor of ignoring the reactionaries, rather than trying to silence them or use whatever means necessary to destroy them.

  • Elissa Fleming

    If I found the actual topic of this conversation interesting or thought there were non-obvious things that needed saying about it, I would probably be annoyed with everyone involved for being consistently uncharitable and talking past each other. As it is, it’s fortunate for my drama-seeking drive that you all seem to be spoiling for a fight enough that you keep on engaging anyway.
