Philosophical incompetence as an existential risk

Note: the following is a revised version of a 15-minute lightning talk I wrote for the 2014 Effective Altruism Summit in Berkeley, California.

Hi. My name is Chris Hallquist. Once upon a time, I was a PhD student at the University of Notre Dame. Then I dropped out, wandered the earth for three years, and eventually came to the Bay Area to become a software engineer.

In my most cynical moments as a philosophy student, I thought, “gee, all these questions academic philosophers study don’t matter for anything, none of them have any idea what the right answers are anyway, and they don’t even have a good plan for making progress on the issues.”

Let me give an example. In undergrad, I took a metaphysics seminar where, I kid you not, we spent a significant percentage of the semester debating whether, if you went through the Star Trek transporter, it’s really you that comes out on the other end.

This was not because the professor was a big Trekkie. It turns out that professional philosophers spend a lot of time talking about the Star Trek transporter. When PhilPapers.org did their big survey of the opinions of professional philosophers, the transporter question was one of the questions they decided they needed to have on there, alongside questions like “Does God exist?”

And the philosophical literature on the transporter is huge, so it took a lot of time to cover even a portion of it in class. We went round and round talking about it as a class. One argument would make us think one thing, and then the next variation on the thought experiment would make us think the opposite. In the end, I didn’t know what to think, but I decided the issue must not be very important.

Okay, that’s my most cynical moments as a philosophy student. Now let me tell you about my most terrified moments. In my most terrified moments, I think all these academic philosophers don’t have any idea what they’re talking about, they don’t even have a good plan for making progress, and many of the questions are matters of life or death.

For example. The Star Trek transporter is a fantasy. But there’s good reason to think that in principle we could build an extremely accurate computer model of your brain, and other people would be able to talk to it as if it were you. This is called mind uploading or brain emulation.

And then the question is, is the computer model, upload, emulation, whatever you call it, really you? It turns out this question is really hard to answer, for the same reasons the Star Trek transporter question is hard to answer. Let me give you just a taste of the kind of problems you run into: in Star Trek, there are a couple episodes where a transporter accidentally makes a duplicate of someone. So which one is the “real” original person?

You get the same problem with mind uploading, because the whole point of software is it can be copied. If you think an upload of you is really you, what about when we make a copy? Surely they can’t both be numerically identical to you.

So one proposal is that what matters is not numerical identity, but whether the copies in some sense “survive” you. But many people don’t like that solution. I won’t go into greater detail, because the rabbit hole gets very deep very fast. The point is, it’s a hard question, and could literally be a matter of life or death—is uploading a path to immortality, or are you dead?

This “is it really you” question may not even be the worst question that would be raised by brain emulations. I can almost convince myself the “is it really you” question doesn’t matter. Maybe all that matters is happiness, and it doesn’t matter so much if the happy people existing at one time are the same as the happy people that existed in the past.

A lot of people don’t like that kind of solution either. But even if it works for personal identity, there are other questions like “can the emulation feel pain?” and “can it feel pleasure?”, which seem equally puzzling, and even harder to dismiss as not mattering.

Oh, and by the way, if we ever did develop mind uploading, we might end up without much of a choice as to whether to upload. Uploads might be a lot cheaper to run than the cost of keeping humans alive, so there could be intense economic pressure for everyone to upload. And that’s without even considering a superintelligent AI taking over the world and forcing everyone to upload.

So we’ve got these crazy thought experiments. Philosophers like talking about them a lot, and it might seem like a waste of time. But it turns out there’s reason to think advances in technology could effectively make these thought experiments a reality. I’ve only given you the transporter/brain emulation problem because I’m trying to keep this short, but there are others.

If you’ve ever heard the term “population ethics,” that’s another major source of big scary issues that are mostly theoretical now but which we might have to make decisions on in the future. In a world of brain emulations, because brain emulations could be copied, it might be very easy to increase the population to the point where most people are living at the subsistence level.

Robin Hanson, an economist who works on the economics of brain emulations, thinks that would actually be a fine outcome, but many people aren’t so sure. This problem leads to tricky philosophical questions, such as whether to accept total utilitarianism or average utilitarianism. See the Stanford Encyclopedia of Philosophy article on the repugnant conclusion if you want to learn more.
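
To make the total-versus-average distinction concrete, here’s a toy model of my own (not Hanson’s, and the numbers are entirely made up): split a fixed pool of resources evenly among N people, and score each life with a logarithmic utility that is zero at subsistence and negative below it.

```python
import math

RESOURCES = 1000.0   # fixed pool of resources to divide up (arbitrary units)
SUBSISTENCE = 1.0    # consumption level at which a life is "barely worth living"

def life_utility(consumption):
    """Toy utility function: zero at subsistence, negative below it."""
    return math.log(consumption / SUBSISTENCE)

for population in (10, 100, 300, 1000):
    per_person = RESOURCES / population      # everyone gets an equal share
    u = life_utility(per_person)
    total, average = population * u, u       # the two views score the same world differently
    print(f"N={population:4d}  consumption={per_person:7.2f}  "
          f"total={total:7.1f}  average={average:5.2f}")
```

In this sketch the total view favors a much larger, poorer population (total utility peaks around N ≈ 368 for these made-up numbers), while the average view favors the smallest one. Nothing here settles anything; it just shows why the two views can pull in opposite directions, which is what the repugnant conclusion literature argues about.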

Now I’ve said professional philosophers don’t know what they’re talking about when it comes to these questions. And you might expect me to now tell you that we Effective Altruists can do better, or maybe that I can do better. But actually, as far as I can tell, no one knows what they’re talking about when it comes to these really hard philosophical questions.

Eliezer Yudkowsky has, a few times, alluded to thinking he has a solution to the puzzle of qualia which he hasn’t written up because he thinks his metaethics sequence was so badly misunderstood that anything he said about consciousness would be similarly misunderstood.

Personally, I think Eliezer is very smart and has good insights (I seem to be in the minority that liked the metaethics sequence). I actually think there might be a tiny chance he’s figured out qualia (something I wouldn’t grant to anyone else who made such a claim). But I also think these issues are way too hard to trust any one person to get it right all by themselves, and I wish Eliezer would discuss his philosophical ideas with as many other smart people as possible.

Here’s one thing I think often goes wrong in the EA movement: there will be one very smart person who takes it upon themselves to explain a philosophical issue to their fellow EAs. That’s necessary work, because there’s very little good popular philosophy. But then this one very smart person will also advocate for their own view of the issue very persuasively.

In the EA context, maybe we’re talking about an argument for consequentialism. And then other EAs will assume this very smart person must be right, that consequentialism must be true, because they haven’t gone and read the original academic publications by other very smart people arguing for other views.

In retrospect, spending five and a half years of my life studying philosophy may not have been the best use of my time. But it did give me a deep, gut level understanding of what it’s like to hear an extremely persuasive argument for one view immediately followed by an equally persuasive argument for the opposite view.

When Peter Thiel advocated for intellectual courage yesterday, that struck me as dangerously misguided. When you look around you and everyone else seems completely confused about an issue, the correct response is not to say “we need courage here,” the correct response is to be very afraid.

(Notice that the business analogy doesn’t work—it can make sense to bravely try a business idea even though you realize it probably won’t work, but you can’t bravely believe an idea even though you realize it’s probably wrong. And you can test a probably-wrong scientific hypothesis, but we don’t yet know how to run good tests of philosophical ideas.)

So what do we do about this? Well, what professional philosophers do right now is they spend a lot of time arguing about their opinions with other professional philosophers. This is probably actually better than nothing.

If you’re an amateur philosophy buff inclined to have strong philosophical opinions, I encourage you to do this too: argue not just with other people in the EA movement or the rationalist movement, but with other smart, well-informed people who might have different biases than you do. At least it might reduce overconfidence.

Another thing to do is to try to look for questions that are in the vicinity of the philosophical questions you want to answer, but aren’t the exact same questions that philosophers have been stuck on for a long time.

Nick Bostrom, who’s a philosophy professor at Oxford and director of the Future of Humanity Institute, seems to do this a lot. He just put out a new book called Superintelligence, which among other things talks about brain emulations and how having them might lead to bad outcomes. But if you read the book carefully, you’ll notice he doesn’t try to frame a general theory of, say, under what circumstances emulations, or other AIs, would and would not be conscious.

This is OK, because the focus of Superintelligence is elsewhere, on issues academic philosophers don’t normally discuss. Bostrom says lots of smart stuff on these topics. So maybe we can make progress doing the Nick Bostrom thing, finding questions philosophers haven’t gotten stuck on yet. On the other hand, it doesn’t solve our original questions, which still look important to solve.

Bostrom’s answer is that he hopes we can build an AI that will be able to solve all our philosophical problems for us. He argues we shouldn’t try to program an AI with all the answers to all the philosophical questions that would matter to it. Rather, we should build an AI that’s “close enough” that it can correct any errors in its own initial design.

This is a nice idea. It would be great if it worked. But I worry that humanity’s current state of philosophical confusion significantly increases the risk of building AIs that aren’t “close enough” and can’t fix their errors, and end up wanting to fill the world with non-sentient things that the AIs confusedly think are happy people, or something like that.

Another thing that looks kind of promising to me is experimental philosophy, or x-phi. The idea is, philosophers often try to settle arguments by saying, “well, intuitively it seems like…” Fill in the blank. Someone finally got the bright idea that instead of philosophers just using their own intuitions, they should ask statistically meaningful samples of people who aren’t philosophers what their intuitions are.

I’ve been repeatedly surprised by how much light experimental philosophy seems to shed on certain issues. If you’re interested in this, I’d look at Eddy Nahmias’ work on free will, or Joshua Greene’s work on ethics. The worry here, though, is that so few people are doing experimental philosophy, that no one is doing the experiments that might point in the opposite direction from Eddy Nahmias’ experiments, or Joshua Greene’s experiments.

Experimental philosophy may also work better for some issues than others. With AI consciousness, only crazy people doubt that other humans are conscious, most people are pretty sure chimpanzees are conscious, maybe we’re not sure about insects, and then we’re pretty sure rocks aren’t conscious. That part is pretty straightforward. The problem is extrapolating from that narrow sample of things that happen to have evolved on Earth to a perfectly general theory that will tell us about the consciousness of any AI design you might dream up.

Now, there are actually scientists trying to do that. You can read articles by neuroscientists talking about which theories of consciousness are supposed to be winning the empirical-predictions test, maybe even a review article citing tons of sources to back up its conclusions. But then you realize different neuroscientists have different ideas about which theory of consciousness is winning, and what you thought was an empirical question is actually a philosophical debate about which observations would support which theory.

Okay, so I asked the “what do we do about this?” rhetorical question as if I had an answer. But actually, the truth is I have no idea what to do about this, and I’m not sure anyone does. So far, moral progress seems to have been mostly made through an informal, decentralized process of refining common sense. Maybe something like that will work for other philosophical questions, but I don’t want to bet the world on it.

I criticized Peter Thiel a moment ago, but I actually think his distinction between secrets and mysteries was a good one. I hope philosophy turns out to be made up of secrets, things we can figure out with great effort, but often it looks like it’s made up of mysteries. Either way, don’t forget that the point of secrets is that they’re hard to learn, so if you think you’ve found an easy way to discover the secrets of philosophy, you’re probably wrong.

The only other thing I have to say about this is that I wouldn’t have taken the time to write it if I didn’t think it was occasionally worth having some people spend a little bit of time thinking about how humans can get better at philosophy, just in case they come up with any good ideas. Just in case.

  • Luke Breuer

    Chris, have you read Mortimer Adler’s Ten Philosophical Mistakes and/or Alasdair MacIntyre’s After Virtue? Both books point out what they believe to be fundamental errors in philosophy that have led to the broken state of philosophy today. I could also throw in Edward Feser’s polemic, The Last Superstition: A Refutation of the New Atheism. He argues that many of the philosophical problems today arose directly because of a rejection of formal and final causation. That is, he claims the Enlightenment generated philosophical problems more than it solved them. I haven’t read any scholarly rebuttals of Feser (or the ideas he espouses); I need to find some.

    • http://outshine-the-sun.blogspot.com/ Andrew G.

      There is a review of Feser’s “The Last Superstition” here.

      I keep searching around for any good references for anti-Aristotelian / anti-Thomist material but there doesn’t seem to be much; anyone have more links?

      • Luke Breuer

        Thanks! I was struck by the following bit:

        The naturalistic world view rejects ultimate authority. That’s what it is to be a naturalist.

        This seems very odd. I’ve not once, in my thousands of hours talking to atheists and skeptics online, seen this as a primary definition of ‘naturalism’. Some might claim that it logically flows from something closer to e.g. physicalism, but Aaron Boyden seems to have raised it to be an essential part of naturalism. Perhaps you actually just knew of his review and thus have nothing to add, but I did find this curious.

        The rest of the review might be valid, but it’s awfully hard to trace it as a layman, given that he offers extremely few citations—just names and the occasional book without page number makes it pretty hard. I will bookmark it, though.

        • http://outshine-the-sun.blogspot.com/ Andrew G.

          For the ultimate authority thing, I would say instead that it is not possible for a naturalist to have an ultimate authority, without implying that someone who rejects ultimate authority must be a naturalist. But I don’t think his definition of naturalism is especially relevant to the criticism of Feser.

          • Luke Breuer

            Yeah, it just stood out to me, as a possible indicator of strongly motivated reasoning.

          • http://outshine-the-sun.blogspot.com/ Andrew G.

            OK, you owe me a new irony meter.

          • Luke Breuer

            Sure; the latest models have log/speck detection; want one of those? :-p

  • MNb

    It seems to me that the terrified moments are the answer to your cynical moments. Personally I always think it better to have an idea of what terror is awaiting. I mean, there wasn’t much discussion about nuclear bombs before 1945, was there? That thing gave Homo sapiens a few narrow escapes – some truly terrified moments, if you think about it.
    What I mean is: philosophy may suck at giving answers, but having no idea what the questions are is even worse.

  • robertwib

    “Like there’s an argument that if total utilitarianism is true, we should increase the population until we have a maximum number of people whose lives are just barely worth living.”

    I don’t think that is really accurate. The total view would only say such a world could be better than nothing, or better than a much smaller world. Fewer organisms living blissful lives would likely be much better still.

    • CarlShulman

      If you think happiness tracks log-consumption well, then it’s pretty accurate. If you could convert resources into utility in mostly linear fashion (wireheading computer programs or the like), the total and average views could come closer together, although things like robustness and the costs of preventing shutdowns/interruptions of life would still leave a wedge.

      http://www.givingwhatwecan.org/blog/2013-07-30/measuring-what-matters-two-key-indicators-for-the-successor-development-goals

      • robertwib

        It’s good enough to say utility is logarithmic at high incomes, but for this we care more about welfare changes at low incomes. Because welfare is <0 until a reasonable level of consumption, the income that maximises total utility isn't as grim as all that.

        But I'm assuming that by this stage utility will be more linearly related to inputs, in which case there is no need to divide it between lots of people.

    • http://patheos.com/blogs/hallq/ Chris Hallquist

      I’ve edited this part somewhat to try to avoid saying anything overly controversial.

  • Alex SL

    Like there’s an argument that if total utilitarianism is true, we should increase the population until we have a maximum number of people whose lives are just barely worth living.

    As for robertwib, that doesn’t make sense to me either. Why should anybody care to maximise the sum of happiness points in the world by adding people, utilitarian or not? Cui bono? Surely the greatest possible happiness given the currently existing individuals is important, especially to the currently existing individuals who are, after all, the ones deciding what ethical system they want to use.

    As for mind uploading and Star Trek transporters, I think Zach Weinersmith got it right in one. It shouldn’t be called “mind uploading” but at best “mind copying”, and at the moment there are good reasons to assume that it wouldn’t work anyway. It is very possible that the brain cannot be read at the required level of detail without destroying it even before you start. It is very possible that the substrate matters for the process, so that to accurately copy a human mind you would have to copy all the trade-offs of the system including its squishiness and short lifespan. And lastly, a simulation of a fire is not a fire; it does not burn wood. It is well possible that the same is true for that other process that is called thought.

    (I don’t doubt that you could get a computer to think like a thinking computer, but a computer simulation of a human would still not be a human but only a simulation of one.)

    Oh, and by the way, if we ever did develop mind uploading, we might end up without much of a choice as to whether to upload. Uploads might be a lot cheaper to run than the cost of keeping humans alive, so there could be intense economic pressure for everyone to upload.

    Ahahaha… no. I am sure it would be very clear to the vast majority of people that the copy is a copy, and that what this would boil down to is killing them off. People tend to take a dim view of that suggestion. Also, why would the rich and powerful even need all of us? If they could push through with what you mentioned then they could just as well exterminate everybody who they don’t need without making a mind upload first.

    More generally, I find your techno-optimism puzzling. The future I see ahead is one where cheap fossil fuels run out, oceans rise, deserts expand, food becomes scarce, and if we are lucky in the long run the vastly diminished number of humans will perhaps be able to plateau at the level of technology and comfort Europe had around the time of Napoleon plus vaccines and a bit of electricity from wind power and solar cells, unless the latter are too expensive to manufacture without cheap oil subsidising them. Less comfortable future if it is run by religious wackos, of course.

    The point being, brain uploading will not be an issue because society will not be able to afford that kind of hardware anyway, and if energy is priced realistically (as in, based on how much can be produced sustainably, with regenerative sources only) running a human on a bit of food is cheaper than building and running robots with electricity.

    • JohnH2

      Long before food becomes apparently scarce we will run into situations where food prices rise in the less developed world leading to destabilization of governments in those regions, both in terms of repeated revolutions and radicalization as utopian visions are sold as solutions to the despair and disparity that people from those regions face daily. Economic downturns can also cause similar problems, even in more developed nations.

      Sort of like what is actually currently happening.

  • eric

    Another thing to do is to try to look for questions that are in the vicinity of the philosophical questions you want to answer, but aren’t the exact same questions that philosophers have been stuck on for a long time.

    Such as: let’s say I do a very detailed mind/brain scan of someone today (i.e., looking at the pattern of activity that constitutes their conscious mind), and another one of them tomorrow after they’ve had a good night’s sleep, and the patterns are different. IOW, say that we have empirical proof that when your brain reforms your conscious self each morning, it reforms it slightly differently.

    In such a case, the questions become: why am I all angsty about the possibility that a downloading might be slightly different when “slightly different” is the brain’s standard mode of daily rebooting operation? And: why should the legal ramifications of waking up slightly differently in Star Trek world be any different than the legal ramifications of waking up slightly differently in the real world, as we all probably do?
    To pose my comment another way: our sense-of-self may be stronger than our actual, continued self. We may fool ourselves into thinking we are more continuous than we are. If our perception of personal continuity is just a bias (like not seeing our blind spot), then discontinuities of other types should not unduly bother us (you still have a legitimate accuracy/engineering problem… IOW it makes perfect sense to be bothered by a more severe daily change than what our wetware typically causes).

  • Psycho Gecko

    Simple answer: since the transporter basically tears a person apart, the real person died the first time he was transported. Every subsequent iteration was a copy, or a copy of a copy.

    Basically, it’s like if you had an axe that had sentimental value to you, and one day you threw it in a volcano, but then you bought a new one that looked exactly like it as far as you could tell from a photo you had, and then people came along and debated if it was still the first, original axe if you needed to replace the head and the handle at some point in the future.

  • John_Maxwell_IV

    “If you’re an amateur philosophy buff inclined to have strong philosophical opinions, I encourage you to do this, arguing not just with other people in the EA movement, or the rationalist movement, but argue with other smart, well-informed people who might have different biases than you do. At least it might reduce overconfidence.”

    It also seems valuable to try to invent new positions that no one is currently arguing for. If you look at the positions people are already arguing for, your sample will overrepresent positions that, say, make for virulent memes, or are good signaling, or are easy to think up, or are convenient to believe for some self-serving reason or another.

  • http://www.code.mu/ Jeff Alexander

    Most of x-phi is not a bright idea, it’s a step in the wrong direction. Philosophers shouldn’t be using intuition as evidence to anywhere near the extent they typically do, so “hey, here’s a bunch more intuition data!” is distinctly unhelpful. Unless you just want to describe human tendencies/biases/intuitions, then yeah sure go survey people or run cognitive science experiments to determine those things. But don’t suggest you’re learning something about “free will” in so doing.

    I agree that there isn’t a lot of good popular philosophy, and more would be nice, but it’s hardly surprising given the shortage of good academic philosophy.

  • Richard_Wein

    Hi. A late addition to this discussion…

    The reason a lot of philosophy seems so intractable is what Wittgenstein called the bewitchment of our intellect by our language. Philosophers often use language in misguided ways. Sometimes this means that they ask incoherent questions, which intuitively feel as if they should have an answer, but which don’t.

    We feel as if there must be an answer to the question “is it really me?”, but in the context of the transporter scenario this question is senseless. Since the question is senseless, there’s no answer available, and therefore ample opportunity for philosophers to give different misguided answers with a plethora of misguided justifications.

    Let’s suppose we accept that the person who walks out of the transporter at the other end is identical to me. He has the same personality, feelings, etc. Then we have accepted (by hypothesis) everything that matters about the scenario. There is nothing left to be addressed by the question, “Is it really me?”. Someone who says, “Yes, I accept that he’s identical to me, but is he really me?”, is asking an empty question. What is the difference between someone who’s really me and someone who is “just” identical to me? If you can’t tell me any difference, then why are you asking me to distinguish between those cases?

    The same goes for the “transporter accident” case, where two copies of me are made. If there’s no difference between them, then there’s no difference between them. It’s senseless to ask, “which one is really me?”. And just as in the previous case, it would be senseless to ask, “are they both really me, or neither of them?”. Again, there’s no difference between the two cases.

    Philosophers have long asked a similar question under the heading of “Theseus’s ship”. In case you’re not familiar with it, the idea is this. Over the course of its lifetime the ship has been repaired so many times that every bit of it has been replaced. Is it still the same ship as the original? Again, this is a senseless question. There are familiar contexts in which it makes sense to ask whether an object is the same one. But here we’ve stepped outside of those contexts and created a context where the concept cannot be applied in the usual way, and there is no useful alternative way to apply it. There’s nothing for us to ask about.

    In the transporter case the question seems more urgent, because it’s more personal, and perhaps because we are more inclined to believe in an essential real me than in an essential real ship. Essentialistic thinking is one of the things that leads philosophers astray.

    I think the way for philosophy to improve is for philosophers to adopt a more Wittgensteinian or “Ordinary Language” view of language. But a commitment to a more “naturalised” approach to philosophy helps too. Daniel Dennett is good at avoiding the kind of linguistic bewitchment that Wittgenstein warned against, even though his approach doesn’t seem outwardly Wittgensteinian. (He says he was influenced by Wittgenstein.)

  • Son Of Goldstein

    If you have no soul, and are just a biochemical process, then it doesn’t matter in the end.

    But if you do have a soul, and your atheism is actually a delusion, then it’s going to matter.

