Apparently, PZ didn’t read Sandberg and Bostrom very carefully

So… when I first read PZ’s recent post on uploading, it wasn’t quite clear what his objections were, and I thought he might have some good points. But reading his responses in the comment thread, it quickly became clear that while PZ claims to have read Sandberg and Bostrom’s paper, he must not have read it very carefully.

There was, though, one warning sign in PZ’s post itself. PZ writes, “I read the paper he recommended: it’s by a couple of philosophers.” Actually, while the Future of Humanity Institute is associated with the Faculty of Philosophy at Oxford, both Bostrom and Sandberg have backgrounds in computational neuroscience (in Sandberg’s case, it’s what he got his Ph.D. in). Furthermore, they explain in the paper that it incorporates suggestions from experts in many different fields who attended a 2007 workshop at which a previous version of the paper was circulated.

Now I want to look at PZ’s objections to the paper. PZ starts off by saying, correctly, that we don’t currently have the technology to scan the brain with sufficient accuracy and detail. But the question is whether and when we’ll develop that technology in the future. Possibly intended as a comment on that (it’s a little unclear), PZ writes:

What the heck is going to be recorded? You need to measure the epigenetic state of every nucleus, the distribution of highly specific, low copy number molecules in every dendritic spine, the state of molecules in flux along transport pathways, and the precise concentration of all ions in every single compartment. Does anyone have a fixation method that preserves the chemical state of the tissue? All the ones I know of involve chemically modifying the cells and proteins and fluid environment. Does anyone have a scanning technique that records a complete chemical breakdown of every complex component present?

PZ also comments on the issue of enhancement by running emulations with faster hardware:

You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You’ve assumed you’ve got this gigantic database of every cell and synapse in the brain, and you’re going to just tweak the clock speed…how? You’ve got varying length constants in different axons, different kinds of processing, different kinds of synaptic outputs and receptor responses, and you’re just going to wave your hand and say, “Make them go faster!” Jebus. As if timing and hysteresis and fatigue and timing-based potentiation don’t play any role in brain function; as if sensory processing wasn’t dependent on timing. We’ve got cells that respond to phase differences in the activity of inputs, and oh, yeah, we just have a dial that we’ll turn up to 11 to make it go faster.

Now the comments. Andrew G. wrote:

Now, I have no particular sympathy for the brain-uploading crowd (I think they’re massively underestimating the problem), but this criticism betrays an absolute ignorance of computer simulation:

You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You’ve assumed you’ve got this gigantic database of every cell and synapse in the brain, and you’re going to just tweak the clock speed…how?

It’s actually HARDER to run the simulation at a speed constrained to match real time than it is to just let it freewheel at the speed determined by the computational requirements and the available CPU power. Since the interactions between all the subunits are themselves part of the simulation, this presents absolutely no problems.

If you’re going to criticize the brain-uploaders for ignorance of biology, then it’s probably a good idea to avoid displaying an equivalent ignorance of programming.

To which PZ responded:

No, you don’t understand. Part of this magical “scan” has to include vast amounts of data on the physics of the entity…pieces which will interact in complex ways with each other and the environment. Unless you’re also planning to build a vastly sped up model of the whole universe, you’re going to have a simulation of brain running very fast in a sensory deprivation tank.

Or do you really think you can understand how the brain works in complete isolation from physiology, endocrinology, and sensation?

Responding to another commenter, PZ wrote:

I still don’t understand why biologists insist that you have to do a perfect simulation, down to the smallest molecule, and then state the obvious fact that it’s not going to happen.

Errm, because that’s what the singularitarians we’re critiquing are proposing? This whole slice-and-scan proposal is all about recreating the physical components of the brain in a virtual space, without bothering to understand how those components work. We’re telling you that approach requires an awfully fine-grained simulation.

An alternative would be to, for instance, break down the brain into components, figure out what the inputs and outputs to, say, the nucleus accumbens are, and then model how that tissue processes it all (that approach is being taken with models of portions of the hippocampus). That approach doesn’t require a detailed knowledge of what every molecule in the tissue is doing.

But the method described here is a brute force dismantling and reconstruction of every cell in the brain. That requires details of every molecule.

And responding to a third commenter, he wrote:

You’re not increasing the speed of the engine, you’re increasing the speed at which the simulated reality of the engine runs.

You’re still not getting it. You’ve got a simulator modeled after all the interacting phenomena in a real brain. You’re pretending that there’s a simple slider labeled “speed” that you can adjust, but it isn’t there because of all the non-linearity in the system. Things don’t simply scale up in the same way in every parameter.

Sure, you can just arbitrarily set the time-scale of the simulation, but then you mess up the inputs from outside the simulation. And you can’t model a human brain in total I/O isolation without it melting down into insanity.

There are two issues here: whether emulation as described by Sandberg and Bostrom would work, and whether speeding up the emulation could work. But PZ’s comments on both issues seem to share a common misunderstanding. The “alternative” PZ proposes in his second comment isn’t an alternative to what Sandberg and Bostrom are proposing. And that should have been clear from the table on p. 13 and the comments on the following page (including the fact that most participants at the workshop thought only emulation at the 4-6 level would be necessary).

It’s precisely because we wouldn’t be doing molecule-for-molecule emulation that we wouldn’t need molecule-for-molecule emulation of the environment to get this to work. In fact, the emulations of those other things could be much less fine-grained than the brain emulation. Sandberg and Bostrom devote pp. 74-78 to these subjects, and suggest several hundred TFLOPS may be sufficient: “While significant by today’s standards, this represents a minuscule fraction of the computational resources needed for brain emulation.” (And I have no idea why PZ thinks you’d need to emulate the whole universe; just emulate a sufficiently large sealed environment with indestructible walls.)

And that deals with the objection to speeding up the emulation: you speed up the emulated environment right along with the emulated brain. For example, a company wanting to get some design project done faster could place a bunch of emulated designers in an emulated corporate campus with sleeping, eating, and recreational facilities and speed the entire thing up as much as the hardware allows. Communication with meatspace would be limited to non-real-time messaging, but so much of our communication is non-real-time anyway that I doubt that would be a big problem.
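
To make that concrete, here’s a minimal sketch (in Python, using a toy two-unit system of my own invention, not anything from the paper) of the point Andrew G. made above: every interaction inside a simulation is indexed to simulated time, not wall-clock time, so “speeding it up” just means the hardware gets through more steps per real second.

```python
# Minimal sketch: simulated time is decoupled from wall-clock time.
# The two coupled units are a toy stand-in for "interacting subunits";
# their interactions depend only on state and dt, never on real time.
import time

def step(state, dt):
    """Advance the toy system by one tick of *simulated* time."""
    a, b = state
    return (a + dt * (-a + 0.9 * b),
            b + dt * (-b + 1.1 * a))

def run(n_steps, dt=0.001):
    state = (1.0, 0.5)
    wall_start = time.perf_counter()
    for _ in range(n_steps):
        state = step(state, dt)
    wall_elapsed = time.perf_counter() - wall_start
    print(f"simulated {n_steps * dt:.1f}s of model time "
          f"in {wall_elapsed:.3f}s of wall-clock time")

run(100_000)  # faster hardware changes only the second number
```

Notice there’s no “speed” dial anywhere in the loop; the ratio of simulated time to real time is simply a byproduct of how fast the hardware executes it, and nothing inside the simulation can tell the difference.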

And if the corporate campus emulation idea seems morally dubious to you, well, I did note in my clarifications post that I’m apprehensive about some of this stuff. But it should be obvious that just because something is morally dubious doesn’t mean it won’t happen.

  • eigenperson

    I find it difficult to address these proposals in good faith, and I think PZ has the same issue. When you’re trying to scratch out a career doing very difficult reality-based work, it’s frustrating to see other so-called researchers sailing happily by on fluffy clouds of speculation.

    That said, in the end the practicability comes down to how much detail it is necessary to include in the simulation. I tend to agree with PZ, because I think the behavior of the human brain is sensitive to atomic-scale details within neurons. The WBE researchers do not think so. Papers like Sandberg and Bostrom’s basically consist of the authors saying “Hey! If we need amount D1 of detail, then it will take P1 computing power, but if we need amount D2 of detail, then it will take P2 computing power.” The problem I have with this is that I think it will take amount D99 of detail, and if the authors consider amount D99 at all, the possibility that it might actually be necessary for fidelity is mentioned and then blithely ignored for the remainder of the paper.

    • http://www.facebook.com/chris.hallquist Chris Hallquist

      First of all: have you read Sandberg and Bostrom? You imply their paper was not difficult to write or reality-based; are you basing that judgment on having actually read it? I’m glad to have people doing the kind of work PZ does, but that doesn’t mean things like Sandberg and Bostrom’s roadmap aren’t valuable. Such things are valuable because they help with long-term thinking about the future of technology.

      As for your second paragraph, well, if you think we need amount D99 of detail, by all means say so and explain why. But I don’t think that’s much of a criticism of the paper. They report talking with a lot of experts about the needed amount of detail, and the people they talked to generally thought it would be around what they call the 4-6 level (what you might call D1 or D2). Now, that may not be much of a guarantee that the 4-6 level is all that will be necessary, but there are too many people out there for them to address everyone’s viewpoint in one paper, and given that they did try to take many other people’s viewpoints into account, it’s unreasonable to complain about their failure to spend more time on your viewpoint.

      • eigenperson

        I have read parts of their paper, but not the whole thing. I certainly can’t say anything about the parts I haven’t read, because those parts could literally say anything at all. In fact, it seems like much of the paper consists of a review of existing knowledge and technology. I really don’t have a problem with that kind of stuff. In fact, it is a useful resource.

        The only parts I can comment negatively on are a handful of parts I have read that go beyond that, and to my eyes these parts read like speculative fiction (to be sure, carefully-researched speculative fiction). For example, if I turn to a pseudorandom page (in this case, p. 29), I find this quote:

        Virtual test animals, if they could be developed to a sufficient degree of realism, would be high in demand as faster and less expensive testing grounds for biomedical hypotheses (Michelson and Cole, 2007; Zheng, Kreuwel et al., 2007)

        I have no doubt this sentence is true, but the combination of the words “if” and “sufficient” make it essentially vacuous. Obviously, sufficiently advanced virtual test animals could substitute for real animals in any application whatsoever, because “sufficiently” expands to encompass whatever qualities are required.

        What is most infuriating here, at least to me, is that two papers are cited for this completely vacuous statement. I don’t object to Sandberg and Bostrom citing the literature, of course; what bothers me is that (1) two papers comprising at least 5 authors were required to establish this obvious and vacuous statement, and (2) those papers were considered worthy of note, enough so that a paragraph had to be inserted into the paper to mention their results.

        Here’s another one, from page 65:

        About 200 chemical species have been identified as involved in synaptic plasticity, forming a complex chemical network. However, much of the complexity may be redundant parallel implementations of a few core functions such as induction, pattern selectivity, expression of change, and maintenance of change (where the redundancy improves robustness and offers the possibility of fine-tuning) (Ajay and Bhalla, 2006).

        Here again we have complete speculation. The qualifying words here are “much” and especially “may”, which make the sentence content-free. And again we have a citation for this piece of speculation.

        While I recognize that even vacuous statements like these have some value, because they focus attention on certain possibilities, it’s important to recognize that they are merely possibilities, and that there is little to no evidence at hand.

        Of course, hypothesizing is an important part of science and should not be neglected, but what is going on here is a bit more than hypothesizing. Sandberg and Bostrom create or relate a vast number of hypotheses in their paper, all or most of which have to be true for WBE to be feasible. Furthermore, it is clear that most members of this family of hypotheses were invented with the express purpose of enabling WBE. This, to me, is an excellent way to write speculative fiction, where the author comes up with an intriguing and provocative scenario (e.g. “Aliens transform Mars into a spaceship”) and then invents a large amount of speculative science to bring the scenario into the realm of possibility. However, this approach is as inappropriate for science as it is appropriate for science fiction.

      • eigenperson

        After posting I realized that I failed to address your second paragraph.

        I don’t expect Sandberg and Bostrom to address my point of view on the issue. The reason I don’t expect them to address it is that if my point of view is correct, WBE is basically flat-out infeasible. Therefore, my point of view has no place in the speculative world they are constructing in the paper. I understand that if their paper is to be written at all, it can only address my point of view by either falsifying it, denying it, or hoping it’s false. If they can falsify it, then I guess I’m wrong. But they haven’t falsified it here — they’ve gone for something midway between “denying” and “hoping it’s false” instead.

        In short, I don’t object to their dismissal of my idea in their paper, since there could be no paper without that dismissal. What I object to is the fact that “hoping it’s false” is apparently good enough in this field. It certainly isn’t in mine (mathematics)!

  • Jean

    I would also agree with eigenperson and PZ about the level of detail that would be needed, which seems to be minimized or ignored.

    And I can’t help but think about the Raelian equivalent when I read about uploading. It’s not presented in the same context, but I wonder if it’s mainly a rationalization of what is basically (and unconsciously?) a desire for eternal life.

  • Midnight Rambler

    Okay, I just skimmed through it (I just came across this post, so I didn’t have time to read the whole thing). No doubt it was difficult to write even though it’s wrong, just like jazz is difficult to play but still sounds like garbage. The key problem is on pp. 13-14, where they talk about the level of resolution needed. This is where they reveal their (and the entire panel’s) ignorance of how brains work, and of the absolute impossibility of achieving what they claim.

    They say they think they can get usable scans with level 6 (measuring metabolites and neurotransmitters in compartments, but not needing protein and gene expression levels; and some think even less is needed!). In reality, to be able to reconstruct things like thoughts and memories – i.e., a brain – you would need at least level 10 (molecule positions, including ions), and more likely 11 (quantum states). Not only is that impossible with any currently foreseeable technology, it requires that the brain be active and alive – once it’s dead, everything starts to fall apart immediately, before you have the slightest amount of time to start slicing and dicing.

    Note that they spend very little time in this large paper discussing their rationale for why they chose level 4-6 as the target. Most likely, because anything beyond 7 is going to be virtually impossible even with a dead brain, and anything from 9 and up becomes impossible except with a live one. All of which makes the rest of the discussion moot.

    • http://www.facebook.com/chris.hallquist Chris Hallquist

      I suspect the strength of particular synaptic connections was meant to be included in level 5, which includes “membrane states (ion channel types, properties, state).” But given that that seems to be governed by ion channel concentrations, maybe Sandberg and Bostrom should be interpreted as including that in level 7.

      In any case, though, I have no idea why you’d think level 10-11 is necessary, unless you’re imagining that the dynamics of the emulation would just be the laws of physics alone. But as I understand them, Sandberg and Bostrom imagine the dynamics of the emulation incorporating empirical generalizations about the behavior of the components. (I wonder if that’s the big confusion here: people assuming the dynamics of an emulation would necessarily just be the laws of physics.)
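
      To illustrate what I mean by “empirical generalizations” (the sketch below is mine, not anything from Sandberg and Bostrom), here’s a standard leaky integrate-and-fire neuron in Python. Its update rule is a fitted, high-level summary of membrane behavior; no individual ions or molecules appear anywhere:

      ```python
      # Leaky integrate-and-fire: an empirical generalization about
      # neuron behavior, not a molecule-level physics simulation.
      def lif_step(v, i_in, dt=0.1, tau=10.0, v_rest=-65.0,
                   v_thresh=-50.0, v_reset=-70.0, r=1.0):
          """One update of dv/dt = (-(v - v_rest) + r * i_in) / tau."""
          v = v + dt * (-(v - v_rest) + r * i_in) / tau
          if v >= v_thresh:          # fitted firing rule, not physics
              return v_reset, True   # spike, then reset to v_reset
          return v, False

      v, spikes = -65.0, 0
      for _ in range(1000):          # 1000 steps of 0.1 ms = 100 ms
          v, spiked = lif_step(v, i_in=20.0)
          spikes += spiked
      print(f"{spikes} spikes in 100 ms of simulated time")
      ```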

  • consciousness razor

    It’s precisely because we wouldn’t be doing molecule-for-molecule emulation that we wouldn’t need molecule-for-molecule emulation of the environment to get this to work.

    How much can we honestly extrapolate about what is necessary (for whatever it is) from an informal poll at a workshop, if we want to do more than handwaving and get the real thing to actually work? PZ probably didn’t read the paper very carefully, and perhaps he should. Do you think doing so means he would have agreed with the results of a poll?

    What do you, or some people at a workshop, think levels 7-11 are unnecessary for? (Serious question. I couldn’t figure it out from reading that part of the paper.)

    What do they do in the brain which we wouldn’t need (for making self-aware slaves in a cheap simulated box to do our bidding, or maybe something which could be justified as morally acceptable if we’re tired of handwaving)? Can we just leave out “concentrations of proteins and gene expression levels” altogether when we make a functioning cognitive system? And if so, how do we know that, other than some people in the field who really want it to happen having a hunch about it? If that sort of thing does need to be modeled but only approximately, then what about it needs to be approximated, how is that known, and how could that be done?

    • http://www.facebook.com/chris.hallquist Chris Hallquist

      If you look at PZ’s second comment, his “alternative” (which is actually compatible with Sandberg and Bostrom’s general view) is to focus on modeling things the size of the nucleus accumbens, which puts his view up at level 2, maybe level 3. Maybe if he thought about it more he’d agree with the results of the poll, IDK.

      • eigenperson

        I can’t speak for PZ, but from reading his posts I suspect his view is something like this:

        You can do a coarse model of the brain, and in that way get something that can process information much like a human. But it would be, at best, a generic “human-like entity” rather than a model of a particular person (which is what uploading needs).

        Or, you can instead do a medium-fine model of the brain, and in that way get something that behaves somewhat like a specific human. But it still wouldn’t be a high-enough fidelity copy to provide psychological continuity, which I’m pretty sure the uploaders also require.

        Or, you can instead do a very fine-grained model of the brain, except you can't, because you don't have the computing power. [At this moment I can't recall whether PZ thought it was possible even in theory to have a digitized brain provide psychological continuity; I remember having an argument about this issue in Pharyngula comments, but I can't remember whether PZ agreed or disagreed with me.]

        • http://www.facebook.com/chris.hallquist Chris Hallquist

          That would make some sense of his statements. Though his objections to finer-grained simulations seemed to be more about whether we could scan accurately enough.

          • wanderfound

            “Scanning accurately enough” in this situation is a task likely to be inaccessible to anyone short of one of E.E. “Doc” Smith’s Arisians.

  • echidna

    From a footnote in the paper:

    Since even a three neuron system can become chaotic (Li, Yu et al., 2001) it is very plausible that the brain contains chaotic dynamics and it is not strictly simulable.

    I read PZ as saying that, from a biological perspective, the brain is very unlikely to be simulable. The paper under discussion covers increasing computational power at length, but not so much the biological modelling side.

    The research is in its infancy:

    Even a database merely containing the complete “parts list” of the brain, including the morphology of its neurons, the locations, sizes and types of synaptic connections, would be immensely useful for research.

    The paper assumes that modelling the function of the brain is a tractable problem. PZ is saying that it is intractable, due to all sorts of messy biological concerns.

    I don’t think it’s fair to say that PZ hasn’t read the paper carefully. If the processes are chaotic, and PZ’s comment about the universe implies that he thinks they are, then the whole thing is as impossible as a perpetual motion machine. It is not necessary to read every word of a paper on a perpetual motion machine in order to dismiss it, and PZ has not shown any signs of misunderstanding the paper.

    • http://www.facebook.com/chris.hallquist Chris Hallquist

      So “chaotic” means “highly sensitive to initial conditions.” The question then is what aspects of behavior are chaotic? It wouldn’t surprise me if the brain is chaotic in the sense that some of the more unpredictable aspects of our behavior are basically deterministic (not influenced by quantum effects), but highly sensitive to initial conditions at some level above the quantum level.

      However, that doesn’t mean arbitrarily small changes in a brain would be enough to rob someone of their ability to write computer code or whatever. So merely being chaotic in some ways is not a barrier to getting brain emulations able to fill human jobs.
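
      To illustrate the distinction, here’s the logistic map, a standard toy chaotic system (not a brain model; the example is mine): a tiny perturbation to the initial condition diverges quickly, yet coarse statistics of the trajectory barely move.

      ```python
      # Chaos: extreme sensitivity to initial conditions, yet the
      # coarse, averaged behavior stays essentially the same.
      def trajectory(x0, r=3.9, n=10_000):
          xs = [x0]
          for _ in range(n):
              xs.append(r * xs[-1] * (1 - xs[-1]))
          return xs

      a = trajectory(0.200000)
      b = trajectory(0.200001)  # perturbed by one part in 200,000

      print(f"divergence after 50 steps: {abs(a[50] - b[50]):.3f}")
      print(f"mean of a: {sum(a) / len(a):.3f}")
      print(f"mean of b: {sum(b) / len(b):.3f}")
      ```

      The individual trajectories decorrelate completely, but the averages (the analogue of the average quality of the code produced) stay nearly identical.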

      • echidna

        Being chaotic would not rob you of the ability to write computer code, but it might change some of the decisions you make while doing so. Think of water flow through a pipe. The water will go through the pipe whether the flow is laminar or turbulent, but you can’t model the fine detail of turbulent flow.

        • http://www.facebook.com/chris.hallquist Chris Hallquist

          Yeah. But for the issues I’ve been discussing, this is not a big deal if the changes are, on average, neutral WRT the quality of the code being produced.

          • echidna

            Chris, the scope of my comments doesn’t include what you wrote in the earlier post; the scope is the paper and your comments directly about it. Modelling people’s behaviour based on studying the brain to be able to do jobs is a completely different matter from a brain upload.

      • wanderfound

        A brain emulation that isn’t a replica of a specific person isn’t an upload, it’s an AI.

        As far as I know, PZ has never suggested anywhere that AI is impossible or even exceptionally difficult. Expert systems can already outperform human cognition in many domains.

  • Dunc

    Considering that implementations of genetic algorithms on FPGAs with as few as 100 gates have produced working systems which we can’t actually simulate, understand, or reliably transfer to different supposedly-identical chips, I’m really rather sceptical about the whole concept of successfully modelling an entire human brain from looking at it in bits…

    • D-Dave

      Considering that implementations of genetic algorithms on FPGAs with as few as 100 gates have produced working systems which we can’t actually simulate, understand, or reliably transfer to different supposedly-identical chips, I’m really rather sceptical about the whole concept of successfully modelling an entire human brain from looking at it in bits…

      ^ This!

      I’m apparently not the only one who is reminded of the work of Dr. Adrian Thompson. Here’s a summary of the experiment – it’s quite fascinating.

  • Alex SL

    I haven’t read the paper, nor do I know enough about programming and hardware to know exactly how unrealistic uploading is, but I think at the end you are kind of conflating two issues.

    The point of simulating a few designers in a sped-up environment is, if you get right down to it, just building a design AI. I have never understood why we should even want to construct human-like AI, because it seems so pointless. Want a good designer AI? Make it good at designing; it does not need to be able to appreciate poetry or tie shoelaces. So to build AI that makes all of us superfluous, we do not need to be able to simulate human brains, and if that is what this is about, the whole is-brain-simulation-possible angle is irrelevant.

    On the other hand, I thought “uploading” refers to something entirely different: Making a copy of your mind so that “you” get to be immortal.

    (Phrasing it this way shows, of course, the entire absurdity of the premise, which would have to rely on dualism to make sense. Even if it worked, it would be copying. You would wake up and wonder why nothing has changed, and then you grow old and die while your simulated twin continues to exist until the next 1859-level solar storm, when they will be wiped out. Or if it is a destructive process, you die to have a virtual twin of yourself made. But that is of course not the topic of this post; feasibility is.)

  • Shadow of the Hedgehog

    Uh-oh. Somebody is going to be quoted out of context in Comic Sans, with a Monty Python twit illustration in the upper left corner, and butt-hurt pharyngaloids will soon swarm the comments section.

  • http://newstechnica.com David Gerard

    This post appears to say “Hah, PZ must not have read the paper properly, because he fails to address every point it raises well enough to convince me personally!” This reads as similar to the objections of theologians to Dawkins for not taking their every detail seriously. Similarly, he doesn’t actually have to convince you; he just has to explain why the speculation is ridiculous.

    Really, it’d be great if this stuff worked. I’d love it to work.

    • http://www.facebook.com/chris.hallquist Chris Hallquist

      No no no… the tldr on this is that PZ suggests an “alternative” which doesn’t actually appear to be an alternative to what Sandberg and Bostrom are suggesting, which in turn suggests he didn’t understand them well.

      • http://newstechnica.com David Gerard

        I’m unconvinced. He’s raising practical unfeasibility, you’re countering that there isn’t philosophical impossibility – and adding Courtier’s Reply to substantiate your claim that he hasn’t answered philosophical impossibility, and that this is relevant. The point, however (and why the Dawkins vs theologians comparison is apposite) is that practical unfeasibility screens off a lack of philosophical impossibility, or even a lack of provable violation of physics, in practical terms.

        That is, the claim “you haven’t shown philosophical impossibility!” does not make the possibility larger than epsilon and thus worthy of even a drop of attention. Even demanding people take the idea seriously is close to a Pascal’s scam in terms of payoff for granting attention to it.

        • http://www.facebook.com/chris.hallquist Chris Hallquist

          So PZ and I agree that there’s no philosophical impossibility here. I stressed that in my first post because some people do object philosophically to uploading. But I’m not using “there’s no philosophical impossibility” here as a rebuttal to PZ.

          Nor am I merely complaining that PZ didn’t read Sandberg and Bostrom very carefully. My main point is that some of PZ’s remarks suggest he doesn’t actually have any disagreement with them.

          • http://newstechnica.com David Gerard

            No, PZ has one important disagreement: that this idea is presently worth taking seriously at all. He is making the case that it’s so stupidly infeasible that it really isn’t.

  • sqlrob

    There was a comment on that thread that I think used an excellent analogy.

    Take a smart phone, “upload” it as they propose. Now, what’s the phone number, name and avatar of the 3rd person on the list?

    That’s a much more tractable problem than the brain, yet it’s still not particularly easy, if even possible with current tech.

    • http://newstechnica.com David Gerard

      I raised that point on LessWrong. (Chris is over in said discussion thread as well.) How do you keep the slicing-up from destroying the one bit you know you want, i.e. the phone’s flash memory?

      On what basis are you (in general) so confident there is nothing important to identity in the brain that is similarly fragile? I note another comment on LessWrong a couple of months ago from a neuroscientist who seems to consider it really immediately trivially obvious to people in the field that the encoding of identity in the brain is incredibly fragile, and that you can’t switch off the brain and get it back (coma not counting as switching it off). The comment was specifically about cryonics, but certainly applies to any other method of preparing the brain for slicing and analysis. The practical scientific substance of PZ’s post raises very similar issues, based on what PZ actually does in the lab every day.



