Apparently, PZ didn’t read Sandberg and Bostrom very carefully

So… when I first read PZ’s recent post on uploading, it wasn’t quite clear to me what his objections were, and I thought maybe he had some good points. But reading his responses in the comments thread, it quickly became clear that while PZ claims to have read Sandberg and Bostrom’s paper, he must not have read it very carefully.

There was, though, one warning sign in PZ’s post itself. PZ writes, “I read the paper he recommended: it’s by a couple of philosophers.” Actually, while the Future of Humanity Institute is associated with the Faculty of Philosophy at Oxford, both Bostrom and Sandberg have backgrounds in computational neuroscience (in Sandberg’s case, it’s what he got his Ph.D. in). Furthermore, they explain in the paper that it incorporates suggestions from experts in many different fields who attended a 2007 workshop at which a previous version of the paper was circulated.

Now I want to look at PZ’s objections to the paper. PZ starts off by saying, correctly, that we don’t currently have the technology to scan the brain with sufficient accuracy and detail. But the question is whether and when we’ll develop that technology. Possibly intended as a comment on that (it’s a little unclear), PZ writes:

What the heck is going to be recorded? You need to measure the epigenetic state of every nucleus, the distribution of highly specific, low copy number molecules in every dendritic spine, the state of molecules in flux along transport pathways, and the precise concentration of all ions in every single compartment. Does anyone have a fixation method that preserves the chemical state of the tissue? All the ones I know of involve chemically modifying the cells and proteins and fluid environment. Does anyone have a scanning technique that records a complete chemical breakdown of every complex component present?

PZ also comments on the issue of enhancement by running emulations with faster hardware:

You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You’ve assumed you’ve got this gigantic database of every cell and synapse in the brain, and you’re going to just tweak the clock speed…how? You’ve got varying length constants in different axons, different kinds of processing, different kinds of synaptic outputs and receptor responses, and you’re just going to wave your hand and say, “Make them go faster!” Jebus. As if timing and hysteresis and fatigue and timing-based potentiation don’t play any role in brain function; as if sensory processing wasn’t dependent on timing. We’ve got cells that respond to phase differences in the activity of inputs, and oh, yeah, we just have a dial that we’ll turn up to 11 to make it go faster.

Now the comments. Andrew G. wrote:

Now, I have no particular sympathy for the brain-uploading crowd (I think they’re massively underestimating the problem), but this criticism betrays an absolute ignorance of computer simulation:

You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You’ve assumed you’ve got this gigantic database of every cell and synapse in the brain, and you’re going to just tweak the clock speed…how?

It’s actually HARDER to run the simulation at a speed constrained to match real time than it is to just let it freewheel at the speed determined by the computational requirements and the available CPU power. Since the interactions between all the subunits are themselves part of the simulation, this presents absolutely no problems.

If you’re going to criticize the brain-uploaders for ignorance of biology, then it’s probably a good idea to avoid displaying an equivalent ignorance of programming.
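Andrew G. is making an elementary point about how simulations are written. In code, the contrast looks like this (a toy sketch of my own, with placeholder dynamics, not anything from his comment): the freewheeling loop is the simple one, and matching real time is the version that needs extra machinery:

```python
import time

DT = 1e-3  # simulated seconds per step

def step(state, dt):
    """Placeholder dynamics; all the subunit interactions live in here,
    expressed in simulated time, so neither loop below 'disrupts' them."""
    return state + dt * (1.0 - state)

# Freewheeling: just step the world; speed is whatever the hardware gives.
state, t_sim = 0.0, 0.0
while t_sim < 5.0:          # five seconds of simulated time
    state = step(state, DT)
    t_sim += DT

# Real-time: the SAME loop, plus extra throttling machinery.
state, t_sim = 0.0, 0.0
start = time.monotonic()
while t_sim < 5.0:
    state = step(state, DT)
    t_sim += DT
    lag = start + t_sim - time.monotonic()
    if lag > 0:
        time.sleep(lag)     # stall until the wall clock catches up
```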

To Andrew G.’s comment, PZ responded:

No, you don’t understand. Part of this magical “scan” has to include vast amounts of data on the physics of the entity…pieces which will interact in complex ways with each other and the environment. Unless you’re also planning to build a vastly sped up model of the whole universe, you’re going to have a simulation of brain running very fast in a sensory deprivation tank.

Or do you really think you can understand how the brain works in complete isolation from physiology, endocrinology, and sensation?

Responding to another commenter, PZ wrote:

I still don’t understand why biologists insist that you have to do a perfect simulation, down to the smallest molecule, and then state the obvious fact that it’s not going to happen.

Errm, because that’s what the singularitarians we’re critiquing are proposing? This whole slice-and-scan proposal is all about recreating the physical components of the brain in a virtual space, without bothering to understand how those components work. We’re telling you that approach requires an awfully fine-grained simulation.

An alternative would be to, for instance, break down the brain into components, figure out what the inputs and outputs to, say, the nucleus accumbens are, and then model how that tissue processes it all (that approach is being taken with models of portions of the hippocampus). That approach doesn’t require a detailed knowledge of what every molecule in the tissue is doing.

But the method described here is a brute force dismantling and reconstruction of every cell in the brain. That requires details of every molecule.

And responding to a third commenter, he wrote:

You’re not increasing the speed of the engine, you’re increasing the speed at which the simulated reality of the engine runs.

You’re still not getting it. You’ve got a simulator modeled after all the interacting phenomena in a real brain. You’re pretending that there’s a simple slider labeled “speed” that you can adjust, but it isn’t there because of all the non-linearity in the system. Things don’t simply scale up in the same way in every parameter.

Sure, you can just arbitrarily set the time-scale of the simulation, but then you mess up the inputs from outside the simulation. And you can’t model a human brain in total I/O isolation without it melting down into insanity.

There are two issues here: whether emulation as described by Sandberg and Bostrom would work, and whether speeding up the emulation could work. PZ’s comments on both issues seem to rest on the same misunderstanding. The “alternative” PZ proposes in his second comment isn’t an alternative to what Sandberg and Bostrom are proposing, and that should have been clear from the table on p. 13 and the comments on the following page (including the fact that most participants at the workshop thought only emulation at the 4-6 level would be necessary).

It’s precisely because we wouldn’t be doing molecule-for-molecule emulation of the brain that we wouldn’t need molecule-for-molecule emulation of the environment to get this to work. In fact, the emulations of those other things could be much less fine-grained than the brain emulation. Sandberg and Bostrom devote pp. 74-78 to these subjects, and suggest that several hundred TFLOPS may be sufficient: “While significant by today’s standards, this represents a minuscule fraction of the computational resources needed for brain emulation.” (And I have no idea why PZ thinks you’d need to emulate the whole universe; just emulate a sufficiently large sealed environment with indestructible walls.)
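Here’s a rough sketch of what that multi-resolution setup could look like: two toy models of my own invention (step_brain, step_room), advancing on the same simulated clock at very different step sizes, with the cheap, coarse environment supplying the sensory input PZ is worried about:

```python
BRAIN_DT = 1e-4   # 0.1 ms steps for the fine-grained brain emulation
ENV_DT = 1e-2     # 10 ms steps: plenty for a coarse sealed-room environment
STEPS_PER_ENV = round(ENV_DT / BRAIN_DT)   # 100 fine steps per coarse step

def step_brain(v, sense, dt):
    """Toy fine-grained 'neural' update (placeholder dynamics)."""
    tau = 0.02                        # time constant, in simulated seconds
    return v + dt * (sense - v) / tau

def step_room(temp, motor, dt):
    """Toy coarse environment: room temperature nudged by motor output."""
    return temp + dt * (0.1 * motor - 0.01 * (temp - 20.0))

v, temp, t_sim = 0.0, 20.0, 0.0
while t_sim < 1.0:                    # one simulated second, shared clock
    sense = temp / 40.0               # sample the coarse world once...
    for _ in range(STEPS_PER_ENV):    # ...then take 100 fine brain steps
        v = step_brain(v, sense, BRAIN_DT)
    temp = step_room(temp, v, ENV_DT)
    t_sim += ENV_DT

print(f"after 1 simulated second: v={v:.3f}, room temp={temp:.2f}")
```

The environment here is stepped at a hundredth of the brain’s rate (and in a real setup would be far cheaper still per step), which is the shape of Sandberg and Bostrom’s estimate: the virtual world is a rounding error next to the emulation itself.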

And the same point deals with the objection to speeding up the emulation: since the environment is emulated along with the brain, both run on the same simulated clock, and speeding up the hardware speeds everything up in lockstep. For example, a company wanting to get some design project done faster could place a bunch of emulated designers in an emulated corporate campus with sleeping, eating, and recreational facilities, and speed the entire thing up as much as hardware allows. Communication with meatspace would be limited to non-real-time messaging, but so much of our communication is non-real-time anyway that I doubt that would be a big problem.
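As a toy illustration of both halves of that (the campus dynamics and every number here are my own inventions, not the paper’s; the point is only that the speedup falls out of the hardware, and that crossing the boundary is timestamp bookkeeping):

```python
import time

def run_campus(subjective_seconds, dt=1e-2):
    """Advance a stand-in 'campus' world by a span of simulated time,
    freewheeling as fast as the hardware allows (no throttling)."""
    state, t_sim = 0.0, 0.0
    while t_sim < subjective_seconds:
        state += dt * (1.0 - state)   # placeholder dynamics
        t_sim += dt
    return t_sim

wall_start = time.monotonic()
t_sim = run_campus(8 * 3600.0)        # one 8-hour subjective workday
wall = time.monotonic() - wall_start
speedup = t_sim / wall                # set by the hardware, not by a dial
# (absurdly high here only because the placeholder dynamics are trivial)

print(f"8 subjective hours took {wall:.1f} wall-clock seconds "
      f"(speedup ~{speedup:.0f}x)")
print(f"A reply that takes 1 wall-clock hour lands {speedup:.0f} "
      f"subjective hours after sending: fine for email, hopeless for calls.")
```

Email-style exchanges survive that asymmetry fine; anything conversational doesn’t, which is all the “non-real-time messaging” caveat amounts to.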

And if the corporate campus emulation idea seems morally dubious to you, well, I did note in my clarifications post that I’m apprehensive about some of this stuff. But it should be obvious that just because something is morally dubious doesn’t mean it won’t happen.
