Why I'm Skeptical of the Singularity

In 1965, Gordon Moore – who would go on to co-found Intel – made a famous observation: that the complexity of computer hardware (to be precise, the number of transistors that can be packed onto an integrated circuit) tends to double every two years. In the four decades since, Moore’s law has held true with remarkable accuracy. The technology to fabricate ever-smaller logic elements has steadily improved, leading to astounding increases in computer speed. The memory, bandwidth, and processing power available today in even an ordinary desktop machine surpass those of the most powerful computers used by government and industry a generation ago.

Some sci-fi writers and futurists have foreseen a truly strange consequence of this progress. They anticipate that, assuming the trend of exponential growth continues, we will eventually – perhaps soon – reach the point where we can create machines with more computing power than a human brain. This innovation will lead to true artificial intelligence, machines with the same kind of self-consciousness as human beings. And reaching this point, it is believed, will trigger a technological explosion, as these intelligent machines design their own, even more intelligent successors, just as we designed them. Those successors will in turn design yet more intelligent successors, and so on, in an explosive process of positive feedback that will result in the creation of truly godlike intelligences whose understanding far surpasses anything that ordinary human minds can even conceive of. This event is dubbed “the Singularity” by those who imagine it, for like the singularity of a black hole, it is a point where all current understanding breaks down. Some prognosticators, such as Ray Kurzweil (author of The Age of Spiritual Machines), think the Singularity is not only inevitable, but will occur within our lifetimes.

As you might have guessed from the title of this post, I’m not so optimistic. The Singularity, like more than a few other transhumanist ideas, has more than a whiff of religious faith about it: the messianic and the apocalyptic, made possible by technology. History has a way of foiling our expectations. The number of people who have confidently predicted the future and have been proven completely wrong is too great to count, and so far the only consistently true prediction about the future is that it won’t be like anything that any of us have imagined.

The largest immediate obstacle I see to Singularity scenarios is that we don’t yet understand the underlying basis of intelligence in anything close to the level of detail necessary to recreate it in silicon. Some of the more hopeful believers predict a Singularity within thirty years, but I think such forecasts are wildly over-optimistic. The brain is a vast and extremely intricate system, far more complex than anything else we have ever studied, and our understanding of how it functions is embryonic at best. Before we can reproduce consciousness, we need to reverse-engineer it, and that endeavor will dwarf any other scientific inquiry ever undertaken by humanity. So far we haven’t even grasped the full scope of the problem, much less outlined the form a solution would have to take. Depending on progress in the neurological sciences, I could see it happening in a hundred years – I doubt much before that.

But that, after all, is just an engineering problem. Even discounting it, there’s a more profound reason I doubt a Singularity will ever occur. The largest unexamined assumption of Singularity believers is that faster hardware will necessarily lead to more intelligent machines, so that all that’s required to create a godlike intelligence is to fit more and more transistors on a chip. In response, I ask a simple question: What makes you believe the mere accumulation of processing power will produce greater understanding of the world?

Fast thinking may be a great way to generate hypotheses, but that’s the less important half of the scientific method. No matter how quickly it can think, no intelligence can truly learn anything about the world without empirical data to winnow and refine its hypotheses. And the process of collecting data about the world cannot be accelerated to arbitrary rates.

The pro-Singularity writings that I’ve read all contain the implicit and unexamined assumption that a machine intelligence with faster processors would be not just quantitatively but qualitatively better, able to deduce facts about the world through sheer mental processing power. Obviously, this is not the case. Even supercomputers like Blue Gene are only as good as the models they’re programmed with, and those models depend upon our preexisting understanding of how the world works. The old computer programmer’s maxim – “garbage in, garbage out” – succinctly sums up this problem. The fastest number-cruncher imaginable, if given faulty data, will produce nothing of meaningful application to the real world. And it follows that the dreamed-of Singularity machines will never exist, or at least will never be the godlike omnisciences they’re envisioned as. Even they would have to engage in the same process of slow, painstaking investigation that mere human scientists carry out.

This isn’t to say that artificial intelligences, if we ever create them, will be entirely useless. In virtual-reality software worlds, which are precisely defined and completely knowable, they might be able to create wonderful things. In the real world, I foresee them flourishing in the niche of expert systems, able to search and correlate all the data known on a topic and to suggest connections that might have escaped human beings. But I reject the notion that, as general-purpose intelligences, they will ever be able to far surpass the kind of understanding that any educated person already possesses.

  • http://deconbible.blogspot.com bbk

    Oh no! Wait for it… here come the commenters :P

  • http://www.musieonart.com nfpendleton

    While I understand your point, I’m still hopeful for some form or another of “trans-humanism.” I think it’s only natural that a creature as adaptive and inventive as the human would become the master of its own evolution, in any number of ways all at the same time. I personally think that tech advancement and ecological pressures will eventually require some such union. Not quite Kurzweil’s radical vision, nor probably in his short timeframe, but I’d be wary of saying never.

  • Samuel Skinner

    Hey- I only support those guys for the computer games! We are going to have photorealism soon. I can’t wait for the features and intelligence to catch up.

    As for the singularity… well, robots and processing power will lead to big booms, but given the fact that people increase their demands as more resources are made available, I’m not foreseeing a utopia any time soon. Of course, being filthy rich is good enough for me.

  • Adrian

    There is reason to think that Moore’s curve has flattened out. We continue to get improvements via cache and faster memory access, but CPU clock speeds have not increased in years. The problems of heat dissipation and energy use are proving too big to hurdle quickly. Instead, the gains have come from multiple cores, but it’s extremely difficult to write programs that can take advantage of multiple processors. In the past you could just boost the CPU speed and everything would run faster, but if you slap on more cores, you have to do some sophisticated, error-prone and time-consuming work to add properly synchronized threads. We’ll see systems begin to take advantage of this, but the slope of the curve will flatten more and more, unfortunately.
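    To make the difficulty concrete, here is a minimal sketch (in Python, purely illustrative) of the classic pitfall: threads incrementing a shared counter without a lock can lose updates, and the fix is exactly the kind of explicit synchronization described above.

        import threading

        counter = 0
        lock = threading.Lock()

        def increment_unsafe(n):
            global counter
            for _ in range(n):
                counter += 1          # read-modify-write: not atomic, updates can be lost

        def increment_safe(n):
            global counter
            for _ in range(n):
                with lock:            # serialize the read-modify-write
                    counter += 1

        def run(worker, n=100_000):
            global counter
            counter = 0
            threads = [threading.Thread(target=worker, args=(n,)) for _ in range(4)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            return counter

        print(run(increment_unsafe))  # may fall short of 400000, depending on interpreter and timing
        print(run(increment_safe))    # always 400000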

    If we look to the supercomputers as a vision of the future, we can see that massive parallelization is the current path, and that won’t be easy for most applications.

  • http://neuraltransmission.wordpress.com Neural Transmissions

    These are old arguments.

    The largest immediate obstacle I see to Singularity scenarios is that we don’t yet understand the underlying basis of intelligence in anything close to the level of detail necessary to recreate it in silicon.

    This is merely an argument about when greater-than-human intelligence will be instantiated in machines. With exponential increases in technology, computational power, and knowledge, it is only a matter of time — some amount of time, even if it’s not in our lifetimes — before we unravel the remaining mysteries of consciousness and can emulate the human brain at nanoscale resolution.

    The largest unexamined assumption of Singularity believers is that faster hardware will necessarily lead to more intelligent machines, so that all that’s required to create a godlike intelligence is to fit more and more transistors on a chip. In response, I ask a simple question: What makes you believe the mere accumulation of processing power will produce greater understanding of the world?

    No reasonable Singularitarians believe that. They are quite aware that learning algorithms will be necessary to produce artificial general intelligence. Even if we don’t know how to write the code, we could emulate human brains, or we could use genetic algorithms to build them (after all, it is already proven that evolution can build intelligence), although the last option is not desirable. However, I do think we will be able to write algorithms using probabilistic reasoning that can vastly outperform the innate psychological biases of human brains.
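    As a toy illustration of the genetic-algorithm idea (a minimal Python sketch, obviously nothing close to evolving intelligence): random variation plus selection climbs toward a target with no designer writing the answer in.

        import random

        TARGET = [1] * 32                   # a toy "fitness peak" to evolve toward

        def fitness(genome):
            return sum(g == t for g, t in zip(genome, TARGET))

        def mutate(genome, rate=0.05):
            return [1 - g if random.random() < rate else g for g in genome]

        # start from random genomes; each generation, keep the fittest and mutate copies of it
        population = [[random.randint(0, 1) for _ in range(32)] for _ in range(20)]
        for generation in range(200):
            best = max(population, key=fitness)
            if fitness(best) == len(TARGET):
                break
            population = [mutate(best) for _ in range(20)]

        print(generation, fitness(best))    # usually reaches the target within a couple hundred generations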

    The pro-Singularity writings that I’ve read all contain the implicit and unexamined assumption that a machine intelligence with faster processors would be not just quantitatively but qualitatively better, able to deduce facts about the world through sheer mental processing power.

    Then you haven’t done a lot of reading. I direct you to the Singularity Institute for Artificial Intelligence and the works of Eliezer Yudkowsky just to start. Designing the right code is precisely the problem they want to solve.

    In the real world, I foresee them flourishing in the niche of expert systems, able to search and correlate all the data known on a topic and to suggest connections that might have escaped human beings.

    Aggregating expert systems is one way of producing general intelligence. After all, the human brain isn’t universally domain-general, either. It consists of over a hundred domain-specific modules.

    But I reject the notion that, as general-purpose intelligences, they will ever be able to far surpass the kind of understanding that any educated person already possesses.

    Then you lack imagination. I direct you to the most common transhumanist example: as late as 1900, many people thought that heavier-than-air flight was impossible.

  • Jim Baerg

    & then there’s the (perhaps overly) pessimistic observation – “If the brain were simple enough for us to understand it, we would be too simple to understand it”.

    One of the major boosters of the singularity idea also considers reasons we might not get a singularity, including:

    A plausible explanation for “Singularity failure” is that we never figure out how to “do the software”

    http://www-rohan.sdsu.edu/faculty/vinge/longnow/

  • http://deconbible.blogspot.com bbk

    Ebonmuse, I always enjoy responding to your posts about the technological singularity. The first thing I always point out is that, as you’ve stated, the vast majority of it is science fiction. Science fiction is allowed to have a utopian or dystopian undertone; that is its entire appeal. I think you’re inflating the enjoyment that people get from their daydreams about this subject into something more sinister and anti-scientific. If we find out that people are selling their homes and investing in vaporware singularity start-ups, I would worry. But a couple of speeches at a TED convention and a slew of sci-fi novels don’t amount to much. What really concerns me are the hordes of Star Trek fans, because it is such a vapid, corny, and mind-numbing form of science fiction that it gives all of science fiction a bad name.

    That said, I think you are limiting your imagination about what is possible and the many ways that it could happen. Your criticism doesn’t really address what is possible from an engineering standpoint, and in some places it has a tinge of mysticism to it as well. The brain isn’t quite so mysterious and incomprehensible as you make it sound. From a scientific standpoint, we can estimate how much computing power it’s capable of, and we can predict the accuracy of the tools that we need to fully examine its workings without missing the details. Being able to make educated guesses about the size of these problems, we can apply the historical rate of technological growth and come up with a prediction for when we will, in fact, be able to reverse engineer these biological processes.

    Just a decade or so ago, the Human Genome Project was mocked as being destined for failure because, with the technology available at the time, mapping the genome would have taken centuries. It took a matter of years. The scientists who undertook that project were astute enough to know that, with Moore’s law, they could predict a much faster completion of their project. And as it turned out, they were more correct by taking technological growth into account than their critics were by treating technology as a constant. In this respect, Singularity speculation is bound to be more correct about what will be possible than critics who simply don’t grasp the magnitude of exponential growth. Moore’s law, to put it into an equation, is T = 2^(n/2), where T is the level of technology relative to today and n is the number of years elapsed. Plug that into your calculator and try out a few years to see how fast this grows. For now, maybe not for predictions out to 100 years, but for the foreseeable future within our lifetimes we should at least consider what will happen if this rate of growth remains valid.
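    Or let a few lines of Python do the arithmetic (a toy illustration of the same equation):

        # T = 2**(n/2): relative capability after n years of doubling every two years
        for n in (10, 20, 30, 50, 100):
            print(f"{n} years -> {2 ** (n / 2):,.0f}x today's level")

    Ten years buys a factor of 32; fifty years, about 34 million; a hundred years, about 10^15.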

    I said that the brain probably isn’t quite as complex and mysterious as you make it out to be. Consider that the chess algorithms that defeated grandmasters were designed to run on machines with the computational power of perhaps a gerbil. Your home computer has the processing power of something similar to a goldfish. I personally find that to be quite remarkable – when human ingenuity is applied to a hamster-sized computer, the humans can solve grandmaster-sized problems. When humans finally get to apply engineering to a human-sized computer, you can’t rule out that they’ll be able to do human-sized computations. The problem lies not so much in the sheer complexity and mysteriousness of the brain itself – it can in theory be reduced to a state machine – as in the size of the problem versus our available computational abilities.

    If it’s not enough for you to imagine a computer capable of human-sized computations, imagine one twice as powerful – and give it an extra two years to become a reality. At some point, you will have to admit, humans will be able to take even the simplest, most inefficient algorithm and apply a massive amount of computational power to solve problems that are currently impossible to solve. So yes, given nothing more than a sheer glut of additional transistors, we will be able to solve entirely new sets of problems that are currently unachievable. And that will in fact lead to technological growth that we currently have a very difficult time imagining.

  • John Nernoff

    I’m skeptical of the entire gamut of human intelligence and our ability to figure out any of the big things of the universe. After all, our brains weigh a measly 1300 grams, compared to untold trillions upon trillions of grams and components “out there.” There are black holes with diameters that exceed the orbital diameter of Pluto. There are billions of galaxies of billions of stars each. Dark matter and dark energy are further enigmas. The universe is expanding at an accelerating pace, unexpected only a few years ago. Every few years, it seems, greater and greater questions are being raised. And we have been around for a picayune 100,000 years, compared to the vastly longer age of everything — or are we wrong about that too?

    In keeping with the atheism here: god-did-it is the most ridiculous explanation of all. But I think there’s little use in our hoping that any true explanation of everything (aka a “theory of everything,” or TOE) will ever emerge from our little minds.

  • http://deconbible.blogspot.com bbk

    Adrian, I need to clarify what you’ve said about multi-core computers. If you’re talking about your typical quad-core Xeon business server, yes, you’re right: it’s difficult to get software from a multitude of commercial vendors, and custom software written by consultants, to seamlessly take advantage of the additional power. I have a difficult enough time training contractors to take advantage of simple programming language features, let alone to make intelligent use of multi-threaded applications. But some software, such as databases, virtual machine servers, and various enterprise architectures, absolutely thrives on multi-core systems right out of the box. Transparently to the programmer.

    If, on the other hand, you’re referring to the world’s leading supercomputers, which are currently built from thousands upon thousands of interconnected processors, and to the sophisticated weather, economic, chemical, fluid-dynamic, astrophysical, and countless other problems they’re being used to solve, you’d just be plain wrong. Multi-processor computing is a reality. Even if size constraints have limited what fits on a single piece of silicon, the number of transistors being produced for a buck is still following Moore’s law.

    Some problems are in fact impossible to solve using multiple processors. They simply can’t be subdivided and solved in parallel. However, there is no indication that something such as a simulation of the human brain is such a problem. I would say that massive parallelism is actually a requirement when it comes to an architecture where so many inter-connecting pieces have to function side by side and the final outcome may in fact depend on the exact sequence of events taking place.

    I hope that helps clarify the situation.

  • http://www.daylightatheism.org Ebonmuse

    Neural Transmissions’ comment is an excellent example of the transhumanist fallacy I discussed in my post, so I want to concentrate on it a bit:

    No reasonable Singularitarians believe that. They are quite aware that learning algorithms will be necessary to produce artificial general intelligence.

    This is exactly the point I’m making: Singularitarians believe that greater and greater intelligence can be achieved simply by improving the design of the artificial brain. I explained in my post why this is incorrect. To assume that faster processors necessarily equate to superior understanding of the external world is just belief in oracles in another form. It’s magical thinking, and it’s no less magical if we transfer the locus of competence from hardware to “learning algorithms”.

    No matter how fast you can think, you’re still intelligent only insofar as you possess accurate information about the external world. The best learning algorithm in the multiverse can still only learn the facts it’s presented with, and as I said, that means using the same slow, painstaking scientific method that human beings use now. You can’t gather data more quickly by increasing the clock speed. Nor can these hypothetical algorithms automatically differentiate between true and false information. Turn the whole Earth into computronium if you want, and you still won’t have solved the GIGO problem: seed your Singularity with incorrect data, and it will churn endlessly and produce massive volumes of beautiful, elegant hypotheses that are of no application whatsoever to the real world.

    I also wanted to comment on this by bbk:

    Just a decade or so ago, the Human Genome Project was mocked as being destined for failure because, with the technology available at the time, mapping the genome would have taken centuries. It took a matter of years.

    I don’t know who thought the Human Genome Project was going to take centuries, but sequencing the genome was always just a matter of reading off the nucleotide bases. It should be surprising to no one that we found ways to make the process more efficient.

    But, of relevance to my argument, does that mean that we understand human beings fully now? Not at all! The genome is highly compressed, and contains a lot more implicit information than explicit information. We’ve only just begun to truly unravel how bodies are built and how the products of the genome work, via enormous research projects such as the Human Proteome Organization, the Human Metabolome Project, and the Human Epigenome Project. We’re going to be at this for a long time. Our ability to solve problems has increased, yes, but so has the complexity of the problems we face.

    And this applies with a particular vengeance to the brain, since we have every reason to expect that, unlike the genome, the components of the brain are highly interdependent and nonlinear. Once you sequence a gene, you know what the sequence is, regardless of what’s going on elsewhere. We may well have to understand the brain fully in order to understand even a part of it.

    I said that the brain probably isn’t quite as complex and mysterious as you make it out to be. Consider that the chess algorithms that defeated grandmasters were designed to run on machines with the computational power of perhaps a gerbil.

    Again, chess is a limited problem domain. All the possible moves, and their outcomes, are precisely knowable. This does not hold true in an analogous way to many real-world problems. If you take your chess-playing computer and speed it up, then you’ve built a better chess player. I’m not convinced that this conclusion can be straightforwardly transferred to a general-purpose intelligence. How do we know that a faster intelligence wouldn’t just be overwhelmed and paralyzed by its ability to generate an enormous number of competing hypotheses more quickly than it could verify them?

  • Adrian

    bbk,

    But some software, such as databases, virtual machine servers, and various enterprise architectures, absolutely thrives on multi-core systems right out of the box. Transparently to the programmer.

    Well, yes and no. Yes, the end consumer can buy software that is multi-threaded and can scale to many cores. What you aren’t seeing is the incredible amount of work that went into building that system and keeping it synchronized. There must still be synchronization and data integrity, and so there are limits even here to how much can be parallelized. With work and luck, we may decompose further systems this way, but don’t be fooled into thinking this is straightforward, and don’t think that just because several applications have done it, all applications can do it.

    However, there is no indication that something such as a simulation of the human brain is such a problem. I would say that massive parallelism is actually a requirement when it comes to an architecture where so many inter-connecting pieces have to function side by side and the final outcome may in fact depend on the exact sequence of events taking place.

    It sounds like you’re starting from the position that we can successfully decompose any problem into many parallel components, and that we need evidence to show this isn’t the case. I think this is backwards. We don’t know nearly enough about which problems are suitable for multithreading, and the problems of synchronization are inherently difficult. Even the experts (and there are few of these) have difficulty debugging synchronization errors or designing them out smoothly in the first place.

    But I agree: my gut feeling is that neural networks and brain simulation will decompose relatively well. Our neurons fire independently and without any central synchronization mechanism, so it seems probable that simulations will be able to take advantage of this.

  • Chris

    And this applies with a particular vengeance to the brain, since we have every reason to expect that, unlike the genome, the components of the brain are highly interdependent and nonlinear. Once you sequence a gene, you know what the sequence is, regardless of what’s going on elsewhere. We may well have to understand the brain fully in order to understand even a part of it.

    I think understanding the human brain in the contexts of other brains may help there. Once we understand the brain of a rat, we will be closer to understanding the brain of a human.

    Right now we don’t even understand the brain of a tapeworm, so I’m inclined to agree with the “centuries if ever” forecasts…

    Also, there’s an implicit assumption that more brainpower will produce *different* (better) results, and not just produce the same results more quickly. Theoretically an immortal human could play perfect chess; mortal ones just don’t have *time*. There’s a point beyond which a faster chess player *isn’t* better, it’s just faster to make the exact same moves, because they are the perfect moves for that situation. (A totally impractical point with current technology, but still.)

    On the other hand, I wonder if it’s possible to design AIs without some of the systematic flaws of the human brain, like being really bad at probability, or adopting the intentional stance towards the weather and deducing thunder-gods, or judging ideas by the person propounding them (do they have respectable social status, are they from a hostile tribe, etc.) Not so much trying to improve the intelligence of humans, but trying to eliminate some of our besetting stupidities.

  • Alex Weaver

    I don’t know; I think the ability to combine and synthesize the data from experiments carried out by a large number of robots in automated labs would create the opportunity for the system to gather and process data much faster than a human. That’s dependent on us getting the artificial consciousness thing to work, though.

  • http://deconbible.blogspot.com bbk

    Again, chess is a limited problem domain. All the possible moves, and their outcomes, are precisely knowable.

    The game of chess is thought to be solvable in principle, like tic-tac-toe, and empirical evidence (i.e. white wins more often than black) hints at what the solution might look like. But the number of possible games of chess is estimated to exceed the number of particles in the universe. And these possibilities would have to be computed by the computer at each move (minus the moves that have already been made, of course). So much for it being a limited-domain problem. Needless to say, this is not how computers play chess. Computers beat humans at chess because they can search the game tree faster and deeper than humans can, not because they play it perfectly.

  • He Who Invents Himself

    bbk:

    Chess is a limited domain. Not limited by volume, but by dimensions. Like Ebonmuse said, everything is properly defined in chess, and the number of possible moves one has at any one moment is dwarfed by what our brains must face every minute. In fact, chess enthusiasts and game theorists readily remark how astounding it is that chess produces so many possible games and yet is so simple.

    Also, there’s some question as to whether Deep Blue was really playing alone. Kasparov has even made a documentary making his case for why Deep Blue’s play was extremely suspicious. Researchers in the field know that a computer and a human working together at chess can perform astronomically better than either could alone (I wonder why?), and Kasparov suggests this as a possibility. Anyway, chess programs are barely intelligent at all – they just use their computational advantage to brute-force things. Brute force cannot be applied to the real world. The real world is so much more complex than any unit studying it that the unit must use intelligence rather than raw calculation to figure it out. The chess-program analogy fails as a pointer to the singularity.

    Saying that reverse-engineering consciousness is an incredibly difficult problem is not limiting one’s imagination; it is being realistic. Exploring consciousness, or intelligence, is a different matter from sequencing a genome or computing possible moves (which are straightforward problems). Consciousness arises from specific ways of computing (at least), which we have yet to make sense of.

    If we could make an AI that was sufficiently qualitatively similar to human cognition, I actually do believe we could make better-than-human AIs just by quantitatively increasing its thinking power. But to arrive at an AI that resembles us enough for this to work would require much more knowledge of the brain and the mind.

  • lpetrich

    I wouldn’t say that it’s flattened out, just slowing down. I suspect that we’ll see developments in other directions, like lower power consumption and having more cores and cache per chip.

    Parallel computing has been done for years on supercomputers, and a simple form of it, Single Instruction Multiple Data (SIMD), is now widely used in desktop CPUs and video cards. Multicore CPU chips will make Multiple Instruction Multiple Data (MIMD) parallelism more common, with its greater flexibility and greater difficulty in programming.

    Programming for parallelism gets the most success when a problem can easily be divided into subproblems that can be worked on in isolation from each other, especially subproblems with the same operations on some data. And attempting to approximate that state of affairs with some appropriate algorithm design has long been a big challenge; some problems are easier to parallelize than others.
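    The easy case looks something like this (a minimal Python sketch of an embarrassingly parallel job, illustrative only): independent subproblems farmed out to worker processes, with no synchronization beyond collecting the results.

        from multiprocessing import Pool

        def subproblem(x):
            # stands in for any piece of work that needs nothing from its siblings
            return x * x

        if __name__ == "__main__":
            with Pool(processes=4) as pool:          # four worker processes
                results = pool.map(subproblem, range(100))
            print(sum(results))

    Problems whose steps depend on each other’s intermediate results don’t split this cleanly, which is where the hard algorithm-design work comes in.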

    But I think that the worst problem with the technological singularity is artificial intelligence.

    Despite a LOT of effort, that is far behind the more optimistic predictions of decades past — and continues to be. In fact, the poor progress of AI is one of the great disappointments of my life. :(

  • http://neuraltransmission.wordpress.com Neural Transmissions

    No matter how fast you can think, you’re still intelligent only insofar as you possess accurate information about the external world. The best learning algorithm in the multiverse can still only learn the facts it’s presented with, and as I said, that means using the same slow, painstaking scientific method that human beings use now.

    First, again, Singularitarians are aware of this, which is why most of them advocate some form of mentoring. Ben Goertzel, for example, advocates training AIs in a VR environment initially, then moving them into the real world.

    Your objections are already well addressed by the Singularitarian community, if you do the reading.

    Furthermore, enhanced intelligences may be able to design better tools and methods for data gathering, thus accelerating the process.

  • http://deconbible.blogspot.com bbk

    What you aren’t seeing is the incredible amount of work that went into building that system and keeping it synchronized.

    I see it every day, to be perfectly honest with you. When I hear someone say that synchronization is difficult on a typical Windows server, I think of a few applications I had to debug where a database was built with triggers taking the place of foreign keys. It’s the kind of stuff that makes you want to go back for an MBA instead of your CS degree. But I have to conclude that much of the difficulty is sheer idiocy and failure to read the owner’s manual. I don’t think it’s really that hard.

    It sounds like you’re starting from the position that we can successfully decompose any problem into many parallel components, and that we need evidence to show this isn’t the case.

    I’m not, and I did say that some problems cannot be reduced, although since we agree on the larger point, it doesn’t matter. Actually, the problems that cannot be reduced are what fascinate me the most. I think multi-core processors are mostly a waste of real estate. Instead of four of the same exact cores, I’d like to see four highly specialized machines designed for specific types of work. I’d also like to see the commercial emergence of things such as block-structured ISAs, which offer instruction-level parallelism, more sophisticated branch prediction, and things such as specialized helper threads built into the micro-architecture. There are actually people working on these architectures for ten years out, depending on Moore’s law to make enough transistors available on a chip to implement these techniques. These types of things will allow for huge leaps forward in compiler design and new software techniques, even for non-divisible problems (assuming that the programmers read the manual…). So I’m very hopeful that we’re not just looking at a future where no one has any better ideas than to throw more and more cores at the problem, hoping that someone else will be able to make use of them.

  • http://deconbible.blogspot.com bbk

    Yes, the end consumer can buy software that is multi-threaded and can scale to many cores.

    Yes, but lest we forget, the vast majority of end-user programming can then leverage this core software. There’s hardly a web server on the internet that doesn’t have a database engine installed on it. And the latest rage seems to be “cloud” computing, which is really nothing more than a cluster of virtual machine servers – what better way to take advantage of multiple cores than running multiple operating systems on them at the same time? I’m not sure exactly who said it, but the saying goes something like: there aren’t many computing problems that can’t be solved by adding another level of indirection.

  • Entomologista

    And here I thought the most pressing question in robotics was “Can you have sex with it?”

    Anyway, what about quantum computing? Admittedly, my knowledge of this subject is limited to crappy sci-fi novels. But it seems like that might have the potential to produce intelligence. Assuming we ever get it to work.

  • paradoctor

    All celebrations of Moore’s Law ought to take into account its nemesis, which I call Gates’s Law: that software doubles in bulk and slowness every two years. In consequence, the start-up time of computers and programs has stayed constant over many computer generations; a phenomenon which I call the Constancy of the Cyber-Siesta.

    Moore’s Law depends on miniaturization, which has limits; but Gates’s Law depends on inefficiency, which is limitless. Therefore eventually Gates’s Law will win. Instead of a Singularity, you will get an Equilibrium.

    And even assuming, optimistically, that it’s a _high_ Equilibrium, then note one more difficulty: our cyber-genius will probably get _all_ of its information from the Internet! ;)

  • paradoctor

    An addendum re the Constancy of the Cyber-Siesta.

    Show me a petaflop machine, and I will show you an operating system that needs a quadrillion floating-point operations to boot up.

  • paradoctor

    … excuse me, that should be _sixty_ quadrillion floating point operations!

  • http://www.atheistnexus.org/profile/SteveJohnson Steve

    To all who claim that chess prowess indicates AI progress: AI researchers have been working on Go-playing programs for literally decades, and none have advanced past the amateur level. Look up “computer Go” in Google to find some good links.

    Go translates more directly to real-world experience and human-like thinking. We have seen little progress, despite programs that take advantage of advanced paradigms (neural networks, parallel processing). The best Go-playing programs rely on huge libraries of stored scenarios generated by human players.

    I will not state my position on the Singularity in this post. I just wanted to get this little pseudo-correction out there.

  • Christopher

    I’m no computer scientist, but I do know my history quite well: there were nay-sayers for just about every major technological and social movement that has ever come to fruition – the conventional wisdom of past eras decried such things as the ability to control lightning, constructing heavier-than-air flying machines, connecting two seas via artificial waterways, allowing people of other “races” (a stupid concept if you ask me…) equal legal recognition with the dominant ethnic group, and keeping the institutions of church and state separate (now many like myself are just waiting for those institutions to die).

    I don’t know if the singularity will happen in the next couple of decades (although I hold out hope – as it will make the existing social order obsolete!), but I do know better than to simply say “never!” because certain ideas have hit a speed bump on their path. You just never know what’s possible until it happens…

  • bestonnet

    Remember Amara’s law:

    We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

    In the short term, it will probably be harder and take longer than expected for the technologies that transhumanism is based on to be developed and for the goals of transhumanism to be reached, but in the long term things will probably be changed a lot more than we expect (or at least that seems to be how new technologies have worked in the past).

    The big problem, though, is that we can’t actually say how long it will take us to get AI, since we haven’t developed it yet. We might continue fumbling around for the next century getting essentially nowhere, or someone might come up with a breakthrough today that solves the problem (and for all we know it could be just a matter of getting enough neurons; no attempt at AI has even come close to the complexity of a human brain).

    But anyway, http://ieet.org/index.php/IEET/more/2181/ is interesting.

    EbonMuse:

    This is exactly the point I’m making: Singularitarians believe that greater and greater intelligence can be achieved simply by improving the design of the artificial brain. I explained in my post why this is incorrect. To assume that faster processors necessarily equate to superior understanding of the external world is just belief in oracles in another form. It’s magical thinking, and it’s no less magical if we transfer the locus of competence from hardware to “learning algorithms”.

    I have heard some speculation that, with enough processing power, human-level AIs might be created accidentally (a concern, given that there’s no way of knowing what a randomly created AI would be like).

    Hardware isn’t likely to be much of an issue for AI anyway; Moore’s law will take care of that just fine if we wait long enough (multiple processors should be workable for AI – human brains are already massively parallel). Software is another matter, though, and will probably be quite hard to solve (I do think it will take more than just getting enough neurons, at least if you want a sane AI).

    Chris:

    Also, there’s an implicit assumption that more brainpower will produce *different* (better) results, and not just produce the same results more quickly. Theoretically an immortal human could play perfect chess; mortal ones just don’t have *time*. There’s a point beyond which a faster chess player *isn’t* better, it’s just faster to make the exact same moves, because they are the perfect moves for that situation. (A totally impractical point with current technology, but still.)

    Well, we are qualitatively different from less intelligent creatures, so there is precedent indicating that it may produce different results.

    Then again, would an ancestor of modern humans have predicted this world or anything like it?

  • http://deconbible.blogspot.com bbk

    Chess is a limited domain. Not limited by volume, but by dimensions. Like Ebonmuse said, everything is properly defined in chess, and the number of possible moves one has at any one moment is dwarfed by what our brains must face every minute. In fact, chess enthusiasts and game theorists readily remark how astounding it is that chess produces so many possible games and yet is so simple.

    Chess is essentially a simple tree-traversal problem that can be solved recursively. The brain, on the other hand, is a state machine. Therein lies the difference. Good human players typically think three moves ahead of their next move. Good computer players typically think many moves ahead. But we could double and redouble the entire computing power in the world many times over, and past a certain point we’d still be stuck, unable to compute a single move further ahead than before. You could, actually, just solve the game of chess once. That would be great. You could store every possible move in memory and essentially turn the problem into a state machine. But to do this, you would need to turn every photon, electron, quark, and everything else in the universe into a logic gate for your machine. And that, in a nutshell, is the problem with chess.
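    The recursive idea fits in a few lines. Here’s a minimal runnable sketch (Python; using the toy game of Nim rather than chess, since the principle is the same and chess’s details would run long):

        def legal_moves(stones):
            # in this toy game, a player may take 1, 2, or 3 stones
            return [m for m in (1, 2, 3) if m <= stones]

        def minimax(stones, maximizing):
            # Recursive game-tree traversal: whoever takes the last stone wins.
            if stones == 0:
                # the previous player took the last stone, so the side to move has lost
                return -1 if maximizing else +1
            scores = [minimax(stones - m, not maximizing) for m in legal_moves(stones)]
            return max(scores) if maximizing else min(scores)

        print(minimax(10, True))   # +1: the first player can force a win from 10 stones

    Chess programs do the same thing with a position evaluator and a depth cutoff; the explosion described above is in how fast the tree grows with each extra ply.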

    A human brain only has a limited number of nodes. All that is required is for each node to be represented, related to its neighbor nodes, and allowed to function in parallel to the other nodes. This takes vastly fewer physical resources to represent than the solution to a chess game. We know it’s possible to represent the brain as a state machine, well, because nature already did it. Now I didn’t say it was going to be easy. As Adrian and I discussed, one of the most difficult aspects of running a machine like the brain in virtual space could be keeping the whole thing synchronized, at least if it depends on being synchronous. But the point is, it’s a computer.
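    In toy form, the state-machine picture is just this (an illustrative Python sketch, not a neuron model): a few nodes, each updating in lockstep from its neighbors’ previous states.

        import random

        N = 8
        # random directed wiring: each node listens to three of the others
        wiring = {i: random.sample([j for j in range(N) if j != i], 3) for i in range(N)}
        state = [random.randint(0, 1) for _ in range(N)]

        for step in range(5):
            # synchronous update: a node "fires" iff a majority of its inputs fired last step
            state = [int(sum(state[j] for j in wiring[i]) >= 2) for i in range(N)]
            print(step, state)

    The engineering question is not whether such a machine can exist – it plainly can – but how many nodes, and how much synchronization overhead, a brain-sized version would need.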

    I’ve had this argument many times. I’ve even had this argument with neuroscience students. Usually by the time I’m done, people will have even resorted to claiming that for all we know, the brain relies on quantum entanglement, multiple dimensions, intra-neuron memory, you name it. I’m just going by Occam’s razor here. A state machine built with simple components is probably what we’re looking at.

  • http://www.bellatorus.com Petrucio

    Pointing out that the Singularity has a feel of religion to it is just a red herring; it does nothing to address the principles behind the idea.

    The idea is based on something that has been happening for 14 billion years and does not show signs of stopping, not just on wishful thinking like religion. The law of accelerating returns certainly does not apply only to Moore’s law; it’s only the latest paradigm, and I see no reason to believe that it’s the last.

    Also, transhumanism DOES have a certain degree of wishful thinking behind it, but I do not think the Singularity implies it. It’s a likely consequence, but not a certain one. The end of all humans is another possible consequence, who knows. The whole point of the Singularity is that there’s little point in trying to guess what comes after it.

    Also, knowledge of the functioning of the human brain is certainly not a prerequisite to a singularity, although it’s possibly the easiest path. But with genetic algorithms we are already making programs that solve complex problems using solutions that are way beyond the creativity of their programmers, so you have another false premise there.

    Your logic seems to me like that of the people who never thought a computer could win against a master chess player. So far, the evidence does not point to a dead end where a singularity is not possible. Whether it seems like a religion or not, I do not care. What I do care about is that there’s reason enough for it to be very plausible indeed. Even if ‘plausible’ means something like 10%, that’s amazing food for thought.

    Either way, it’s a great time to be alive. So I think.

  • bestonnet

    paradoctor:

    All celebrations of Moore’s Law ought to take into account its nemesis, which I call Gates’s Law: that software doubles in bulk and slowness every two years.

    Even Microsoft software is less bloated, relative to how much storage capacity computers have, than it used to be, despite adding a whole heap of bloat (much of it worthless).

    paradoctor:

    In consequence, the start-up time of computers and programs has stayed constant over many computer generations; a phenomenon which I call the Constancy of the Cyber-Siesta.

    Even with the bloat, though, we can do a lot more with our current computers than we could with older hardware; there’s a good chance that AI will be like that.

    paradoctor:

    Moore’s Law depends on miniaturization, which has limits; but Gates’s Law depends on inefficiency, which is limitless. Therefore eventually Gates’s Law will win. Instead of a Singularity, you will get an Equilibrium.

    The limits of Moore’s law are a long way away and it is possible to get some more efficiency out of bloated code (in fact much software bloat is a result of Moore’s law making efficiency improvements in most software a waste of time).

    We’re not sure what the laws of physics will allow us to do but there is still plenty of scope for increased intelligence.

  • bestonnet

    bbk:

    I’ve had this argument many times. I’ve even had this argument with neuroscience students. Usually by the time I’m done, people will have even resorted to claiming that for all we know, the brain relies on quantum entanglement, multiple dimensions, intra-neuron memory, you name it. I’m just going by Occam’s razor here. A state machine built with simple components is probably what we’re looking at.

    http://www.mth.kcl.ac.uk/~streater/stapp.html debunks one particular attempt.

    Even if those things were true and required for intelligence they still wouldn’t be show stoppers for AI, just things we’d have to emulate.

  • http://www.casehq.blogspot.com CASE

    This would be lovely, but the problem is less to do with computational power and more to do with computational structure. Human brains work (occasionally) because they have a hardwired structure that allows for more than just a binary response. In order for computers to have the same power, they need to stack up their transistors (i.e. four transistors may be required where only one neural cell is used). We haven’t really done anything with ternary transistors because they don’t really work with silicon chips (i.e. the PNP and NPN transistor is great for a single gateway (1|0), but how do you miniaturise a third choice?).

    Thus, we are a long way away – nevertheless, it could happen one day.

  • Valhar2000

    But I reject the notion that, as general-purpose intelligences, they will ever be able to far surpass the kind of understanding that any educated person already possesses.

    Why, Ebonmuse? What magical property do human brains have that machines will never ever be able to emulate? Did you bang your head and become a dualist lately?

  • Valhar2000

    After thinking it over, I’d like to apologize for the tone of my last comment; it was too snarky.

    Nonetheless, my point stands. This post appears to be laced with mysticism, to be honest. Perhaps I have just misunderstood it, but it seems that some of your objections have a dualist nature to them, which is something I find unacceptable, and, given your other posts on the subject, so do you, most likely.

  • http://www.daylightatheism.org Ebonmuse

    I don’t think you read this post very carefully, Valhar. I’m not denying the possibility of artificial intelligence per se. In fact I’m almost certain it will be created eventually. I’m also not denying that an AI could improve upon human cognition in some ways (although I suspect that using some cognitive shortcuts will be a necessity for any self-aware intelligence).

    What I deny is the unfathomable, godlike intelligences that are a staple of the Singularity vision, the ones that can understand and predict the world in ways that human beings will never be able to comprehend. I doubt this is possible for the same reason I doubt Prediction Machines: because improved cognitive capacity alone does not equal greater understanding of the world, unless you possess correspondingly greater information. And our ability to collect accurate information about the world is limited by the fundamentally chaotic nature of many complex real-world systems.

  • Bill Sheehan

    I, for one, welcome our new cybernetic overlords.

    Well, *somebody* had to say it… :-)

  • Joffan

    It’s interesting to observe that, from my viewpoint, both sides of the debate have a feel of religion to them, and the reason is fairly simple: because we’re talking about the future, which is (as yet) unknown.

    Singularity “believers” assert that progress in hardware, software, data organization and interfaces builds up as a synthesis to something more than a quantitative shift. Non-singularity “believers” assert that more of the same will not produce anything different.

    These are not really beliefs in the religious sense, though. I think that is a false comparison. These are opinions, on both sides of the debate, and mostly the opinion-holders are perfectly comfortable with agreeing that they are speculation.

    On the whole I’d see more problems with the non-singularity side of the argument, since the quantitative difference between our brains and (say) squirrel brains has led to a qualitative difference beyond a better ability to catalog the nut store. So although I’m not convinced that the singularity as such will be exactly as foretold (although of course there are a range of opinions on that too), I think a qualitative shift in the thinking ability of computers will come, and the future beyond that is hard indeed to predict.

  • http://www.eunomiac.com Eunomiac

    I’m not a transhumanist, though after re-reading my comment, I can’t entirely fault you if you don’t believe me! ;-) I certainly don’t think that transhumanists have met their burden of proof on the issue of the Singularity. That being said, I often find myself defending the Singularity because I think skeptics go too far in dismissing it. There’s nothing intrinsically wrong with the idea; there’s no reason it couldn’t be entirely accurate. There’s just insufficient evidence to go any further.

    The Singularity, like more than a few other transhumanist ideas, has more than a whiff of religious faith about it: the messianic and the apocalyptic, made possible by technology. History has a way of foiling our expectations. The number of people who have confidently predicted the future and have been proven completely wrong is too great to count, and so far the only consistently true prediction about the future is that it won’t be like anything that any of us have imagined.

    I grow particularly defensive when transhumanism is equated to a religion. It’s a conceptual bait-and-switch. Yes, transhumanism and religion share a metaphysical tone and a prophetic outlook. This is why the comparison is superficially appealing to a critic — it “feels right.” However, such a comparison implies a discrediting of transhumanism because of its similarities to religions, and we don’t condemn religion because of its tone or its claims about the future. Religion is bad because it is dogmatic and rigid, has a loose relationship with evidence, etc. These are characteristics that transhumanism doesn’t share. Comparing the two as you have done may pass technical muster, but the unspoken thrust of the argument is unfair.

    The largest immediate obstacle I see to Singularity scenarios is that we don’t yet understand the underlying basis of intelligence in anything close to the level of detail necessary to recreate it in silicon. Some of the more hopeful believers predict a Singularity within thirty years, but I think such forecasts are wildly over-optimistic. The brain is a vast and extremely intricate system, far more complex than anything else we have ever studied, and our understanding of how it functions is embryonic at best.

    Ray Kurzweil’s book, “The Singularity Is Near,” is a surprisingly dry read for such an interesting topic — because it’s six hundred pages of carefully presented data. Entire chapters are devoted to estimating the complexity of the brain. The important point that tends to escape such critiques is what exponential growth rates imply about error bars. You can have an error bar that stretches across four or five orders of magnitude and still come down within a reasonable time frame. And Kurzweil is careful to consider extreme estimates of brain complexity.

    While I grant you that predictions of the future have often been wrong, our track record improves markedly for predictions based on mathematically measurable growth (instead of qualitative predictions about new technologies). There really shouldn’t be all this resistance to a prediction that’s (1) based on a mathematical growth curve with plenty of data points, and (2) whose estimated end point has been multiplied a hundred-thousandfold “just to be safe.”

    Before we can reproduce consciousness, we need to reverse-engineer it, and that endeavor will dwarf any other scientific inquiry ever undertaken by humanity.

    Like the Human Genome Project, you mean? The details of the HGP are quite instructive to anyone still failing to appreciate the power of exponential advances in computation.

    So far we haven’t even grasped the full scope of the problem, much less outlined the form a solution would have to take. Depending on progress in the neurological sciences, I could see it happening in a hundred years – I doubt much before that.

    Let me offer you a sense of the error-bar stuff I mentioned above. Kurzweil estimates that functional human brain simulation will require approximately 10^16 flops, and our best supercomputers are almost there (10^15, and we’ve simulated — or are about to simulate — the whole visual cortex). But even if he’s off by five orders of magnitude, the delay is only around a dozen years — from 2013 to 2025. To give a sense of how far off such an error would have to be, it’s like mistaking a mouse brain for a human brain (a comparable difference in complexity); only ten orders of magnitude separate modeling one human brain from modeling every brain of every human being that has ever lived.
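    The arithmetic behind that kind of claim is easy to check for yourself (a quick Python sketch; the doubling rate is my assumption here, roughly one doubling of raw computation per year):

        import math

        def delay_years(orders_of_magnitude, doublings_per_year=1.0):
            # years for exponential growth to close a gap of 10**orders_of_magnitude
            return orders_of_magnitude * math.log2(10) / doublings_per_year

        print(delay_years(5))    # ~16.6 years: a hundred-thousandfold error costs under two decades
        print(delay_years(10))   # ~33.2 years: even a ten-billion-fold error costs a generation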

    Now, I am not saying that he HASN’T made such mistakes. They are, however, huge mistakes. We have no reason at all to assume these colossal errors are more likely than the Singularity itself, except for the same brand of intuitive “pfft-like-that’ll-ever-happen” cynicism we heard at the start of the Human Genome Project.

    Current supercomputers operate at about 10^15 flops. Exponential growth has us at 10^60 flops by 2100. With paradigm shifts coming down the pipes, again there is no reason to reject these figures out of hand — they’re the best estimates we have, after all. But even if we’re off by 20 orders of magnitude (!), that still puts those computers at 10^15 times the total estimated brain power of every human being that has ever lived. If my back-of-the-envelope calculations are accurate, the computers of 2100 AD would not only be capable of simulating every human thought that has ever been thought… they could do this a million times a second. And that’s if we’re off by 20 orders of magnitude: Our current estimates have those computers running 100,000,000,000,000,000,000 times faster.

    If you find all of this to be absurd, then you’ve either rejected exponential computer development out of hand, or you don’t fully understand its implications. My advice to everyone is, don’t lightly dismiss the next ninety years of explosive development. If exponential growth continues (and, unlike population, we have reasons to suspect that it will) 2100 AD will be more than simply unpredictable: it will be incomprehensible to us today.

    Forgive me if the following sounds patronizing; I only do so because I’ve “been there,” and feel that I have something to offer dyed-in-the-wool skeptics. I think the big ‘consciousness raiser’ that needs to happen is identifying your resistance to these ideas as nothing more than incredulity. There’s no rational, empirical reason to doubt these numbers any more than you’d doubt any other number.

    The largest unexamined assumption of Singularity believers is that faster hardware will necessarily lead to more intelligent machines, so that all that’s required to create a godlike intelligence is to fit more and more transistors on a chip. In response, I ask a simple question: What makes you believe the mere accumulation of processing power will produce greater understanding of the world?

    We can (or will soon be able to) see the brain in microscopic detail. A sufficiently powerful computer could simply model a whole brain, just as it appears. Such a simulation should work as long as we know how its elementary parts (synapses, neurons) behave. Then we simply reproduce a brain in simulation, make sure all of those neurons and synapses behave properly, and see what happens. In fact, failure might be more interesting than success: if we have a functionally identical human brain simulated in a computer environment, neurons firing in the same beautiful poetry we see on brain scans, and we don’t get a human personality, then what will that mean? It’s the ultimate test of dualism vs. materialism.
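
    To make “simulate the elementary parts” concrete, here is a toy sketch of a single simulated neuron: a standard leaky integrate-and-fire model, purely illustrative (the constants are generic placeholders, nowhere near biological fidelity):

        def simulate_lif(input_current, steps=1000, dt=0.1, tau=10.0,
                         v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
            """Return spike times for one leaky integrate-and-fire neuron."""
            v = v_rest
            spikes = []
            for step in range(steps):
                # Membrane potential decays toward rest, driven by input.
                dv = (-(v - v_rest) + input_current) / tau
                v += dv * dt
                if v >= v_thresh:            # threshold crossed: spike
                    spikes.append(step * dt)
                    v = v_reset              # reset after firing
            return spikes

        print(len(simulate_lif(20.0)), "spikes in 100 time units")

    A whole-brain simulation would be this idea scaled up by roughly eleven orders of magnitude, with far better unit models and a synaptic wiring diagram we do not yet have.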

    And the process of collecting data about the world cannot be accelerated to arbitrary rates.

    I disagree; there is a host of possibilities. First, one of the most promising ideas is uploading fully mature minds into a computer, simply scanning and copying neural architecture. Uploading the brains of all our greatest thinkers could allow them to network their thoughts, accelerate their analysis to computational speeds, and overcome fatigue and every other shortcoming of the human condition.

    Second, in the context of wholly artificial intelligences, we have to remember that all of our other information-based technologies will be accelerating too. Virtual simulations would be one way of vastly increasing the relative speed of collecting data — just run an AI through a forty-year “education” simulation in a few seconds of real time, and you’ve got an AI with a graduate degree.

    These aren’t predictions of mine, of course; the whole point of the Singularity is that nothing is predictable. I’m simply disputing your predictions by offering counter-examples.

    But I reject the notion that, as general-purpose intelligences, they will ever be able to far surpass the kind of understanding that any educated person already possesses.

    “I think there is a world market for maybe five computers.” — Thomas Watson (1874-1956), Chairman of IBM, 1943

    “Who the hell wants to hear actors talk?” — H. M. Warner (1881-1958), founder of Warner Brothers, in 1927

    “Heavier-than-air flying machines are impossible.” — Lord Kelvin, President, Royal Society, 1895

    “Everything that can be invented has been invented.” — Charles H. Duell, Commissioner, U.S. Office of Patents, 1899

    “Inventions reached their limit long ago, and I see no hope for further development.” — Julius Frontinus, 1st century A.D.

    “There is no reason anyone would want a computer in their home.” — Ken Olson, president, chairman and founder of Digital Equipment Corp., 1977

    “This ‘telephone’ has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us.” — Western Union internal memo, 1876.

    “The wireless music box has no imaginable commercial value. Who would pay for a message sent to nobody in particular?” — David Sarnoff’s associates in response to his urgings for investment in the radio in the 1920s.

    “Space travel is bunk.” — Sir Harold Spencer Jones, Astronomer Royal of Britain, 1957, two weeks before the launch of Sputnik

    “All attempts at artificial aviation are not only dangerous to life but doomed to failure from an engineering standpoint.” — editor of ‘The Times’ of London, 1905

    “640KB ought to be enough for anybody.” — Bill Gates (1955-), in 1981

    “Drill for oil? You mean drill into the ground to try and find oil? You’re crazy.” — Drillers whom Edwin L. Drake tried to enlist in his project to drill for oil in 1859.

    “I confess that in 1901, I said to my brother Orville that man would not fly for fifty years . . . Ever since, I have distrusted myself and avoided all predictions.” — Wilbur Wright, 1908

    “Airplanes are interesting toys but of no military value.” — Marechal Ferdinand Foch, Professor of Strategy, Ecole Superieure de Guerre

    “The abdomen, the chest, and the brain will forever be shut from the intrusion of the wise and humane surgeon.” — Sir John Eric Ericksen, British surgeon, appointed Surgeon-Extraordinary to Queen Victoria, 1873

    “You would make a ship sail against the winds and currents by lighting a bonfire under her deck…I have no time for such nonsense.” — Napoleon, commenting on Fulton’s Steamship

    “Computers in the future may weigh no more than 1.5 tons.” — Popular Mechanics, forecasting the relentless march of science, 1949

    “Man will never reach the moon regardless of all future scientific advances.” — Dr. Lee De Forest, inventor of the Audion tube and a father of radio, 25 February, 1957.

    “The aeroplane will never fly.” — Lord Haldane, Minister of War, Britain, 1907

    “I suppose we shall soon travel by air-vessels; make air instead of sea voyages; and at length find our way to the moon, in spite of the want of atmosphere.” — Lord Byron, 1822

  • http://www.eunomiac.com Eunomiac

    Just a post-script on the data collection bottleneck idea.

    Early on, data collection won’t be a problem, since the new superintelligence will have more than enough to work with in the data we have already collected. Experiments, the internet, and the media could be absorbed and integrated swiftly. I mentioned above uploading the minds of great thinkers; coherentist theories of epistemology suggest you can learn a great deal simply by fully integrating the knowledge you already have. So there should be a huge benefit to combining and categorizing such data, sifting it for contradictions, and shaking out new patterns and implications that our inferior, disconnected intelligences were unable to discern. “Learning algorithms,” “education simulators” and the like would all help in getting a superintelligence up to speed. There is no magical thinking here; everything I’ve described is consistent with reasonable predictions.

    This would take a superintelligence up to the point where it has exhausted current stores of data and must seek out more on its own. I do see your point that computer speeds do not equate to data-collection speeds; no matter how good Phoenix’s computer is, the little guy is still limited by how long it takes his arm to move the sample into the oven. However, at this level, the superintelligence can turn all that it knows to the task of improving its ability to learn more. The key at this stage is to establish feedback loops. Maybe it simulates a billion worlds to test a billion possible solutions through a billion evolutionary paths, accelerated a trillion times. Hell, its primary mode of cognition could be such evolutionary simulations, so powerful might its brainpower be.

    Once again, the caveat: I have no reason to ‘predict’ that this will happen. But I’m not the one making predictions here — I’m simply defending possibilities.

  • http://deconbible.blogspot.com bbk

    CASE, the problem you’re alluding to is actually much more fundamental than anything to do with the human brain. I have on my desk at work a copy of Hacker’s Delight by Henry S. Warren, which (off the top of my head) features a problem where the programmer is asked what kind of machine is best to build, given that (1) you have a certain-sized problem to solve and (2) you have a choice of registers that can represent 2, 3, or 4 states. It is an optimization problem solved through mathematical analysis. Choosing the best way to represent the human brain in software versus in hardware is likewise a rather simple optimization problem: multiple states for logical constructs within the brain can simply be represented by adding a level of indirection in software running on optimally performing hardware.

  • He Who Invents Himself

    bbk:

    State machines have a fixed transition function that does not change. The brain, by contrast, is always changing: its wetware rewires itself as we learn, and it can even sustain considerable damage and keep working. Our cognition develops, whereas an FSM is static. Our minds are remarkable for being creative as well as intelligent, and creativity does not sit well with FSMs. Intelligence involves figuring out answers one didn’t already have. We have the ability to create new ways of understanding as well as mentally representing the world, and I doubt an FSM can do that.

    Steve:

    Good point about the game of Go; I had forgotten about that. Go’s rules are even simpler than chess’s, but it allows an enormous number of possible moves at every turn. It has between 361! and 361^(3^361) possible games without repeating positions, which is astronomically bigger than Shannon’s number of 10^120 for chess. So programs must do something smarter than brute force, and this poses a real problem for programmers. Even in a game as simple as Go, we are finding it difficult to develop intelligence, while we ourselves can play it and develop strategies on the fly. Even amateurs can beat our best software right now.
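
    Those numbers are easy to check in a few lines of Python, using the log-gamma function so the factorial never has to be computed outright:

        import math

        # log10(361!) via lgamma: ln(361!) = lgamma(362)
        log10_games = math.lgamma(362) / math.log(10)
        print(f"361! is about 10^{log10_games:.0f}")          # ~10^768
        print(f"exceeds chess's 10^120 by ~10^{log10_games - 120:.0f}")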

    Joffan:

    A skeptic is not necessarily a believer. That differentiates singularity believers from vocal nonbelievers.

    The only relation to religion I see here is that there is a grandiose claim and many bright-eyed “followers” akin to Sci-Fi enthusiasts. That is not to say there aren’t individuals very knowledgeable on the subject with reasoned opinions.

    Lastly, I fail to see how quantitative change yields significant qualitative change. If it does, then the brain must be a very exotic piece of machinery, which brings us back to the point that we have to reverse-engineer it, or else hope some learning algorithm will learn to be as intelligent as we are.

  • http://deconbible.blogspot.com bbk

    We have the ability to create new ways of understanding as well as mentally representing the world, and I doubt an FSM can do that.

    This may be the third or fourth time I’ve brought this up in this thread, but the problem with this reasoning is a failure to see the problems state machines can solve by adding a level of indirection. The same analysis that rules out state machines for anything involving creativity or recoverability would also have to rule out multi-tasking operating systems running on single-threaded hardware, self-modifying algorithms (software), re-programmable logic gates (hardware), and other such constructs that have been achieved using nothing but fixed state machines. As it stands, fixed state machines are used to simulate everything from fuzzy logic to neural networks (see the sketch below). How is the brain so different that an additional level of indirection cannot be designed into a computer to handle its specific computational needs?
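
    Here is that indirection in miniature: the executing rule below is fixed and deterministic, yet the machine’s behaviour changes because its data changes. It is a one-neuron “network” learning logical AND; illustrative only:

        weights = [0.0, 0.0]
        bias = 0.0

        def predict(x):
            # Fixed rule; behaviour depends entirely on mutable data.
            s = weights[0] * x[0] + weights[1] * x[1] + bias
            return 1 if s > 0 else 0

        def train(samples, lr=0.1, epochs=20):
            global bias
            for _ in range(epochs):
                for x, target in samples:
                    err = target - predict(x)    # perceptron update rule
                    weights[0] += lr * err * x[0]
                    weights[1] += lr * err * x[1]
                    bias += lr * err

        # Teach the fixed machinery the AND function:
        train([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
        print([predict(x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])
        # -> [0, 0, 0, 1]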

  • http://deconbible.blogspot.com bbk

    Eunomiac… I tried to be civil, but your post was a complete smack-down, and I wish I could have put it together the way you did. I’ve read Kurzweil’s book, and in fact bought a copy for a friend with a neuroscience background after I got tired of arguing with him. As for it being a book driven by data: there’s a graph on just about every other page. While it’s not quite on the level of the theory of evolution, you couldn’t be more spot-on in pointing out that singularity theories are driven by actual analysis of available data.

  • Chase Johnson

    I’m not a transhumanist, nor a firm believer in the Singularity, but I do think they are interesting ideas, and I take issue with the dismissal of possibilities by Ebonmuse and others here. We literally have no idea where computing could go, and to argue that the flattening of Moore’s law has broad implications for the future of computing power is short-sighted.

    I am an electrical engineering student, and we learn about many designs and concepts which are, practically speaking, the best we can possibly do. Take the design of static RAM elements: there is one single design, using six circuit elements, which is the best we can create. You cannot effectively make an SRAM cell smaller or faster than that, so better SRAM depends on the slow, steady improvement of the chemistry and physics of fabricating silicon chips. It seems unlikely that the silicon process will experience any great jumps, so as an engineer, am I doomed to merely make everything a little bit better every year?

    It seems not. Just a few months ago an article was published in Nature about the invention of the memristor, a new passive circuit element. That’s a tremendous invention! The only other passive circuit elements (the resistor, capacitor, and inductor) have been around for centuries, so the arrival of a new one is amazing, and the properties of this device are amazing as well. A single memristor could emulate an SRAM cell in a tiny fraction of the space, with much better speed to boot. This thing will uproot the entire construction of computing devices.

    Such an advance was completely unpredictable, and that’s the point. If we let history be our guide, almost all advances are completely unpredictable before they happen. Now, it’s possible that this will stop, that we will reach an equilibrium, having acquired all the knowledge in the universe. But it seems unlikely, and I doubt anyone here will argue that we are approaching knowledge of everything.

    Personally, I don’t care much whether specific predictions about the existence or nonexistence of the Singularity (or the Jetsons, or Star Trek) come true. I can’t imagine what the future will be like, but it’ll be damn cool, and I intend to help build it. You (all of you) can’t imagine what the future will be like either, so stop assuming it’ll be just like now. Eunomiac has demonstrated how insightful that attitude always turns out to be, after the fact.

  • random guy

    Actually the primary concern I have with the singularity is philosophical, not technical. Singularity supporters tend to assume that these god-like machines will create new technology for OUR benefit, resulting in some Eden-esque paradise where technology, to paraphrase Arthur C. Clarke, is so advanced it is indistinguishable from magic.

    I think a more accurate picture is that these beings would care little for the needs of mankind, our methods of communication and capacity for understanding being so limited in comparison to their own. I think they would view us as little more than evolutionary predecessors: a necessary but ultimately useless intermediate step between organic life and themselves. All humans are descendants of early mammals about the size and shape of mice. For the most part we leave mice alone so long as they stay in their natural habitat. Occasionally we experiment on them for our benefit. If they invade our homes, they are treated as pests and eliminated. We keep a few as pets. But for the most part the well-being of mice does not concern man, so why should man think that a sufficiently superior intelligence would care for our well-being?

    In the days before the singularity I could see great growth for humankind, as the machines would still depend on our power supplies and on our economy for extracting, refining, and processing raw materials. But as intelligent machines began to replace human-controlled sectors, there would be fewer and fewer reasons for them to do as we tell them. As Agent Smith said in The Matrix, “I say it was the end of your world, because once we began running things, it really became our world.” I’m not saying these machines would necessarily seek our destruction, just that once they developed self-sufficiency we would no longer be a concern to them. We would be downgraded to the status of field mice, unable to comprehend or keep up with their technological advances.

    All of this of course assumes that such a thing can actually happen, but I’m with Ebon in saying that there is no reason to assume a more intelligent machine would be that much better at developing technology than humans. Math equations, sure; computer programs, sure; generating hypotheses, sure; but anything that requires an actual investigation of the real world will necessarily be limited by how fast one can interact with it. On that level super-intelligent machines would have little advantage, if any, over human investigators.

  • http://deconbible.blogspot.com bbk

    Many of you may be interested in the article that appeared today on MIT’s Technology Review website.
    http://www.technologyreview.com/Biotech/21042/?a=f

    This just goes to show how improvements in scientific tools are shedding light on problems previously thought unknowable. In the past, humans did not understand anything about the world, so they invented God to explain it. Today, naysayers still want to consign anything we don’t yet understand to a mysterious category of unfathomable things.

    quote:

    The first high-resolution map of the human cortical network reveals that the brain has its own version of Grand Central Station, a central hub that is structurally connected to many other parts of the brain. Scientists generated the map using a new type of brain imaging known as diffusion imaging.

    “The fact that such a core exists gives rise to many questions we can now ask about it,” says Olaf Sporns, a neuroscientist at Indiana University, in Bloomington, and senior author of the study, published this week in PLoS Biology. “What goes on there? And how is it involved in passing messages between different parts of the brain?”

  • bestonnet

    CASE:

    This would be lovely; the problem is less to do with computational power and more to do with computational structure. Human brains work (occasionally) because they have a hardwired structure that allows for more than just a binary response. In order for computers to have the same power, they need to stack up their transistors (i.e., 4 transistors may be required where only 1 neural cell is used). We haven’t really done anything with ternary transistors because they don’t really work with silicon chips (the PNP and NPN transistor is great for a single gateway (1|0), but how do you miniaturise a third choice?).

    Thus, we are a long way away – nevertheless, it could happen one day.

    Ternary has theoretical advantages over binary (indeed, over every base except base e, the theoretical optimum), but in actual practice it is easier to build binary computers, so that’s what we use.

    If it turned out that ternary or something else were a better way of running an AI, then we could easily just emulate a ternary computer to run it on (and we’ll eventually have cycles to spare).
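
    A minimal sketch of how cheaply binary can carry ternary: balanced-ternary digits (-1, 0, +1) stored and manipulated as ordinary binary integers, one level of indirection down (illustrative Python):

        def to_balanced_ternary(n):
            """Encode an integer as trits (-1/0/+1), least significant first."""
            trits = []
            while n != 0:
                r = n % 3
                if r == 2:       # digit 2 becomes -1 plus a carry
                    r = -1
                    n += 3
                trits.append(r)
                n //= 3
            return trits or [0]

        def from_balanced_ternary(trits):
            return sum(t * 3**i for i, t in enumerate(trits))

        print(to_balanced_ternary(42))                   # [0, -1, -1, -1, 1]
        assert from_balanced_ternary(to_balanced_ternary(42)) == 42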

    Valhar2000:

    Why, Ebonmuse? What magical property do human brains have that machines will never ever be able to emulate? Did you bang your head and become a dualist lately?

    After thinking it over, I’d like to apologize for the tone of my last comment; it was too snarky.

    Nonetheless, my point stands. This post appears to be laced with mysticism, to be honest. Perhaps I have just misunderstood it, but some of your objections seem to have a dualist flavor to them, which is something I find unacceptable, and, given your other posts on the subject, most likely so do you.

    Pretty much the only way that AI couldn’t happen would be if dualism were correct, which means that objections to the singularity have to be based on AI not being so smart.

    EbonMuse:

    What I deny is the unfathomable, godlike intelligences that are a staple of the Singularity vision, the ones that can understand and predict the world in ways that human beings will never be able to comprehend.

    We can predict the world in ways that a mouse will never be able to comprehend.

    EbonMuse:

    I doubt this is possible for the same reason I doubt Prediction Machines: because improved cognitive capacity alone does not equal greater understanding of the world, unless you possess correspondingly greater information.

    An AI with a thousand times human-level intelligence is likely to have a lot more information than we do, as well as much better ability to correlate that information and come up with new ideas from it.

    EbonMuse:

    And our ability to collect accurate information about the world is limited by the fundamentally chaotic nature of many complex real-world systems.

    One of the reasons that we have trouble predicting complex systems is that we aren’t smart enough to handle all the complexity.

    Eunomiac:

    I certainly don’t think that transhumanists have met their burden of proof on the issue of the Singularity.

    It’ll take the singularity actually happening before the burden of proof is met.

    random guy:

    Actually the primary concern I have with the singularity is philosophical, not technical. Singularity supporters tend to assume that these god-like machines will create new technology for OUR benefit, resulting in some Eden-esque paradise where technology, to paraphrase Arthur C. Clarke, is so advanced it is indistinguishable from magic.

    It might be a good idea to start uplifting a few species to human-level intelligence if we get the time; then, hopefully, we can share a culture of helping others improve with our new overlords.

    random guy:

    All of this of course assumes that such a thing can actually happen, but I’m with Ebon in saying that there is no reason to assume a more intelligent machine would be that much better at developing technology than humans. Math equations, sure; computer programs, sure; generating hypotheses, sure; but anything that requires an actual investigation of the real world will necessarily be limited by how fast one can interact with it. On that level super-intelligent machines would have little advantage, if any, over human investigators.

    Actually, they are likely to have a very big advantage over us in interacting with the world, since we have to use our senses, which have a relatively low bit rate.

    Augmentation of humans could help that along, but if you take augmentation far enough you end up with a hybrid between human and AI that might even be mostly AI.

  • random guy

    bestonnet-

    It doesn’t matter how fast their senses take in data. They would still have to operate some form of the scientific method to counteract the assumptions made by their sensory systems.

    For instance, from our position the sun appears to revolve around our planet; that is the most intuitive conclusion. Unfortunately, our senses make assumptions about position and relative movement that lead us to believe this falsehood. Every system for gaining data is subject to the inaccuracies of the equipment gathering that data. This will be no different for AI.

    AI could possess a great many more senses than a human: they could see into the infrared and ultraviolet, and mass spectrometers and electron microscopes could be built right into them, so their senses could far exceed our own. But really this gives them no advantage, because WE can use those exact same devices; the data just has to be “translated” into a form that we can visualize, observe, or conceptualize. All of these perceptions must be run through a methodology for error-checking. The scientific method, which is required to gain an impartial and unbiased perception of reality, cannot be sped up exponentially in the way that processing power may be.

    To touch on what others have said about powerful computers running simulations of reality: that doesn’t really help our understanding of the world. “Garbage in, garbage out”: if the machine is fed bad assumptions about natural law, then its resulting predictions are useless. If AI is going to push the boundary of human knowledge, it will have to interact with the real world.

  • bestonnet

    randomguy:

    It doesn’t matter how fast their senses take in data. They would still have to operate some form of the scientific method to counteract the assumptions made by their sensory systems.

    Which would involve thinking faster, which is considerably more likely to speed up than the process of gathering the data.

    randomguy:

    AI could possess a great many more senses than a human: they could see into the infrared and ultraviolet, and mass spectrometers and electron microscopes could be built right into them, so their senses could far exceed our own. But really this gives them no advantage, because WE can use those exact same devices; the data just has to be “translated” into a form that we can visualize, observe, or conceptualize.

    Except that they could figure out what they are looking at a lot quicker than we would be able to.

    Besides, it’s not as if a human could look at them all at once, which an AI could be quite capable of doing (augmented humans might be able to, but augmented humans are a stage between human and AI).

    randomguy:

    To touch on what others have said about powerful computers running simulations of reality: that doesn’t really help our understanding of the world.

    So then you believe that computer models are useless?

    Whilst computer models are something humans can create, an AI would probably be able to do a better job of programming them.

    randomguy:

    “Garbage in, garbage out”: if the machine is fed bad assumptions about natural law, then its resulting predictions are useless. If AI is going to push the boundary of human knowledge, it will have to interact with the real world.

    Is it just me, or is it only those who think AIs won’t be a lot smarter than we are who keep suggesting that AIs will never interact with the world?

  • Mikeyj

    Once we have a working model of the brain, there’s no good reason to suspect that we could not design an AI that would be qualitatively better at thinking than we are.
    It is pure arrogance to assume that the human mind is the greatest possible thinking tool.

  • http://www.daylightatheism.org Ebonmuse

    I grow particularly defensive when transhumanism is equated to a religion. It’s a conceptual bait-and-switch. Yes, transhumanism and religion share a metaphysical tone and a prophetic outlook. This is why the comparison is superficially appealing to a critic — it “feels right.”

    I think the comparison is apt in another way. Let’s put it this way: most of the prominent transhumanists are interested in this topic for more than purely academic reasons. Clearly, they’re anticipating the Singularity for their own sake, because they’re desirous of technological immortality – and surprise, surprise, the people who most want to see it also just happen to have data convincingly demonstrating that it will happen in their lifetimes! What are the odds?

    If the Singularity were presented as one conceivable possibility for the way the future will go, that would be fine. There’s no harm in speculating about that. But my skeptical hackles rise whenever anyone speaks as if they know for certain that it’s going to happen, especially when that eventuality so clearly lines up with their own desires.

    If you find all of this to be absurd, then you’ve either rejected exponential computer development out of hand, or you don’t fully understand its implications.

    Eunomiac, you’ve completely failed to address the argument put forward in this post. I don’t know whether Kurzweil’s numbers are accurate, but it doesn’t matter. Both you and he have committed the same error I’ve been trying to refute: simply assuming that faster calculation equals greater intelligence, overlooking the necessity of a concomitant increase in the ability to gather data. Unless you’re postulating an equal acceleration in our ability to research and experiment – and no one has yet explained how that could occur – then it doesn’t matter how many petaflops you stack up, because all you’ll be buying yourself is the ability to generate incorrect hypotheses more quickly. Any useful, functional intelligence has to base its decisions on facts about the world, and no matter how fast you can think, you can’t simply deduce those facts ex nihilo. Sure, you could program an AI to more quickly absorb facts that have already been discovered; you can’t program it to make new discoveries more quickly than was possible in the past.

    “I think there is a world market for maybe five computers.” — Thomas Watson (1874-1956), Chairman of IBM, 1943

    That’s cute, but it doesn’t prove anything. I can do it too; watch:

    In 1922 Thomas Edison predicted that movies would replace textbooks. In 1945 one forecaster imagined radios as common as blackboards in classrooms. In the 1960s, B.F. Skinner predicted that teaching machines and programmed instruction would double the amount of information students could learn in a given time. (source)

    By the year 2000 all food will be completely synthetic. Agriculture and fisheries will have become superfluous… Disease, as well as famine, will have been eliminated; and universal hygienic inspection and control will have been introduced. The problems of energy production will by then be completely resolved. (source)

    Third dimensional color television will be so commonplace and so simplified at the dawn of the 21st century that a small device will project pictures on the living room wall so realistic they will seem to be alive. The room will automatically be filled with the aroma of the flower garden being shown on the screen.

    Wireless transmission of electric power, long a dream of the engineer, will have come into being. There will be no more power lines to break in storms. A simple small antenna on the roof will pick up the current for lighting a house.

    Public health will improve, especially the knowledge of how air carries infections, like the common cold, from person to person. Before 2000, the air probably will be made as safe from disease-spreading as water and food were during the first half of this century. (source)

    The September 17, 1939 Montana Standard (Butte, MT) ran an article titled, “Foolproof Weatherman of 1989.” (source)

    …the image above… was printed in Collier’s Weekly on January 12, 1901… In the upper left corner we can see a sign in the Collier’s illustration reading, “Youth Restored by Electricity While You Wait.” (source)

    If the population of the United States continues to increase at the rate that has prevailed during the last twenty years, by the year 2000 it will reach so great a density that there will be room for an average of only one person to an acre in its vast area. (source; for reference, the U.S. has a land area of about 2 billion acres)

    I could list a lot of quotes like this, but I won’t belabor the point. Some predictions about the future, both optimistic and pessimistic, have come true; the vast majority, both optimistic and pessimistic, have failed. We’re not entitled to simply assume that one particular desired outcome falls on either side of that divide.

  • lpetrich

    Ebonmuse, you are right about that. Consider an important ingredient of the Technological Singularity: artificial intelligence. It’s WAY behind the more optimistic expectations of the 1950s to 1970s, and this is despite the enormous hardware and software advances since then.

    Consider Alan Turing’s classic paper Computing Machinery and Intelligence, published in 1950. He proposed the Turing Test, which is to test whether some chatbot program is indistinguishable from some human interlocutor.

    I believe that in about fifty years time it will be possible to programme computers with a storage capacity of about 10^9 to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

    His prediction has more than come true on the hardware side, but has failed miserably on AI. Every chatbot I’ve ever tried has flunked the Turing Test.
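
    The trick behind most of them fits in a dozen lines: shallow ELIZA-style pattern matching with no model of meaning anywhere. This is a toy of my own, not any particular program, but five minutes with it shows why they flunk:

        import re

        RULES = [
            (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
            (re.compile(r"\bI think (.+)", re.I), "What makes you think {0}?"),
            (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
        ]

        def respond(line):
            for pattern, template in RULES:
                match = pattern.search(line)
                if match:
                    # Echo the captured text back; no understanding involved.
                    groups = [g.rstrip(".!?") for g in match.groups()]
                    return template.format(*groups)
            return "Tell me more."        # the universal dodge

        print(respond("I am sure machines will think someday."))
        # -> Why do you say you are sure machines will think someday?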

  • http://deconbible.blogspot.com bbk

    lpetrich, I hope you haven’t given any of them your credit card number. They won’t really fly out to hook up with you ;)

  • bestonnet

    Mikeyj:

    It is pure arrogance to assume that the human mind is the greatest possible thinking tool.

    Completely true; it’s also arrogant to assume that we are collecting data in the best way possible.

    EbonMuse:

    I think the comparison is apt in another way. Let’s put it this way: most of the prominent transhumanists are interested in this topic for more than purely academic reasons. Clearly, they’re anticipating the Singularity for their own sake, because they’re desirous of technological immortality – and surprise, surprise, the people who most want to see it also just happen to have data convincingly demonstrating that it will happen in their lifetimes! What are the odds?

    You could say the same thing about those who predict a cure for cancer within their lifetime; it doesn’t make any difference to whether or not it’ll happen, though.

    EbonMuse:

    Eunomiac, you’ve completely failed to address the argument put forward in this post. I don’t know whether Kurzweil’s numbers are accurate, but it doesn’t matter. Both you and he have committed the same error I’ve been trying to refute: simply assuming that faster calculation equals greater intelligence, overlooking the necessity of a concomitant increase in the ability to gather data.

    You haven’t been doing very well at refuting it unless you’ve also got an argument for why computer modelling is a waste of time.

    Unless you’re postulating an equal acceleration in our ability to research and experiment – and no one has yet explained how that could occur – then it doesn’t matter how many petaflops you stack up, because all you’ll be buying yourself is the ability to generate incorrect hypotheses more quickly.

    The ability to reject incorrect hypotheses more quickly would still be useful.

    Although even then, more intelligence does mean more ability to speed up data gathering. And in cases where the laws of physics are already known well enough, a sufficiently detailed computer model will work just fine to test out new inventions (and will likely be quite a bit faster than real life).

    Any useful, functional intelligence has to base its decisions on facts about the world, and no matter how fast you can think, you can’t simply deduce those facts ex nihilo.

    Actually using those facts is much more complicated and is where the big improvements are expected to come from.

    Sure, you could program an AI to more quickly absorb facts that have already been discovered; you can’t program it to make new discoveries more quickly than was possible in the past.

    An AI probably would make new discoveries about the world a bit quicker, while discoveries concerning relationships between facts we already know should be sped up massively.

    EbonMuse:

    I could list a lot of quotes like this, but I won’t belabor the point. Some predictions about the future, both optimistic and pessimistic, have come true; the vast majority, both optimistic and pessimistic, have failed. We’re not entitled to simply assume that one particular desired outcome falls on either side of that divide.

    lpetrich:

    Consider an important ingredient of the Technological Singularity: artificial intelligence. It’s WAY behind the more optimistic expectations of the 1950s to 1970s, and this is despite the enormous hardware and software advances since then.

    Which is actually what we should have expected, since fundamental technological advances tend to run into unexpected issues that delay them past the initial guesses as to when they’ll enter use and how much effect they’ll have at first.

    In the long term, though, the effects will probably be much bigger than we expect; in many ways the Singularity could just be an acknowledgement of Amara’s law (we overestimate a technology’s effect in the short run and underestimate it in the long run).

    Oh, and if you ever happen to need more quotes of bad technological predictions:
    http://www.lhup.edu/~DSIMANEK/neverwrk.htm
    http://www.retrofuture.com/

  • random guy

    bestonnet-

    Which would involve thinking faster, which is considerably more likely to speed up than the process of gathering the data.

    The scientific method does not involve “thinking faster”; it is necessarily limited by the rate at which you can collect data from the real world. Machines may become faster than humans at forming hypotheses and even drawing conclusions, but they will still face the physical problems of data gathering; they can’t just create evidence for their ideas ex nihilo. How exactly does being able to calculate faster translate into collecting data (which requires real-world experimentation) faster?

    So then you believe that computer models are useless?

    No, but a computer model based on Kepler’s laws would be inferior to one based on Newton’s laws of motion, which would be inferior still to one based on the theory of relativity. A computer that generates theories and models inside a bubble is of use to no one. Any improved simulation of reality will require further input from reality itself.

    Is it just me or are only those who think AIs won’t be a lot smarter than we are suggesting that AIs will never interact with the world?

    It’s just that you’re assuming that AIs will interact with the world in a way so completely different from our own that they will be error-free. As I stated in my previous post, any sensory system has limits and built-in assumptions. Therefore all observations must be balanced by experimentation and controls. That is the only way to weed out bad ideas; no matter how smart a computer gets, it will never have a magic circuit that intuitively distinguishes between truth and falsehood.

    If it takes a year to perform an experiment, it ceases to matter whether a machine can process the data in a second or a thousandth of a second; it still has to wait a year for the data to be generated. Even experiments that can be sped up through mechanization will still be bound by the physical speeds of machining and processing. In essence, advanced AI would be great for writing new programs, but would have little advantage in conducting a multi-year study of ecosystems, or testing new materials in different environments: things that can’t be simulated, because simulation only teaches us how the simulation behaves. Machines may be able to design better machines, but they would still be limited by how fast they could build them.

    None of this is a problem if you expect a steady, albeit punctuated, increase in technology and science. But that’s not what the Singularity proponents are advocating. They are assuming some kind of mystical, exponential increase in technology, which I just consider delusional.

  • Christopher

    Random Guy,

    “…no matter how smart a computer gets, it will never have a magic circuit that intuitively distinguishes between truth and falsehood.”

    Even human beings don’t know what “truth” is – for all we know, our entire reality is merely a product of our own imaginations. I see no reason why a computer would need such a mechanism (seeing as humans don’t have it either) in order to become self-aware.

  • bestonnet

    random guy:

    The scientific method does not involve “thinking faster”; it is necessarily limited by the rate at which you can collect data from the real world. Machines may become faster than humans at forming hypotheses and even drawing conclusions, but they will still face the physical problems of data gathering; they can’t just create evidence for their ideas ex nihilo. How exactly does being able to calculate faster translate into collecting data (which requires real-world experimentation) faster?

    Analysing data often takes longer than actually gathering it, as does explaining novel experimental results.

    The physical problems of data gathering also involve building the equipment, which often has to be designed first (something an AI would be a lot faster at), along with testing and refining the experimental equipment (which an AI should also be able to do a bit faster).

    random guy:

    No, but a computer model based on Kepler’s laws would be inferior to one based on Newton’s laws of motion, which would be inferior still to one based on the theory of relativity. A computer that generates theories and models inside a bubble is of use to no one. Any improved simulation of reality will require further input from reality itself.

    “Better is the enemy of good enough” is a common slogan and in this case quite true.

    Reality does not need to be perfectly simulated for a computer model to be useful in predicting how an invention will behave; you would simulate only the approximate laws of physics that apply to that object, not everything (i.e., an AI might simulate most of the things it comes up with in a Newtonian world, without taking any account of quantum mechanics or relativity, just to speed things up).

    Computer models are even useful for discovering how nature works, since they give you a much more controlled view of already-known physics (and can therefore reveal something about existing theories that wasn’t known before).
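
    As a purely illustrative example of that kind of model: a few lines of crude Newtonian simulation are enough to answer a simple design question (here, the best launch angle for a drag-slowed projectile) without a single real-world trial. All the numbers are made up:

        import math

        def best_angle(speed=50.0, drag=0.1, dt=0.001):
            """Crude Euler integration: the launch angle maximising range
            for a projectile with simple linear drag. Newtonian physics only."""
            best_range, best_deg = 0.0, 0
            for deg in range(1, 90):
                x, y = 0.0, 0.0
                vx = speed * math.cos(math.radians(deg))
                vy = speed * math.sin(math.radians(deg))
                while y >= 0.0:
                    vx -= drag * vx * dt             # drag slows horizontal motion
                    vy -= (9.81 + drag * vy) * dt    # gravity plus drag
                    x += vx * dt
                    y += vy * dt
                if x > best_range:
                    best_range, best_deg = x, deg
            return best_deg, best_range

        print(best_angle())   # with drag, the optimum falls below 45 degrees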

    random guy:

    It’s just that you’re assuming that AIs will interact with the world in a way so completely different from our own that they will be error-free.

    That is called a straw-man argument.

    random guy:

    If it takes a year to perform an experiment, it ceases to matter whether a machine can process the data in a second or a thousandth of a second; it still has to wait a year for the data to be generated.

    If it would take a team of humans a decade to process the data, it damn well would matter.

    Besides, for generating inventions from the known laws of physics you do not need to gather any new data; you just come up with the idea and figure out how to make it work. That’s where I’m expecting the big gains to come from. Fundamental science will speed up too, but it might still be slow enough for an unaugmented human to follow.

    In essence, advanced AI would be great for writing new programs, but would have little advantage in conducting a multi-year study of ecosystems, or testing new materials in different environments.

    Such as, say, a program to simulate the behaviour of materials in different environments, to reduce the amount of real-world testing needed?

    New programs telling nanobots how to do new things would, to us, seem indistinguishable from new technology.

    random guy:

    Machines may be able to design better machines, but they would still be limited by how fast they could build them.

    There is a lot of room for improvement there (and once software is designed, there is not much limit on how quickly it can be deployed; a lot of the inventions will probably be software running on nanomachines).

    random guy:

    None of this is a problem if you expect a steady, albeit punctuated, increase in technology and science. But that’s not what the Singularity proponents are advocating.

    “Steady, albeit punctuated”? That doesn’t make sense.

    BTW: How do you know that’s not what the singularity proponents are advocating?

    random guy:

    They are assuming some kind of mystical, exponential increase in technology, which I just consider delusional.

    It is you who is assuming a mystical increase in technology, not them.

    The exponential increase in technology rests on quite reasonable grounds as to what would happen once an AI that can improve itself appears. No one has come up with an argument against it that actually holds up to scrutiny.

  • Malenfant

    People much smarter than me, like Sir Roger Penrose, have stated that our current digital computers probably won’t ever develop AI, because they are deterministic machines while the thought process may be based on quantum functions.
    Also see John Searle’s example of the Chinese Room: computers manipulate symbols according to given rules, and the symbols carry no semantics for them.

  • bestonnet

    Malenfant:

    People much smarter than me, like Sir Roger Penrose, have stated that our current digital computers probably won’t ever develop AI, because they are deterministic machines while the thought process may be based on quantum functions.

    Sir Roger Penrose is very likely wrong.

    http://www.mth.kcl.ac.uk/~streater/stapp.html gives some good reasons to suspect that our thought processes are not based on quantum processes (and even if they were, it still wouldn’t make AI impossible; it would merely change what we’d have to do to get it).

    For Strong AI to be impossible, dualism would have to be true, do you really want to go there?

  • Malenfant

    bestonnet :
    People like Minsky, Moravec, and others have promised strong AI for decades now, yet they have delivered next to nothing, while over the same period CPU power has risen exponentially.
    Even insects are smarter than computers; pattern recognition in particular is where computers are really lacking. They still maintain that AI, or intelligence per se, may be an “emergent phenomenon” that will arise by itself from sufficient complexity (of neurons or circuits), but they have yet to prove that.
    Also, see my last argument; if you don’t know it, look here: http://en.wikipedia.org/wiki/Chinese_Room

  • http://deconbible.blogspot.com bbk

    Malenfant, the Chinese Room argument is a fallacy that has been debunked. And bestonnet had to post his link twice, which shows just how much pseudoscience has been thrown into the quantum-entanglement and faster-than-light theories of mind.

    Now, the quantum-brain-matter ideas are fairly extreme and uncommon hypotheses. But *many* people in both academia and the media still go around touting the Chinese Room thought experiment as something valid, when it is absolutely not.

    Here’s a link to one page that goes over how the Chinese Room thought experiment has been debunked: Searle. I have seen the Chinese Room described as a fallacy both on the web and in books; it was debunked a long time ago. It’s unfortunate that even people who know how to debunk it, like the author of that site, have a hard time finding others who understand why it is wrong.

  • Malenfant

    I would like to answer, but for some reason my posts don’t get through, or get censored or lost.
    The Chinese Room example, however, has nothing to do with quantum or light-speed theories. See Wikipedia for the Chinese Room argument. I would like to know who has “debunked” it; it merely says the following about symbol-manipulating machines.
    From Wikipedia:
    (A1) “Programs are formal (syntactic).”
    A program uses syntax to manipulate symbols and pays no attention to the semantics of the symbols. It knows where to put the symbols and how to move them around, but it doesn’t know what they stand for or what they mean. For the program, the symbols are just physical objects like any others.
    (A2) “Minds have mental contents (semantics).”
    Unlike the symbols used by a program, our thoughts have meaning: they represent things and we know what it is they represent.
    (A3) “Syntax by itself is neither constitutive of nor sufficient for semantics.”
    This is what the Chinese room argument is intended to prove: the Chinese room has syntax (because there is a man in there moving symbols around). The Chinese room has no semantics (because, according to Searle, there is no one or nothing in the room that understands what the symbols mean). Therefore, having syntax is not enough to generate semantics.
    Searle posits that these lead directly to this conclusion:

    (C1) Programs are neither constitutive of nor sufficient for minds.
    This should follow without controversy from the first three: Programs don’t have semantics. Programs have only syntax, and syntax is insufficient for semantics. Every mind has semantics. Therefore programs are not minds.

  • http://deconbible.blogspot.com bbk

    Fortunately, typing “Chinese Room Fallacy” into google brings many pages of websites that debunk it.

  • Malenfant

    However, I have now found a counter-argument that may have some validity (I won’t post the link or I’ll get censored again; search for “Why the Chinese Room Doesn’t Work”). To make its case it goes all the way down to the atomic or even subatomic level, so the conclusion may be that there won’t be AI without the invention of quantum computers or bio-computers.
    Not to mention that AI alone is not sufficient for the occurrence of a ‘Singularity’ (which was the actual theme of this thread).

  • http://deconbible.blogspot.com bbk

    Malenfant, you don’t seem to know what you’re talking about. The fallacy of the Chinese Room argument is that it claims a system cannot have understanding unless some part of that system also has the understanding. Do you still not see why this is a fallacy? It’s mind-numbingly simple.

    The fallacy is so obvious that it makes the whole thought experiment look silly once you apply the same standard to your own brain. Since your brain is made of components, you cannot understand Chinese unless one part of your brain also understands Chinese. And that part cannot understand Chinese unless some underlying cellular structure also understands Chinese. And the cellular structure cannot understand Chinese unless an underlying molecule understands Chinese. And that molecule can’t understand Chinese unless some atom understands Chinese.

    According to the Chinese Room thought experiment, humans shouldn’t be capable of intelligence, either!

  • bestonnet

    Malenfant:

    The Chinese Room example, however, has nothing to do with quantum or light-speed theories.

    True, but that doesn’t make it any less a fallacy.

    Malenfant:

    Unlike the symbols used by a program, our thoughts have meaning: they represent things and we know what it is they represent.

    Why do our thoughts have meaning?

    Malenfant:

    People like Minsky, Moravec, and others have promised strong AI for decades now, yet they have delivered next to nothing, while over the same period CPU power has risen exponentially.

    Could it be that we still don’t have enough CPU power to do it?

    Neural networks have made quite a bit of progress.

    Besides, as I have said repeatedly, we should expect new technologies to take longer than predicted to come about so this is really not an argument against anything.

    Malenfant:

    To make its case it goes all the way down to the atomic or even subatomic level, so the conclusion may be that there won’t be AI without the invention of quantum computers or bio-computers.

    Quantum computers and bio-computers have already been invented and built, so if that were what was required to build an AI, we’d still be able to do it; it would just delay things a bit.

    Malenfant:

    Not to mention that AI alone is not sufficient for the occurrence of a ‘Singularity’ (which was the actual theme of this thread).

    AI that can improve itself would be sufficient, though (unless the human mind is as good as it gets, which is pretty damn unlikely).

    bbk:

    According to the Chinese Room thought experiment, humans shouldn’t be capable of intelligence, either!

    If dualism were correct then the Chinese Room thought experiment would make sense.

    Ultimately all arguments that strong AI is impossible turn out to assume dualism when you look at them closely enough.

  • http://deconbible.blogspot.com bbk

    If dualism were correct then the Chinese Room thought experiment would make sense.

    I beg to differ. If dualism were correct and our power to think came from some metaphysical property, the Chinese Room thought experiment would still beg the question, still contain an ad hominem attack against AI proponents, and still be a bait-and-switch tactic. Just saying…

  • bestonnet

    Perhaps I should have said that the only way the Chinese Room thought experiment could possibly make sense would be if dualism were correct.

  • http://deconbible.blogspot.com bbk

    I do take it in that order of precedence. Of all the fallacies in the Chinese Room, the necessity of dualism is the biggest one, especially because, as part of his Chinese Room argument, Searle himself said that the human brain is a purely biological machine (no dualism).

    Searle also claimed that he was challenging the Turing machine with his thought experiment. Yet what he had set up was not even a fully functional Turing machine: you cannot re-program it; it’s hardly even a calculator. That’s a very audacious thing to do when you’re trying to disprove the possibility of AI!

  • lpetrich

    It seems to me that Searle’s Chinese Room argument rests on the fallacy of composition: the assumption that composite entities must have the same properties as their component entities. That is true of some properties and not of others, so one has to look at the specific entities and properties in order to come to any conclusion.

  • bestonnet

    It does appear that way.

  • Gralgrathor

    On the other hand, human society has been through several minor singularities already. Who could’ve predicted how the advent of motorized personal transport would change our way of life? And electronic communications? Advanced electronic computation and data processing? Television? Cell phones?

    We live in a chaotic world, and it is very nearly impossible to see the full ramifications of any particular development in the long term.

    On the other hand, we do witness an exponential curve. We do witness change after change following one another with ever-increasing rapidity: teenage slang changing even as it is being reported and, more strikingly, the previous generation very nearly able to comprehend it because of new media. The new media themselves: who’s got a Bluetooth headset? Who synchronizes documents when passing a convenient hotspot? Who carries around their own personal area network (PAN) of headsets, phones, notebooks and PDAs? Environmental interconnectivity and intelligence are increasing just as fast as the speed of single-type processors or the number of switches per nanometer. Is it then really so hard to imagine a time when that environment is so complex, so fast-changing, that it is beyond the ability of an unaugmented human mind to comprehend?

    We’re not expecting gods to step out of our networks and part the seas, you know?

  • http://www.daylightatheism.org Ebonmuse

    I wrote about John Searle’s Chinese Room analogy here.

  • http://deconbible.blogspot.com bbk

    Ebonmuse, I believe your post on the subject pointed out a simple fact: Searle’s machine was not even a Turing machine, but just a stateless lookup table. Presumably Searle could have opened the door to his Chinese Room, taken out the human, and let people walk inside and look up words in the book for themselves. By golly, he’s invented a phone book. I do consider this a bait-and-switch tactic.

    lpetrich pointed out the name of the fundamental fallacy, the fallacy of composition (thanks, lpetrich). Searle’s proponents, like your previous commenter, like to call this objection “the Systems Reply,” as if naming it somehow avoided its fundamental thrust: that the Chinese Room is a fallacy. There’s no point in going further and discussing the consequences of the thought experiment when it is based on a fallacy, but the other thread has perfect examples of your readers doing just that.

    Yet they go on, using a faulty machine (not a Turing machine) and fallacious logic to argue that the real difference is that Searle’s system is purely a symbol-manipulation mechanism that gives us no insight into human understanding. Well, “duh”: if it’s not even remotely capable of doing what a computer does, then you can make whatever point you want about it. But it has nothing to say about computers.

    To extend your argument, Ebon, a real Turing machine could easily have contextual understanding if we simply gave it all the pieces that Searle’s proponents claim are missing. We give it a dictionary, a thesaurus, and a picture dictionary. And we give it the ability to learn new words by allowing users to slide a picture, a word, a definition, or a list of synonyms under the door. A true Turing machine not only remembers its previous state (context), but can learn by being re-programmable – the sketch below illustrates the difference.
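
    (To make the distinction concrete, here is a toy sketch in Python. None of it comes from Searle or from the post – the names and sample rules are invented purely for illustration – but it shows the gap between a fixed lookup table and a machine that keeps state and accepts new rules.)

        # Searle's room as bbk characterizes it: a fixed, stateless lookup table.
        # The rule book never changes and nothing is remembered between inputs.
        RULE_BOOK = {"你好": "你好！", "你好吗？": "我很好。"}  # invented sample rules

        def searle_room(symbol):
            # Same input, same output, forever. No context, no learning.
            return RULE_BOOK.get(symbol, "？")

        # The machine bbk describes: it remembers previous state (context) and
        # can be "re-programmed" by sliding new rules under the door.
        class StatefulRoom:
            def __init__(self):
                self.rules = dict(RULE_BOOK)
                self.context = []  # the conversation so far

            def teach(self, symbol, reply):
                self.rules[symbol] = reply  # learning: the table itself changes

            def respond(self, symbol):
                self.context.append(symbol)  # state carried between inputs
                return self.rules.get(symbol, "？")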

    Wait, now doesn’t that sound like begging the question? Searle set out to prove that a Turing machine is incapable of giving us insight into human understanding by designing a machine with the explicit purpose of giving us no insight into human understanding. I did say he was begging the question earlier, and this is why.

  • http://www.blacksunjournal.com BlackSun

    Ebonmuse, have you really read The Singularity is Near cover to cover? I’m going to assume, based on your post, that the answer is no.

    At the back of the book, Kurzweil answers nearly every one of his critics’ arguments, including virtually every point you’ve raised here.

    I think you set up a straw man of Singularitarian claims (god-like super-intelligence) and then easily knock it down. At the least, Kurzweil’s view is far more nuanced and limited, and allows for far more uncertainty than you give it credit for.

    As for greater ability to gather data – are you kidding me? The interconnection of the internet will increase just as exponentially as computing power. Sensors, cameras, cell phones and biometric data are already allowing some truly freaky pattern recognition, whereby the grid is aware of who you are, where you are, what you are doing, who you are with, and so on. That information has not yet been effectively brought together, but it will be. The grid/web will have trillions of data-gathering nodes of all types. Both the natural and the human world will be positively brimming with instrumentation. We can expect sensors to shrink in size, drop in cost, and increase in sensitivity to the point where even frivolous uses become cost-effective. Applications? A bartender could have a display of whose drinks were getting empty. Every product, even disposable cups and such, will probably wirelessly transmit some kind of state information.

    In short, we don’t know where this is headed or what the Singularity will or will not be. It’s an extrapolation of trends with some possible implications that point in a general direction, not a prediction. All the predictions you showed which turned out to be laughably wrong were extremely specific.

    Kurzweil’s predictions about levels of machine intelligence have been holding true since he started making them in 1990.

    For someone of your awareness to look at the amazing and converging progress in Computing, Communication, Genetics, Nanotechnology, and Robotics and conclude that something like artificial human-level intelligence will never happen is, I’m sorry, really troubling. There has to be some level of denial. You have to have some personal stake in this not happening – in the permanent and irrefutable specialness and supremacy of humanity over machines – to be this blind. As some other commenters have remarked, it sounds almost like dualism. When you talk about having to “understand the brain fully in order to understand even a part of it,” it’s like you think there’s some force or property of the brain that’s inherently incomprehensible and irreducible (shades of ID there). If you’re not a dualist, and if you reject supernaturalism, what could possibly be in a physical brain that could not be reverse-engineered and duplicated, given the time?

    I predict that by 2040 you will sound as laughably wrong about this as Napoleon or Lord Haldane. But what do I know? That’s a personal prediction, not a scientific one. I admit I don’t know. I hope I live to find out. The Singularity sounds plausible, but I don’t really know. On this point, you think you know, and you’re basing your opinion on a “feeling” – it’s an argument from personal incredulity, not an argument from hard evidence (since you can’t prove a negative). That’s the most troubling part. Your anti-Singularity fervor (and this is not the first statement you’ve made about it) does not seem rational.

    I could totally understand if you said it was unlikely. But you said “I reject the notion.” That’s pretty strong. You have to admit humanity may figure out how to construct and program an artificial brain to the level of human intelligence, even if they just duplicate it in software or grow it genetically in a lab. It’s possible. By Moore’s Law, we will have the computing power of a human brain for $1,000 in just 12 years.
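
    (For what it’s worth, the arithmetic behind a claim like that is easy to reproduce. Every number below is an assumption chosen for illustration – a Kurzweil-style estimate of the brain’s raw capacity, a guess at present price-performance, and a doubling period – not a figure taken from this thread.)

        import math

        BRAIN_OPS = 1e16         # assumed estimate of brain operations/sec (Kurzweil-style)
        OPS_PER_1000_USD = 1e12  # assumed ops/sec that $1,000 buys today (illustrative guess)
        DOUBLING_YEARS = 1.0     # assumed price-performance doubling period

        doublings = math.log2(BRAIN_OPS / OPS_PER_1000_USD)
        years = doublings * DOUBLING_YEARS
        print(f"~{doublings:.1f} doublings needed, i.e. roughly {years:.0f} years")

    With those inputs the answer comes out to about 13 years; shift any one assumption and the date moves by years, which is exactly how sensitive such forecasts are.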

    Someone could also figure out how to wire up both natural and artificial brains to the internet. Simple forms of direct neural interface have already been accomplished. If those two things happened, then some form of the Singularity is possible.

    Come on, Ebonmuse. You’ve taken a pretty indefensible position. Come back to rational skepticism. Admit you don’t really know. Admit it’s possible.

  • http://mindstalk.net Damien R. S.

    Problem is that there are a bunch of versions of “the Singularity”. The original was better-than-human intelligence, period (Vinge). Then came the idea that as we improved our intelligence, the next generation could improve theirs even faster, because they were smarter (fallacy: they might be more complex as well, taking more work to self-improve). Then faster and faster technological progress as a spinoff. And then AI, nanotech, and uploading collided into an AI-God scenario where an AI wakes up and takes over the world in days, uploading us all into cyber-Heaven.

    Which is probably ridiculous, but just because some people have latched onto that as “the Singularity” doesn’t make the original version – “when we can make beings smarter than us, weird things happen” – go away. And note I said beings, not machines; Vinge’s four paths to Singularity were AI, cybernetic enhancement of humans, cybernetic links *between* humans (group minds), and purely biological enhancement of humans (which itself could come through genetics, drugs, manipulated brain development, or other means).

    Conversely, AI or uploads could make things go very weird very quickly without being smarter at all. Unaging beings who can be duplicated at will. Data collection a bottleneck? Mass-produce your scientists. Motivations will probably be strongly, if not perfectly, manipulable. And what happens to liberal democracy when you can duplicate a voter in a few minutes, or send off your own suicide bombers?

    Personally I kind of think “the Singularity” is a poisoned term, and prefer to talk of a Cognitive Revolution, what happens when we can manipulate mind.

  • bestonnet

    Damien R. S.:

    Problem is that there’s a bunch of versions of “the Singularity”. The original was better-than-human intelligence, period (Vinge).

    The original notion came from I. J. Good in 1965.

    Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. (Source)

    Damien R. S.:

    Then an idea that as we improved our intelligence, so the next generation could improve theirs, even faster because they were smarter. (fallacy: they might be more complex as well, taking more work to self-improve.)

    That is despite the fact that they could do more work in a given time and handle more complexity than we humans can. Even a simple speed increase, without any increase in capability, would be worthwhile, and it really is a fallacy to assume that it won’t happen.

    Damien R. S.:

    Which is probably ridiculous,

    So is heavier-than-air flight, but that doesn’t stop anyone from getting on a 737.

    Damien R. S.:

    Vinge’s four paths to Singularity were AI, cybernetic enhancement of humans, cybernetic links *between* humans (groupminds), and purely biological enhancement of humans (which itself could be genetic, drugs, manipulated brain development, or others.)

    Yes – if a human were uploaded, they might end up starting it; even a merely augmented human might be able to start it, although AI does look more promising.

    Damien R. S.:

    And what happens to liberal democracy when you can duplicate a voter in a few minutes, or send off your own suicide bombers?

    That will be interesting to see.

    One thing I can say is that it is essential to liberal democracy that we not stop transhumanism, as some idiots have suggested (it’s not as if a country could halt technological progress and remain a liberal democracy anyway).

    Damien R. S.:

    Personally I kind of think “the Singularity” is a poisoned term, and prefer to talk of a Cognitive Revolution, what happens when we can manipulate mind.

    Which would be just like replacing “atheist” with some other word (“bright,” etc.): completely pointless.

  • http://stepping-stones.livejournal.com/ D

    OK, nobody’s said anything here for over two months, but I still worry about hijacking, so I’ll try to keep this short:
    A: I agree that computational power does not equate to knowledge – though I would like to point out that another definition of intelligence is “the capacity to learn,” which improved computational power would in turn improve, but you seem to have said as much in different words in earlier comments.
    B: I agree that there won’t be god-like intelligences running around like in The Metamorphosis of Prime Intellect. As cool a story as that is, there are good, solid obstacles standing in the way of anything like it (and they’re not just technological). No matter how wild the improvements, there will be a plateau.
    C: If by “technological singularity” we mean “a point where all current understanding [of technological progress] breaks down,” then this has already happened: consider the discovery of fire, the invention of agriculture, the Industrial Revolution, and the advent of the microcomputer – before each of these developments, nobody could have predicted the progress of our technology afterwards. Only in this sense is the analogy to a gravitational singularity apt: “magic” doesn’t happen on the other side of the event horizon; we are simply unable to see past it, and in a huge way. Thus, I think this is how technological singularities should be understood.
    D: “True AI” – self-aware machine-based intelligence – could accelerate scientific progress in unpredictable ways, for the simple reason that an actual intelligence capable of directly interfacing with the instruments of experimentation with mechanical precision could speed up every single step of the process with the sole exception of the time it takes for the results themselves to be observed. I think this is exactly the kind of acceleration of scientific progress to which you object in your post. Again, it would not be limitless, but it would be unpredictable, which is the point of a singularity.
    E: If self-modeling is the key to self-awareness (and thus intelligence), then I think that the Singularity (capital S meaning the AI singularity, at least for purposes of this comment) could happen in the next 20 years. I think that a talented programmer could, with existing technology, create a genetic algorithm designed to model and improve itself, such that it is the program it’s working on (a conventional genetic algorithm is sketched after this list, for reference). Crude though it may be, if it developed self-awareness, then that would be the Singularity.
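
    (For readers unfamiliar with the term in point E, this is roughly what a conventional genetic algorithm looks like: a population of candidate solutions improved by selection, crossover, and mutation. The self-modeling variant D imagines, where the program is its own fitness target, remains speculation; everything below is an ordinary toy example with invented parameters.)

        import random

        GENOME_LEN = 32      # illustrative genome size
        POP_SIZE = 50        # illustrative population size
        MUTATION_RATE = 0.02
        GENERATIONS = 200

        def fitness(genome):
            # Toy scoring function ("OneMax"): count the 1-bits. A self-modeling
            # GA, as in point E, would instead score versions of its own code.
            return sum(genome)

        def mutate(genome):
            return [bit ^ 1 if random.random() < MUTATION_RATE else bit
                    for bit in genome]

        def crossover(a, b):
            cut = random.randrange(1, GENOME_LEN)  # single-point crossover
            return a[:cut] + b[cut:]

        population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                      for _ in range(POP_SIZE)]

        for generation in range(GENERATIONS):
            population.sort(key=fitness, reverse=True)
            if fitness(population[0]) == GENOME_LEN:
                break  # a perfect genome has evolved
            parents = population[:POP_SIZE // 2]  # truncation selection
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(POP_SIZE - len(parents))]
            population = parents + children

        best = max(population, key=fitness)
        print(f"best fitness after {generation + 1} generations: {fitness(best)}")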

    A couple of things to keep in mind are that 1) the Singularity is a technological milestone, not the products or progeny of that milestone; and 2) the hallmark of a technological singularity is that all models for predicting our progress become unreliable, not that progress becomes unlimited. There will always be a plateau, but a singularity has occurred when we can’t say with any degree of certainty just where that plateau will be. I agree that the Singularitarians who expect “unlimited” progress are thinking magically and don’t properly understand what they’re talking about, and that a more conservative outlook is called for. However, I also think that even the most extravagant predictions (except the totally impossible ones) are plausible to a degree (albeit a small one), for the reason that Socrates could not have conceived of the LHC – it is so far beyond his experience that he would have no way to think of it, it would be utterly alien to him – and so things may become to us after some technological development. Maybe even in our lifetime. Some such developments will doubtless occur in the wake of the next technological singularity, AI or otherwise.

    (If that’s not too much already, I have a more in-depth version of this response – especially point D – on my “yes I’m vaguely ashamed to admit I have a” livejournal, which is linked as my website (same date as this comment).)

  • bestonnet

    There is always Clarke’s third law.

  • Burne

    The first problem I have with the Singularity is that it is often conflated with magic – as if technology were magic, when it isn’t, despite Clarke’s statement on the subject. AIs and uploads all have to exist in the real world (whatever that is), and all depend on obeying the physical laws of our universe. For example, nanotechnology is only as good as the materials that make it up: it can mine for specific resources that are already there, but it cannot transmute one material into another – unless we are talking about a whole other level of technology, in the indistinguishable-from-magic category. What are the tradeoffs? Anything that exists in the real world has tradeoffs, and given the limits of real-world technologies, what resource problems arise (or is the argument that it’s magic and wouldn’t have to worry about that)?

    The second is the problem of artificially speeding up thinking. While accelerating the evolution of thought is seen as a great thing from the standpoint of technological progress, how does the AI cope with having its subjective time stretched so far relative to real time? In other words, how does time affect an AI for whom every minute could be, subjectively, 10,000 years (or whatever)? That’s longer than human civilization has existed – long enough for anything to happen – yet this time is treated as inconsequential, as if it carried no tradeoff of its own. Would the AI even exist that long, and how are you designing it to exist that long if it does?

    How is the system being designed to maintain a goal structure over that long a span, considering we can’t design something to last that long even now?
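
    (The scale of that figure is easy to check – this is just arithmetic on the commenter’s own “10,000 years per minute” number, nothing more.)

        # Illustrative arithmetic only: the speedup implied by experiencing
        # 10,000 subjective years in one real minute.
        SECONDS_PER_YEAR = 365.25 * 24 * 3600
        subjective_seconds = 10_000 * SECONDS_PER_YEAR
        real_seconds = 60
        print(f"required speedup: ~{subjective_seconds / real_seconds:.1e}x")
        # -> required speedup: ~5.3e+09x, i.e. about five-billion-fold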

    The third is detachment from reality. We see this all the time with people living in fantasy worlds (World of Warcraft, whatever). If the AI or upload can design a reality as good as ours, what’s to prevent it from living in that one instead? What’s the kick keeping it here in our reality? And, relatedly, what is to prevent its own artificial reality from attaining more validity in the formulation of its hypotheses and actions than our reality – in other words, what’s to prevent it from going insane, rather as people do all the time?