David Chalmers On “The Singularity”

What happens when machines get smarter than humans? Presumably, they will build machines smarter than themselves, which will in turn build machines smarter than themselves, and so on toward infinity. Philosophy Bites interviews David Chalmers, a leading philosopher of mind, about the concept and its possible realization.

Thanks to 3QuarksDaily for the heads up.

Your Thoughts?

About Daniel Fincke

Dr. Daniel Fincke has his PhD in philosophy from Fordham University and spent 11 years teaching in college classrooms. He wrote his dissertation on ethics and the philosophy of Friedrich Nietzsche. On Camels With Hammers, the careful philosophy blog he writes for a popular audience, Dan argues for atheism and develops a humanistic ethical theory he calls “Empowerment Ethics”. Dan also teaches affordable, non-matriculated, video-conferencing philosophy classes on ethics, Nietzsche, historical philosophy, and philosophy for atheists that anyone around the world can sign up for. (You can learn more about Dan’s online classes here.) Dan is an APPA (American Philosophical Practitioners Association) certified philosophical counselor who offers philosophical advice services to help people work through the philosophical aspects of their practical problems or to work out their views on philosophical issues. (You can read examples of Dan’s advice here.) Through his blogging, his online teaching, and his philosophical advice services, Dan specializes in helping people who have recently left a religious tradition work out their constructive answers to questions of ethics, metaphysics, the meaning of life, etc. as part of their process of radical worldview change.

  • The Vicar

    I categorically refuse to listen to arguments presented as audio or video (learn to write, kids! I don’t listen to radio for a reason!) so this may be covered in the link, but:

    The usual notion of the Singularity is built around the unspoken assumption that it is possible for humanity not only to build an artificial consciousness, but to build one which is better in some sense than the one which can be found in the head of your average human being. There is zero proof that this is true, and the bulk of historical examples suggest that it is not. In the real world, the only branch of AI research which has produced anything resembling significant results has focused on duplicating what skilled humans do, without trying to understand or improve on the inner mechanisms. To date — and this statement could have been made at any time in the past 30 years, and is likely to continue to be true in the future — AI has been a huge sinkhole for investment with very little positive result.

    Many of the woo-merchants who like to talk about this concept as though it were proven go even further: not only is a better consciousness possible, but each such consciousness will (being better than the previous ones) be able to produce a successor which will be even better, ad infinitum. Somehow I am reminded of G. K. Chesterton’s statement that when we see that one puppy has grown larger than the others in the litter, we know that it must inexorably become larger than the moon, and if the lawn has grown taller than your shoes it will obviously someday be taller than the house.

    Yes, there are less woo-filled definitions of “the Singularity” — and once you clear away the idiocy, we’re almost there already. I already have artificial extensions to my memory; they are the Internet and my various references. I have artificial extensions to my reasoning capacity — there are all sorts of programs I can load on my computer to aid me. The only reason I’m not a cyborg is that I don’t need to be one to use my tools. Yet being in such a state doesn’t seem to be doing most of us any good — our economies, our ecologies, our cultures are in the midst of devastation at the moment. Clearly, passing through the Singularity is not an unalloyed pleasure.

