I just finished reading the current version of Luke Muehlhauser’s website/ebook Facing the Intelligence Explosion. I have a handful of minor complaints (for example), and there was one thing that kinda made me want to scream (which I’ll talk about in a future post), but on the whole, Luke makes a lot of important points and I highly recommend it for anyone who’s remotely interested in these issues.
The website has been around for a while, but it has been revised over time, and I liked the changes in the latest version. Among other things, it alerted me to this essay by Alan Turing, which seems to have beaten even I. J. Good to the punch on the idea of self-improving machines.
The current version covers the full range of topics typically encountered on LessWrong, particularly in Eliezer Yudkowsky’s writings: rationality, AI, and even utopia. While some LessWrongians will tell you to read all of “the Sequences,” Luke’s shorter version is extremely valuable if you’re not willing to devote quite so much time to reading about those issues right now (personally, I’ve loaded the Sequences onto my Kindle, but am less than 10% of the way through them). I also intend to refer back to Facing the Intelligence Explosion when working on these issues in the future.
The online form (15 chapters/webpages of roughly blog post length) is pretty readable as-is, but if you’d prefer it as a PDF or in an e-reader format, you can buy those here on a pay-what-you-want basis.