The End of Moore’s Law, and Beyond

[Embedded video: Michio Kaku on the end of Moore’s Law]

Moore’s Law–which states that the number of transistors that can be placed on a circuit doubles every 18–24 months–is already coming to an end. The exponential growth in the density of silicon circuits has driven much of the technological progress of the last 40 years or so, but it can’t continue forever. This is why the techno-Utopians who prattle on about “exponential growth” weary me so: exponential growth is never sustainable. Eventually, you hit the limits of your materials, your creativity, your power, your programming, your economics, or any number of other factors that inhibit the uninterrupted development of technology.
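As a back-of-the-envelope illustration (my numbers, not the post’s), the doubling claim compounds startlingly fast. Starting from the Intel 4004’s roughly 2,300 transistors in 1971 and doubling every two years:

```python
# Rough sketch of Moore's Law as "transistor count doubles every two years,"
# starting from the Intel 4004 (~2,300 transistors, introduced 1971).
start_year, start_transistors = 1971, 2300

for year in (1991, 2011):
    doublings = (year - start_year) // 2          # one doubling per two years
    projected = start_transistors * 2 ** doublings
    print(f"{year}: ~{projected:,} transistors projected")
```

The 2011 projection lands in the billions, close to the transistor counts of real chips from that era, which is both why the law held for so long and why each further doubling asks so much more of materials and manufacturing.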

Michio Kaku, a theoretical physicist, does quite a nice job in the embedded video of explaining the inevitable end of Moore’s Law, and the challenges we face in moving seamlessly from silicon-based processors to other computing substrates, such as molecular and quantum computers. He’s placing the “collapse” of Moore’s Law 10 years out. I think we’re already at the beginning of that collapse, and although there are some fascinating candidates for a post-silicon future, a lot of them remain little more than tantalizing theories and wonderful experiments.

And all of them run into a single blunt reality: there is no Moore’s Law for programming. Programming advances at a linear rate. There’s even a corollary to Moore’s Law–sometimes called May’s Law–which states that while processing speed doubles every two years, software efficiency halves in the same time period. Obviously, this is meant more as commentary on Moore’s Law and its limits than as a real corollary, but there is a kernel of truth in it: programming does not keep pace with processing. That’s just a hard fact of computing, and techno-Utopians have never understood it.
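The arithmetic behind the quip is easy to check. A minimal sketch of my own, taking the “law” literally: if hardware doubles every two years while software efficiency halves over the same period, the two exactly cancel and effective performance never moves:

```python
# May's Law taken literally (it is a joke, not a real law): hardware
# performance doubles every two years, software efficiency halves.
# The product -- effective performance -- stays flat.
hardware = 1.0     # relative hardware speed
efficiency = 1.0   # relative software efficiency

for year in range(0, 21, 2):
    effective = hardware * efficiency
    print(f"year {year:2d}: hardware x{hardware:6.0f}, "
          f"efficiency x{efficiency:.4f}, effective x{effective:.1f}")
    hardware *= 2
    efficiency /= 2
```

Every row prints `effective x1.0`: in this cartoon version, twenty years of exponential hardware gains buy the user nothing. Reality is less tidy, but the point stands that software gains do not compound the way transistor counts do.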

About Thomas L. McDonald

Thomas L. McDonald writes about technology, theology, history, games, and shiny things. Details of his rather uneventful life as a professional writer and magazine editor can be found in the About tab.

  • Dennis Mahon

    May’s Law–which states that while processing speed doubles every two years, software efficiency halves in the same time period.

    I’ve never heard of this before; is this because the programming language cannot keep up with the demands we put on it?

  • Thomas L. McDonald

“May’s Law” isn’t so much a law as it is a commentary on the inability of programming to match the pace of processing speed. There are a couple of issues: first, more space and power tend to lead to expansive, sloppy programming. (Code expands to fit the available space.) A couple of decades of progress in hardware have seen massive software bloat, not (as a rule) better software. Second, programming languages simply do not develop exponentially. They hardly develop at all. The push into parallel processing has its own challenges, since parallel programming is very hard. Here’s a good summary that explores it in more depth than I can offer, since I’m not a programmer:

  • Dennis Mahon

After reading the article, I think I can grasp it like so:
    We begin riding a tricycle, and our skill level is acceptable.
    We then move to a bicycle, and our skill level must improve until we can master that.
    Then we move to a car, and our skill level moves into that level of complexity to master that.
    From there, we move to piloting a plane, and our skill level must change to master that requirement.
    So it is with programs.
    (Am I making any sense, or am I just embarrassing myself?)
