Will Moore’s law fail us?

As a follow-up to my “Fifty year intervals” post: will changes in computing technology over the next fifty years really be as radical as, or even more radical than, the “punch cards to smartphones and Google’s entire product line” transition of the past fifty years?

A simple argument for the answer being “no” is that improvements in computer hardware have generally depended on shrinking transistor sizes. In fact, the strict interpretation of Moore’s law is that it refers specifically to the number of transistors on a chip, not to computing power in the abstract. And not many people know this, but we’re rapidly approaching the point where it will be physically impossible to shrink transistors any further, because their size will be measured in atoms.

How rapidly? In a 2001 article, Ray Kurzweil (whom, if anything, you’d expect to be over-optimistic about this issue) said that “Moore’s Law will die a dignified death no later than the year 2019.” Somewhat more recently, Sandberg and Bostrom cite an industry report projecting current trends out to 2022. So some time around then, or not too long after, we can expect to run out of room to shrink transistors.
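To make that timescale concrete, here’s a crude extrapolation. Everything in it is an assumption on my part: a 22 nm process as the starting point, the classic ~0.7x linear shrink per roughly two-year process generation, and ~0.5 nm as the rough spacing between silicon atoms.

```python
import math

# Naive extrapolation: how many ~0.7x shrinks fit between today's feature
# size and atomic dimensions? All starting figures are assumptions.
feature_nm = 22.0   # assumed current process node (early 2010s)
atom_nm = 0.5       # rough spacing between silicon atoms
shrink = 0.7        # classic ~0.7x linear shrink per ~2-year generation

generations = math.log(atom_nm / feature_nm) / math.log(shrink)
print(f"~{generations:.0f} generations, ~{2 * generations:.0f} years")  # ~11, ~21
```

Even this naive count leaves a decade or two before features are literally one atom wide; the earlier dates above reflect the expectation that practical and economic limits bite well before that.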

However, there are proposals to squeeze more computing power out of every dollar, every pound, what have you, that don’t involve shrinking transistors. Both of the above links discuss a number of them. And there’s at least some reason to be optimistic about them; I’ve researched this issue as part of my work for MIRI, and everyone who’s looked at it seems to agree that the current exponential growth in computer power predates the integrated circuit, though there’s disagreement about whether it goes back to the early 20th century or just to WWII.

Sandberg and Bostrom write:

Pessimistic claims are often made to the effect that limitations of current technology are forever unsurpassable, or that theoretically possible technological systems such as the above will be too costly and practically difficult to ever become feasible. Still, given that computer technology has developed in a relatively stable manner despite several changes of basic principles (e.g. from flip‐flops via core memory to several semiconductor generations, from vacuum tubes to transistors to integrated circuits etc) there is no strong reason to assume they will break because current technology will eventually be replaced by other technologies. Large vested interests in continuing the growth are willing to spend considerable resources on closing technology gaps. A more likely end of growth scenario is that the feedback producing exponential growth is weakened by changes in the marketplace such as lowered demand, lowered expectations, rising production facility costs, long development times or perhaps more efficient software.

Note that hardware isn’t the only issue here. There’s also software. If current trends in computer hardware continue, will we actually be able to put all that computing power to good use? On the one hand, a lot of the things we use our massive computing power for are pretty boring, like Facebook. On the other hand, if you survey Google’s whole range of products, from the under-appreciated but important Gmail spam filter to driverless cars, there’s some really cool stuff in there, and they’ve got ambitious plans for more.

I’m curious to know what other people think of this. I can’t claim to know much about how plausible the alternatives to the “shrinking transistors” strategy for making more powerful hardware really are. Still, when I ask myself what I actually believe, I feel fairly confident that computing power will continue to increase rapidly, if perhaps not quite as rapidly as it has, far into the foreseeable future.

Why do I feel so confident? I’m not entirely sure. Maybe I’m being tricked by the fact that I’m still young and Moore’s law has been in effect for my entire lifetime, so I just can’t imagine life without it. On the other hand, if enough people can’t imagine life without it, and there’s any physical possibility of pushing forward past the limit of our ability to shrink transistors, then maybe it will become a self-fulfilling prophecy.

What do you think?

  • JohnH2

From AI to fusion power to improved batteries to room-temperature superconductors to reusable single-stage-to-orbit rockets, there are a lot of examples of highly desirable things that have not yielded easily to the vast amounts of effort and money thrown at them.

Moore’s law will fail us in the next few decades, but improvements in computing will continue for quite a long time afterwards. The problems facing the DOE’s exascale computing project are not really related to processors at all. Instead, there are problems with working memory, with communication, with loading and unloading data from disk, and, hugely, with power requirements (building a dedicated power plant for a supercomputer is supposedly not an acceptable option).

Surprisingly enough, some of these same problems appear at the other end of the scale: smartphones will hit limits very soon without continued improvements in power usage, since improvements in batteries are not following anything like Moore’s law. The prediction is that when we reach exascale supercomputers, the average person will be able to have a teraflop cell phone, but at current levels of power usage and battery capacity that phone would last just a few minutes and be significantly heavier than current phones (a rough back-of-envelope sketch follows at the end of this comment), or, I suppose, people will have a backpack that provides power to the computing devices they carry. (The prediction is also that petascale desktop computing will be available at that point.)

Already the top petascale supercomputers cannot write data to disk fast enough to capture everything that is computed, so continuing to throw more computing power at problems, as has been happening, won’t actually help solve them. If Moore’s law fails and we haven’t addressed these other problems, but we then start successfully doing so, from a user’s standpoint it will appear as though Moore’s law has continued unchanged, until some local optimum is reached, I suppose.
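    Roughly, as a back-of-envelope for that phone claim (every number here is an assumption for illustration): give the phone a ~10 watt-hour battery, and hold compute efficiency at the ~2 gigaflops per watt of today’s most power-efficient supercomputers.

    ```python
    # Back-of-envelope: how long could a phone battery sustain one teraflop?
    # All figures below are illustrative assumptions, not measurements.
    BATTERY_WH = 10.0        # assumed phone battery capacity, watt-hours
    GFLOPS_PER_WATT = 2.0    # assumed efficiency, supercomputer-class circa 2013
    TARGET_GFLOPS = 1000.0   # one teraflop

    power_draw_w = TARGET_GFLOPS / GFLOPS_PER_WATT   # 500 W sustained draw
    runtime_min = BATTERY_WH / power_draw_w * 60     # battery life in minutes
    print(f"{power_draw_w:.0f} W draw, roughly {runtime_min:.1f} minutes of battery")
    ```

    On those assumptions the battery is dead in about a minute, so power efficiency would need to improve by a couple of orders of magnitude before an all-day teraflop phone makes sense.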

  • Question Everything

    In a sense, we’re already having trouble putting hardware to good use. After all, one of the primary ways around Moore’s Law (in the strict sense) is parallelism: multiprocessor “cell” technology like the PS3 uses, handing off processing to video cards and GPU components, and so on. But these are much harder to program for, and they’re not nearly as extensible as Moore’s Law has been these past decades, since you can only pack so many processors into a box before it melts or needs dedicated power lines to operate.

    We need code in between what the programmer wants (fast operation) and what the hardware can provide (single-core, multiprocessor, cell, etc.). There are various libraries for this right now, but they’re not in common use, nor are they necessarily easy to use; most of the time, the easier they are to use, the less useful they are in terms of speed and performance. Compare this with game tech like Havok or the Source Engine, which make physics and general game creation easier, respectively, and do it rather decently, versus pthreads, MPI, or Intel’s Threading Building Blocks for threaded programming across multiple processors. (A minimal sketch of what such an in-between layer looks like follows below.)
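    To be concrete, here’s a minimal sketch of such a layer, using Python’s standard multiprocessing module as a stand-in (my own illustrative choice, not one of the libraries named above): the programmer writes ordinary per-item code, and the pool decides how to spread it across processors.

    ```python
    # A "library in between": the programmer expresses the work, and the pool
    # handles worker creation, scheduling, and collecting the results.
    from multiprocessing import Pool

    def work(n):
        # Stand-in for an expensive, independent computation.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [10_000] * 100
        with Pool() as pool:                  # one worker per CPU core by default
            results = pool.map(work, inputs)  # drop-in parallel version of map()
        print(len(results), "results computed in parallel")
    ```

    The trade-off shows up even in this toy: pool.map is far easier than hand-rolled pthreads or MPI, but it surrenders fine-grained control over scheduling and communication, which is usually where the remaining performance hides.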

  • Cyrus Draegur

    “Why do I feel so confident? I’m not entirely sure.”

    …You’re a good man, Hallq. It takes a lot of guts to admit when you don’t know something, especially in published (or even e-published) material. After all, a lot of folks like to take admissions of not knowing everything to be a sign of weakness for some stupid reason, when what we should really be remembering is something in particular that William Shakespeare wrote:

    “The fool doth think he is wise, but the wise man knows himself to be a fool.”
