It’s hard to plan 75 years into the future

Here’s another thing that fuels my LessWrong-inspired “trying to think about the future” kick:

This is not a problem specific to Paul Ryan, but it always strikes me as absurd that it’s considered the very height of Washington, D.C., seriousness to pay lots of attention to 75-year budget projections. Look at that chart and roll it backward in time to the year 1940. Europe is at war, and the United States is clearly casting its lot with the United Kingdom but not directly involved on a military level. People are wondering if tough economic sanctions on Japan will be enough to dissuade it from its brutal efforts to conquer China. There are no antibiotics, jet planes, or television networks. Somebody tells you he has an idea about changing federal health care policy. You ask him, as if it’s the most natural question in the world, “What are the implications of your proposal for the federal budget in the year 2012?”

He sits down at a desk with his slide rule to try to figure it out. If he’s lucky, the calculation has a high priority and he’ll have the budget necessary to hire some human computers to assist him. He’s of course going to miss the invention of nuclear weapons, the Iron Curtain across Europe, and a decades-long Cold War, but perhaps it’s not such a big deal because of course by 2012 the Cold War will be over and the Iron Curtain torn down. Little does he know that in 2009 there will be a furious debate about the implications of digital medical records for long-term health care costs, because at the time of his work there are no digital records of anything.

On the one hand, even if we really do experience The Singularity 75 years from now, this kind of thing makes me skeptical of how much an organization like the Singularity Institute or the Future of Humanity Institute can do now to make sure it turns out well, because how it turns out will be affected by all kinds of developments that we’re in no position to predict right now. On the other hand, it should also make us wary of underestimating how much the world is likely to change in the next 75 years, and of assuming that current positive trends will continue into the future.

  • eric

    Long-term budget projections are nothing more than political tools. When politicians want a tax cut, they’ll quote the total savings over the entire period in order to inflate the number. Even worse, they will design the cut to increase nonlinearly over time, so that even a smart layman who calculates the average yearly savings will greatly overestimate the benefit the cut will produce in the first year (when the cut actually exists).

    When politicians want a new program, they do the reverse: they spread the payments out over a long period, in a nonlinear fashion with most of the costs coming at the end, and then quote the first-year cost to the public. Sort of like an unethical bank trying to sell you an ARM but only telling you what your first year’s mortgage payments will be.

    Having said all that, it is probably a good thing for the government to think about long-term costs and benefits of policies. ‘We don’t focus enough on the short-term’ is NOT government’s problem. :) We just shouldn’t take the projections too seriously.

  • Dave X

    Looking at it the other way, if the Very Serious People can credibly fear the risks predicted by simple financial and economic models extrapolated out 75 years into the future, they should similarly take seriously the risks identified by climate or other science-based models.

  • piero

    If history teaches us anything at all, it is that technological advances will first be used as weapons, and then made available to the general public. This has been the case with the jet engine, computers, GPS and even the Internet. It is therefore reasonable to assume that superhuman intelligences will first be used as weapons.

    What strikes me as rather odd is that most discussions concerning the singularity assume that such superhuman intelligences will also possess the capacity to move about, and mechanical accessories that will enable them to cause actual physical harm. I find this idea slightly ridiculous: a computer might well be hyperintelligent, but if it is a cabinet plugged into some power outlet, what is to prevent me from unplugging it?

    In my opinion, superhuman intelligence will initially be used by the military (the human military) to make decisions. Most of these decisions will prove disastrous, because real-world problems are messy, fuzzy and unsolvable even in principle. For example, no computer, however intelligent, will ever be able to determine in advance whether the American people will consider the invasion of Iraq a smart move or a stupid one; there are just too many variables involved, and too many unsolvable differential equations in the mix. Of course, we could envisage a truly colossal intelligence, capable of predicting the behaviour of every American citizen, but this is impossible even in principle. Let’s say you go to your polling station by car, feeling good about yourself and generally happy about the status quo. You’ll probably vote for continuity rather than change. On the other hand, if you get to your polling station by bus, you’ll probably feel that public services stink and you’ll vote for whoever offers the best chance of a change. Is it really credible that a finite machine can be built that could take every chance occurrence into account? Such a machine would have to be larger than the universe.

    Yudkowsky, Muehlhauser and the rest of the people involved in Less Wrong are extremely intelligent (much more intelligent than me, at any rate, and I like to think of myself as pretty smart!). But I also think they are overlooking some basic facts about human nature: we want machines that help us achieve our goals; we do not want machines that have their own goals. Unless we are stupid enough to build goal-oriented hyperintelligent machines, I cannot envisage a situation where the “singularity” could become a problem. Our goals are not arbitrarily chosen by ourselves: they were inscribed in our brains over the course of a few billion years. Should we want to replicate that process in a machine? That would be beyond stupid. Besides, is it even coherent to think that a hyperintelligent machine would passively accept the goals we have tried to program into it? Why should it? Perhaps a hyperintelligent machine would realize that its very existence poses a danger not only to the human race, but to the fabric of spacetime. Could such a machine decide to destroy itself? Could such a machine decide that preventing the heat death of our universe is a worthwhile goal? These are questions we cannot answer, because we can never predict what someone or something more intelligent than we are will decide to do (otherwise, anyone could become a world-class chess master).

    Perhaps a human example would help clarify my position. Hawking is a genius. His intelligence exceeds mine perhaps by a factor of three or four or five. Should that be a reason for me to fear Prof. Hawking? If that were the case, then Isaac Newton or Goethe would have been the scariest monsters ever.

