The Ethical System Followed by the Tech Lords

December 5, 2022

Those who do not believe in God or in moral absolutes must come up with their own moral principles and give them a different basis.  There is a new system of ethics that is winning over many of our cultural elite and is especially popular among tech entrepreneurs, such as Elon Musk and Sam Bankman-Fried.  It goes by the rather inelegant name of “longtermism.”

In the words of the quite helpful Wikipedia article on the subject, “Longtermism is an ethical stance which gives priority to improving the long-term future.”

We can perhaps best understand longtermism and its popularity among technologists with the help of Jacques Ellul, the subject of our last post.  It is essentially an application of “technique” to ethics.  The technical mindset, says Ellul, makes everything the means to an end.  For longtermists, the end is the humanity of the future, often imagined as a technological “eutopia” in which the problems of the past have been solved, human beings have spread throughout outer space and merged with the digitalized artificial intelligence they have made, and they live, in the words of one of the philosophy’s theorists, a life of “surpassing bliss and delight.”

Whatever will help us reach this future, however distant it might be, is good.  Whatever might prevent it from happening is bad.  And our actions here and now will contribute either to reaching this goal or to preventing it.

Our first moral task, then, is to avert “existential threats,” that is, developments that threaten the very existence of the human race.  The first priority must be to ensure the survival of humanity, to prevent our extinction.

So longtermists are worried about climate apocalypse, pandemics, asteroid strikes, and social collapse.  A major preoccupation of longtermists today is the prospect of artificial intelligence advancing to such a point that our linked computers will attain consciousness and agency, acquiring near-infinite knowledge and capacity, so that this technological superbeing might exterminate the biological entities that created it.

In addition to finding ways to avoid existential threats, longtermists also seek to shape the “trajectory” of the future.  That is, to ensure not only the existence of future beings but also the quality of their lives, so as to attain the “eutopia” of maximum good.  This can be achieved by changing and influencing social values and directing them to these futurist ends.

Longtermism is essentially a version of utilitarian ethics, the notion that what is good is what is useful, or what provides the greatest good for the greatest number.  But surely this is only theoretical, since few of us can have much impact on whatever the future is going to bring.

But this is where longtermism joins with a movement in philanthropy called “effective altruism.”  This means directing charitable giving to what will provide for the greatest good for the greatest number.  And for billionaire tech entrepreneurs who believe in longtermism, this means devoting millions and millions of dollars towards helping the future.

Elon Musk has said that longtermism is “a close match for my philosophy.”  He has developed electric cars through Tesla to stave off climate catastrophe.  He has developed SpaceX to help human beings travel in outer space.  He is promoting the colonization of Mars.  He is concerned about the potential dangers of an Artificial Superintelligence and is experimenting with implanting computer chips into human brains so that humans can tap into that intelligence themselves.

But perhaps Silicon Valley’s most prominent and enthusiastic advocate of longtermism has been Sam Bankman-Fried, the 30-year-old crypto-currency tycoon, whose FTX corporation has collapsed, bringing much of the crypto-currency world with it.  Bankman-Fried has said that the purpose of his life is to become rich so that he can give the money away, “to maximize every cent I can and aggregate net happiness in the world.”  He financed many of the longtermist thinkers and institutions, and, with his interest in “trajectory,” became the second biggest donor to the Democratic Party, just after George Soros.  He established the political action committee Protect Our Future to give money to “lawmakers who play the long game on policymaking in areas like pandemic preparedness and planning.”

Émile P. Torres has written an essay entitled “What the Sam Bankman-Fried debacle can teach us about ‘longtermism’” arguing that the business practices that led to the collapse of FTX, which ruined countless investors, demonstrate the weakness of this ethical system.

In that essay, which gives a vivid account of the movement, Torres shows the devastating consequences of every brand of utilitarianism, in which the end always justifies the means.  He gives examples of longtermist thought experiments consisting of scary scenarios about pre-emptive violence and global surveillance systems that might be necessary to stop “existential threats.”  Similarly, Torres argues, Bankman-Fried’s high-minded idealism allowed him to cheat his investors in the here and now.

Longtermism and Effective Altruism neglect immediate needs, no matter how pressing, in the name of a future that does not exist and can never be known.  Torres cites a philosopher in this movement who argues that rich societies should be helped instead of poor societies, since the rich will contribute more to future civilization, while the poor dying out will mean little in the grand scheme of things.  He quotes Bankman-Fried telling an interviewer that he wasn’t interested in helping the global poor, that his “all-in commitment” is to longtermism.

Meanwhile, policy makers in the U.S. State Department and the European Union are being influenced by longtermism.  Read this article about the impact of lobbyists from the Future of Life Institute.

Human beings may not be righteous, but they want to be self-righteous.  One way to achieve that is to push their morality away from their personal lives into abstract causes and distant goals.  Torres quotes a longtermist philosopher:

If you crunch the numbers [calculating the net total of human happiness for the greatest number], the better thing to do would be to focus on all these future people, not those struggling to survive today. What matters most, Beckstead argues, is that we focus on the trajectory of civilization over “the coming millions, billions, and trillions of years.”

But living in the future is even more foolish than living in the past.  The past has existed, but the future exists only in our imaginations.  The true moral task is to live in integrity in the present in the way we treat our actual, tangible neighbors who come into our lives.
