“The government knows AGI is coming”, the New York Times’s Ezra Klein tells us, “and we’re not prepared in part because it’s not clear what it would mean to prepare”. We’ve all heard of these prognostications by now. On one end of the spectrum are “Doomers” who warn that “the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die”. On the other end are accelerationists who trust AI to solve problems beyond the reach of human intellects. Might a “cure for ageing” lie in biological patterns that are invisible to us, but discernible to a sufficiently advanced machine-learning algorithm? Might AI vastly outperform human civil servants in devising public policies and administering government services?
Elon Musk and many other AI-industry boosters certainly want us to think so. DOGE is relying on AI not only to identify supposed “waste” and “fraud” in government spending, but also to replace tens of thousands of federal employees and contractors. The assumption is that whatever services they provide can be performed more efficiently by a chatbot trained on government data. A large language model capable of outperforming human civil servants at any cognitive task would amount to something like AGI, an outcome that OpenAI’s Sam Altman has long insisted is reachable simply by feeding more data and computing power into large language models.
The question for the rest of us is how to make rational choices in the face of such hype. The claims being made about AI’s potential are examples of speculative futurism, an increasingly lucrative and culturally influential form of prognostication that capitalises on what we, in How to Think About Progress, call the “horizon bias”: our cultural propensity to systematically overestimate the proximity of technologically driven outcomes. Although the fulfilment of Altman’s promise of AGI, like that of Musk’s even older promises of self-driving vehicles, has repeatedly been postponed, our technology-obsessed society is primed to buy into such promises, and the least scrupulous among us are prepared to profit from our credulity.
If you are on a long march and you can see your destination off in the distance, it is natural to assume that your journey is almost at an end. But as Frodo’s experience in Mordor shows, the last mile can be far more difficult and roundabout than you expected. The horizon bias becomes particularly potent when we are presented with a seemingly clear sequence of steps from our present reality to some speculative future scenario. By telling ourselves exactly what it will take to get from Point A to Point B, we create a mental model of change that inevitably includes discrepancies with the world as it is.
Consider how easy it is to believe that the cure for cancer is imminent every time there is some new technological breakthrough. “Hey ChatGPT, what’s the cure for cancer?”, mused the Future Today Institute (recently rebranded as the Future Today Strategy Group), a corporate “advisory firm specializing in strategic foresight”, in a tweet last year. While politicians, scientists, and technologists have been promising “the cure” ever since Richard Nixon launched the War on Cancer in the Seventies, it has never materialised. Yet we remain eager for the next story about an imminent cure, because we have absorbed the modern mythology of ourselves as toolmaking masters over nature. For a society that has been to the moon and eradicated numerous other diseases, surely a solution to unregulated cell growth cannot be far off, right?
Yet even in an “AI renaissance” where machines can analyse oncology data in ways that humans never could, we will still be dealing with the messy complexities of human biology. Every human body is unique (as is every tumour). Moreover, if AI becomes capable of entertaining beliefs about its own capacities and future possibilities, it, too, will be prone to horizon bias, stumbling over unexpected gaps between the real world and the simplified models that guide its recommendations.
This is not to suggest that either the utopian or the dystopian vision of AI is impossible. But it is to question the value of the speculative futurism industry that has come to dominate our collective expectations. As corporate consultants, professional futurists make good money catering to businesses’ fears of uncertainty by offering apparently scientific “strategic foresight” on any subject for which there is a paying customer. It is in their interest to present anticipations of the future as something closer to knowledge than to mere opinion.
Look back three years to the Future Today Institute’s 2022 Tech Trends Report, for example, and you will find a bold prediction that “synthetic biology will make ageing a treatable pathology”. Yet since the report prudently avoids offering any timeline for when ageing will become a “treatable pathology”, it is difficult to test the validity, or even the precision, of the claim. Nor is it easy to assess the organisation’s complete track record. When asked about its earlier publications, a spokesperson replies, “Unfortunately we no longer shelve our past reports. Have a nice day.”
Such drab commercialisation was a long time coming. Various “futures studies” theories and methodologies have been formalised, frameworks for assessing “foresight competency” have been introduced, and futurists have increasingly adopted a shared body of jargon. The mid-20th-century futurist Bertrand de Jouvenel, for instance, graced us with the term “futurible”, meaning any “future state of affairs” whose realisation “from the present state of affairs is plausible and imaginable”. Most futurists would say that they “don’t make predictions”, and yet prediction obviously comes with the territory, especially when there is demand from paying customers. If you cannot create the impression that you are better than others at forecasting future probabilities, you have no competitive advantage.
The 20th-century bibliographer I.F. Clarke traces the roots of modern futurism as far back as the 13th century, when the mediaeval monk Roger Bacon foresaw that the deepening of scientific knowledge could eventually lead to self-propelled planes, trains, and automobiles (as indeed it did, though not nearly as soon as he expected). Such thinking was novel for its time, and it would remain in the cloisters for centuries more, until the Enlightenment produced books like Sebastien Mercier’s 1771 utopian novel, L’An 2440 (The Year 2440).
Channelling his era’s faith in technology-driven progress, Mercier described a future of peace and social harmony, governed by philosopher-kings. In his envisioned 25th century, slavery has been abolished, the criminal justice system reformed, and medicine subjected to science-based rationality. But he also anticipated the territory of North America being returned to its original inhabitants, and he thought that Portugal might become a part of the United Kingdom. In his future, taxes, standing armies, and even coffee have all been abolished. Had he been a corporate consultant, it is unclear whether his clients would have been better prepared for various future scenarios than their competitors.
With numerous editions and translations appearing in the decades after its publication, Mercier’s work of speculative prognostication was a wild commercial success. From then on, each generation brought a new host of what Clarke calls “professional horizon-watchers”. Technological innovation had made predictions commonplace, and though earlier practitioners’ techniques were nowhere near as sophisticated as those used by futurists today, their basic method was the same: by extrapolating from the latest breakthroughs, they envisioned new realms of plausibility.
In his 1902 lecture, “The Discovery of the Future”, H.G. Wells declared that “in absolute fact the future is just as fixed and determinate, just as settled and inevitable, just as possible a matter of knowledge as the past”. With the arrival of the kind of total wars that Wells had, to his credit, anticipated, projects to foresee the future were taken up in earnest. The upheavals of the first half of the 20th century created an urgent demand for technocratic planning, giving rise to “operations research” and, with it, the modern think tank (epitomised by the RAND Corporation).
In 1968, the Palo Alto-based Institute for the Future emerged as the first self-identified futurist institution of its kind. Then, in his 1970 bestseller Future Shock, Alvin Toffler offered a “broad new theory of adaptation” for an age of accelerating technological, social, political, and psychological change. Inspired by the better-known concept of culture shock (the disorientation one feels upon suddenly arriving in an alien social environment), Toffler coined his titular term to describe the psychological distress that comes with rapid, monumental change. One of the best ways to cope, he believed, was to adopt a more future-oriented perspective, so that we are not constantly caught off guard by each new society-altering trend or development.
In the half-century since Future Shock appeared, the widespread sense of constant, rapid change has only deepened. But rather than being shocked by it, we now regard acceleration as a central part of modern life. Everyone assumes that each passing year will bring faster, cheaper, sleeker, and more powerful technologies. Not a week goes by without headlines about new breakthroughs in AI, biomedical research, nuclear fusion, and other promising vistas of progress on the horizon.
This can cause real problems in practice. In Imaginable: How to See the Future Coming and Feel Ready for Anything — Even Things That Seem Impossible Today, Jane McGonigal of the Institute for the Future argues that everyone should train their minds to think more like a futurist. “The purpose of looking ten years ahead isn’t to see that everything will happen on that timeline”, she writes, “but there is ample evidence that almost anything could happen on that timeline.”
Insofar as it leads us to consider underappreciated or underestimated risks that may lie ahead, this is sound advice. And yet the same methods also encourage us to overestimate the likelihood of positive breakthroughs and possibilities. As McGonigal herself concedes, an ample body of research in psychology finds that “imagining a possible event in vivid, realistic detail convinces us that the event is more likely to actually happen”. The futurist methodology rests on a foundation of radical open-mindedness, even wilful gullibility.
According to “Dator’s Law” (coined by the futurist Jim Dator), a fundamental principle of today’s futurist methodology, “Any useful statement about the future should at first seem ridiculous.” McGonigal thus asks us to consider the statement: “The sun rises in the east and sets in the west every day.” This could become technically untrue if humans travelled to Mars, where sunrises and sunsets wouldn’t happen “every day — at least, not by our standard definition of a ‘day’ on Earth”. As “evidence” of this possibility, she cites the fact that “there are plenty of space entrepreneurs trying to develop the technology to help humans settle on Mars as soon as possible”.
Yet, surely, claims made by entrepreneurs promising to send humans to Mars aren’t really evidence at all. Musk has been promising his missions to the red planet for years, only to keep moving back the target date (from 2022 to 2024 to 2026 to 2028). He and others making similar commitments have a financial interest in creating the impression that exceedingly difficult feats are eminently plausible, and thus investable. It is little wonder that the futurist discipline and the tech industry are so closely intertwined. Both are in the business of selling a specific vision of what lies ahead, and of capitalising on the FOMO that afflicts everyone who didn’t buy Nvidia stock in 2022. Rarely do we pause to consider which elements of the picture are intended to be self-fulfilling prophecies, or which alternative possibilities are being left out entirely.
By now, the con should be obvious. If Altman truly believes that AGI will render market capitalism as we know it obsolete, as he recently mused, why does he care about the competitive challenge from DeepSeek? Why is OpenAI rushing out new reasoning models that expert observers suggest have not been “adequately tested”?
While a well-meaning educator like McGonigal wants us all to be “ready to believe that almost anything can be different in the future”, many others in Silicon Valley and beyond, from dubious entrepreneurs to their fellow travellers in the corporate consultancy business, stand to gain from a public that is ready to believe anything. Speculative futurism, and our cultural obsession with its offerings, is a boon for those seeking more funding or support for glitzy projects like ending ageing, colonising Mars, or creating superintelligence. But every dollar invested in these questionably feasible pursuits is a dollar not going to education, public health, and other more immediate “boring” needs.
Source: UnHerd. Read the original article here: https://unherd.com/