In Ray Bradbury’s short story, A Sound of Thunder, the anti-hero, Eckels, is a customer of Time Safari, Inc. (“1. You name the animal. 2. We take you there. 3. You shoot it”). The firm promises to take him back 60 million years to the Jurassic. There, Eckels will get (got?) to shoot a T. Rex. But first, the rep stresses that the animal being shot was about to die anyway. Shooting it won’t change anything. But Eckels must not change anything else, however tiny: say, by treading on a blade of grass or a Jurassic ant. The consequences could be incalculable.
“Crushing certain plants could add up infinitesimally. A little error here would multiply in 60 million years, all out of proportion… A dead mouse here makes an insect imbalance there, a population disproportion later, a bad harvest further on, a depression, mass starvation, and finally, a change in social temperament in far-flung countries. Something much more subtle, like that. Perhaps only a soft breath, a whisper, a hair, pollen on the air, such a slight, slight change that unless you looked close you wouldn’t see it. Who knows?”
Eckels accidentally crushes a butterfly.
Returning to the now, there are signs of a devastating ripple effect. The company logo now reads: “SEFARIS TU ANY YEER EN THE PAST. YU NAIM THE ANIMALL. WEE TAEK YU THAIR. YU SHOOT ITT.” A despotic strongman has just been elected President. And something else has happened, too — something much worse. The story ends: “There was a sound of thunder.”
We all stand to the distant future as Jurassic Eckels stood to now. Anything you do might have momentous consequences in 500 years. Not stopping at a traffic light might mean not hitting a fly; that fly has several million descendants; one of these is mutated, bringing a pandemic to Europe… If you look at it from far enough away, everything you do is fraught with the gravest consequences. But what they will be — well, who can say? And from that perspective, our attitude to the distant future can look deeply selfish. The truth is that most of us don’t care much about it. Shouldn’t we?
That we should care, much more than we actually do, is the main message of long-termism. According to long-termism, “positively influencing the long-term future is a key moral priority of our time”. Probably the most popular and influential long-termist is Will MacAskill. MacAskill is Professor of Philosophy at Oxford University and director of the Forethought Foundation for Global Priorities Research.
MacAskill is best known for his work on effective altruism. This movement, which until recently had been growing ever more popular among the most influential tech entrepreneurs, is based on the idea that there are more and less effective ways to “do good”; so if you are going to do good, you should do it as effectively as possible.
“Until recently”: as most readers will know, things have taken something of a turn in recent days. The cryptocurrency exchange FTX, valued earlier this year at about $30 billion, has suddenly collapsed. Its founder and CEO, Sam Bankman-Fried, is something of an avatar of the movement. In 2012, when Bankman-Fried was still a student at MIT, MacAskill persuaded him that the thing to do, if he really wanted to do good, was first to get rich himself, and then to improve the world. Bankman-Fried certainly got rich, for a while. Whether he improved the world, or anything much, is another question.
I have no intention of rushing into judgment on these events, about which I know very little. I do believe that there are sound philosophical objections to long-termism, and indeed to any form of effective altruism that entails it. I spell out some of these here. One such objection may be that effective altruism, or the ideas behind it, threaten common-sense moral values like integrity and honesty. But it is too soon to say — certainly too soon for me to say — whether it is that particular tension that is now being played out.
In any case, effective altruism plainly did, at least for a while, persuade a lot of important, or at any rate wealthy, people. Of course, that doesn’t settle whether it is true. Whether it is true depends on what you mean by “effective”.
MacAskill et al. interpret “effective” broadly in line with utilitarianism, which prescribes “the greatest good for the greatest number”. This in turn can mean quite a lot of things; depending on what it means, effective altruism might turn out to be demanding in surprising ways. For instance, it may enjoin you to spend your next charitable dollar on malaria charities rather than cancer charities: the best cancer interventions prevent one death from cancer for each $19,000 spent, whereas the best malaria interventions prevent one death from malaria for each $3,000 spent.
And it may also enjoin you to devote much more attention, and much more money, to the distant future. After all, barring catastrophe, the distant future probably contains many more people than the present or the near future. So “the greatest good for the greatest number” means prioritising the future, possibly the very distant future, doesn’t it?
Indeed it does, according to long-termism. And MacAskill’s recent book, What We Owe the Future, is an extended defence and application of long-termism. The earlier chapters of the book emphasise just how big the future is, and just how much influence we can have over it, and why all this matters. In later chapters, MacAskill applies long-termism to the analysis of specific threats. These include social collapse and “value lock-in”: the idea that we might adopt, irrevocably, a set of moral values that lead humanity into deep and eternal darkness. (A crucial defence against that, in my view, is unbreakable protection for free speech.) They also include a Terminator-style takeover of humanity by Artificial Intelligence. (A colleague calls it “Attack of the Killer Toasters”.)
Finally, MacAskill tells the reader “what to do”; and here he recapitulates the basic ideas of effective altruism. It turns out that the effective altruist has more career options than you’d expect. The most good that you can do means the most good that you, in particular, can do; and as Aristotle more or less says, this means matching the needs of the world with the talents and values of the individual. MacAskill’s utopia, or at least the path to it, has a place for software engineers and tennis stars as well as charity workers.
Before returning to long-termism, I should disclose that I taught MacAskill myself, back when he was an undergraduate and I was a junior lecturer. This was many years ago. And I didn’t teach him anything glamorous, like the ethics of the future. For us it was nerd central: philosophical logic, which covers the semantics of conditionals, the nature of reference, and related thrills and spills. But even from that crabby perspective he was clearly brilliant. His tremendous success in the following years didn’t surprise me, though it did give me great pleasure. I did think some of his ideas wrong or questionable. But my attitude to MacAskill was (I imagine) more like Aristotle’s feelings about his star pupil than Obi-Wan Kenobi’s feelings about his.
Anyway, this is what MacAskill says:
“The idea that future people count is common sense. Future people, after all, are people. They will exist. They will have hopes and joys and pains and regrets, just like the rest of us… Should I care whether it’s a week, or a decade or a century from now? No. Harm is harm, whenever it occurs.”
It is worth thinking more about the underlying philosophical attitude. Most of us care more about people who are alive now, and perhaps also their children, than about their descendants four millennia from now. We don’t care about them at all. If MacAskill is right, then that’s a serious mistake. Is it, though? It is a vexed question. Philosophers and economists writing on climate change have discussed it extensively. I was surprised to see relatively little discussion of that literature in What We Owe the Future. Here, though, it’s worth emphasising two points.
The first concerns what is realistic. Throughout history people have, on the whole, cared more about those closer in space and time — their family, their neighbours, their generation. Imagine replacing these natural human concerns with a neutral, abstract care for “humanity in general”. In that world, we would care as much about the unseen, unknown children of the 25th millennium as about our own. That may be admirable to some people — at any rate, to some philosophers. But it is hardly realistic.
“She was a… diminutive, plump woman, of from forty to fifty, with handsome eyes, though they had a curious habit of seeming to look a long way off. As if…they could see nothing nearer than Africa!” Mrs Jellyby — that eminent Victorian philanthropist who left her own home and family in squalor — was always meant, and has always been taken, as a figure of fun. The same goes for the modern equivalent of Dickensian space-Jellybism. I mean time-Jellybism, which reckons the distant future as important as the present. I don’t expect that to change soon.
None of this is surprising. There is no proof, no argument, that can show that anyone is wrong to care about one thing more than another. High-minded philosophers from Plato to Kant have imagined, and blood-soaked tyrants from Robespierre to Pol Pot have enforced, a scientific ethics. But ethics is not a science, although MacAskill’s approach can make it look like one. MacAskill “calculates” value using the “SPC framework”, which assigns numerical values to the significance, persistence and contingency of an event — say, an asteroid impact or a nuclear war — and then plugs these into a formula. The formula tells you how much we should now care about — in practice, how many present dollars we should be spending on — that future contingency.
But really neither maths, nor logic, nor empirical evidence, nor all these things put together, can ever tell you how much to care about anything. There is, as Hume said, a gap between “is” and “ought”. Science, reason, logic, observation, maths — all these tell us how things are, but never how they ought to be. Instead our moral judgments arise, as Hume also said, largely from our sympathetic feelings towards others. Putting it crudely: the root of moral evaluation is that seeing a fellow human in pain causes pain to you, and the more vividly you observe it, the stronger the feeling. Joe Gargery is a finer moralist than Mrs Jellyby could ever be.
And is it any wonder that our strongest feelings — and so our deepest moral concerns — are for those that we see most often; those most like us; those who inhabit our time, and not the unknown future victims of unknown future catastrophes? And does anyone seriously expect this to change? As Swift said, you could never reason a man out of something he was never reasoned into. As for the time-Jellybys, who would sacrifice God knows how many people to material poverty today for a 1% shot at an interplanetary, interstellar, intergalactic future in 25,000 AD — though MacAskill is usually more careful — well, if all they ever get is mockery, they will have got off lightly.
The second point is that it’s hardly obvious, even from a long-term perspective, that we should care so much about our descendants in 25,000 AD — not, at any rate, at the expense of our contemporaries. To see this, we can apply a thought-experiment due to the Australian philosopher Frank Jackson. Suppose you are a senior policeman controlling a large demonstration. You have a hundred officers. You want to distribute them through the crowd to maximise your chances of spotting and extinguishing trouble. There are two ways to do it — you might call them the “Scatter” plan and the “Sector” plan. The Scatter plan is as follows: each officer keeps an eye on the whole crowd. If he spots a disturbance, he runs off to that part of the crowd to deal with it.
The Sector plan is as follows: each officer surveys and controls one sector of the crowd. If she spots a disturbance in her sector, she deals with it. But she focuses only on her own sector: she doesn’t look for trouble in any other sector, and she won’t deal with trouble outside her own if it arises. Which works better? I have described the situation so abstractly that you couldn’t say; it depends on the details. But the Sector plan might work better. If each officer is short-sighted and slow, each additional unit of attention might be better focused on problems that she can effectively address (those in her sector) rather than on ones that she can’t.
The analogy is obvious. It might be better for everyone in the crowd if each policeman were to concentrate on what is near in space. And it might be better, for everyone in each future generation, if each generation were to concentrate on what is near to it in time. This means you, your community, your children and grandchildren. Each generation would then enjoy the focused attention and concern of its own and the two preceding generations. On the other scheme, the long-termist one, each generation gets only marginal attention from each of the preceding generations, every one of which must also think about thousands of other generations, and most of which are inevitably ignorant of the problems facing this one.
Do we, today, think we would be much better off if the monks and barons of 1215 had spent serious time addressing problems they then expected us to face in 2022 — say, how to control serfs on the Moon, or how bodily resurrection could be possible for a cannibal? (Thomas Aquinas gave serious thought to that one.) No: none of that would have done any good. The people of that time were like the short-sighted policemen — and all the better for it. Magna Carta was sealed in 1215, and it remains a beacon to the world. But it came about not through the barons’ high-minded concern for the future, but through their ruthless focus on the present.

About one third of the way through What We Owe the Future, there is a passage that clearly illustrates its utopianism. MacAskill writes about the “long reflection”: “a stable state of the world in which we are safe from calamity and we can reflect on and debate the nature of the good life, working out what the most flourishing society would be”.
He continues:
“It’s worth spending five minutes to decide where to spend two hours at dinner; it’s worth spending months to choose a profession for the rest of one’s life. But civilization might last millions, billions, or even trillions of years. It would therefore be worth spending many centuries to ensure that we’ve really figured things out before we take irreversible actions like locking in values or spreading across the stars.”
If a 400-year ethics seminar appeals to anyone, then I suppose it would be people like me, who make a living out of that kind of thing. But barring pandemic catatonia, even six or seven decades earnestly discussing Mill, Parfit and Sidgwick will leave most of us pining for anything that could raise the temperature or lower the tone — another Nero, say, or the Chuckle Brothers.
More seriously, there may be nothing to “figure out”. Liberty, justice, equality, social cohesion, material well-being: we all care about these things, but we all weight them differently. There is no right answer — there is nothing to “figure out” — about how best to weight them. As if the upshot of all this discussion would be a final, ideal system, which the statesmen-philosophers of tomorrow could impose on their unwilling subjects with a clear conscience. Not that we shouldn’t discuss these things. On the contrary; but let us not pretend that any ethics — even the “mathematical” ethics of Derek Parfit or Will MacAskill — could ever justify much coercion of anyone, ever.
Source: UnHerd. Read the original article here: https://unherd.com/