Around 2014, I started to notice that something was up in academic philosophy. Geeky researchers from fancy universities, having first made their names in abstract and technical domains such as metaphysics, were now recreating themselves as public-facing ethicists. Knowing some of the personalities as I did, I found this pivot amusing. If the ideal ethicist has delicate social awareness, a rich experience of life, lots of empathy, and well-developed epistemic humility, these people had none of those things.
What they did have was a strong career incentive to produce quirky arguments in favour of the progressive norms emerging at the time, an advanced capacity to handle abstraction and technicality, and huge intellectual confidence. In real life, these would be the last people any sane individual would trust with a moral dilemma. Luckily for the outside world, they tended to have little influence, mainly because nobody could understand what the hell they were talking about.
The same cannot be said for the philosopher geeks in charge of the hugely popular and influential Effective Altruism (EA) movement, which was given fresh vim last week with the publication of a new book by one of its leading lights, 35-year-old William MacAskill, accompanied by a slew of interviews and puff pieces. An Associate Professor at Oxford, MacAskill apparently still lives like a student, giving away at least a tenth of his income, living in a shared house, wild swimming in freezing lakes, and eating vegan microwave meals. (Student life isn’t what it used to be.)
But his influence is huge, as is that of EA. Beloved of robotic tech bros everywhere with spare millions and allegedly twinging consciences, EA and offshoot affiliate organisations such as GiveWell, 80,000 Hours, and Giving What We Can aim to apply strictly rational methods to moral action in order to maximise the positive value of outcomes for everyone. Unlike many metaphysicians-turned-ethicists, MacAskill sells this in a style that is comprehensible, even attractive, to civilians — and especially to those with a lot of dosh to give away. Quite frankly, this worries me a bit.
The background to EA is austerely consequentialist: ultimately, the only thing that counts morally is maximising subjective wellbeing and minimising suffering, at scale. Compared to better potential outcomes, you are as much on the hook for what you fail to do as for what you do, and there is no real excuse for prioritising your own life, loved ones, or personal commitments over those of complete strangers. MacAskill’s new book, What We Owe The Future: A Million-Year View, extends this approach to the generations of humans as yet unborn. As he puts it: “Impartially considered, future people should count for no less, morally, than the present generation.” This project of saving humanity’s future is dubbed “longtermism”, and it is championed by the lavishly-funded Future of Humanity Institute (FHI) at Oxford University, of which MacAskill is an affiliate.
Longtermism is an unashamedly nerdy endeavour, implicitly framed as a superhero quest that skinny, specky, brainy philosophers in Oxford are best-placed to pursue — albeit by logic-chopping, not karate-chopping. The probability, severity, and tractability of threats such as artificial intelligence, nuclear war, the bio-engineering of pathogens, and climate change are bloodlessly assessed by MacAskill. As is traditional for the genre, the book also contains quite a few quirky and surprising moral imperatives. For instance: assuming we can give them happy lives, we have a duty to have more children; and we should also explore the possibility of “space settlement” in order to house them all.
In this matter, MacAskill seems to be in line with his colleagues at FHI: a 2013 profile of the institute describes how the consensus then was that “the Milky Way galaxy could be colonised in less than a million years, assuming we are able to invent fast-flying interstellar probes that can make copies of themselves out of raw materials harvested from alien worlds”. (The potential of space settlement to solve the current housing crisis appears sadly unaddressed.)
In his previous book, MacAskill described EA as being about asking “How can I make the biggest difference I can?” and “using evidence and careful reasoning to try to find an answer. It takes a scientific approach to doing good.” True to form, What We Owe The Future is full of graphs, tables and graphics, and — as with the EA movement generally — the first impression is of something rigorously scientific, free from all those pesky emotional biases that might otherwise hamper rational thought.
Given this emphasis, it is unsurprising that both EA and longtermism seem to have captured the imaginations of the sort of tech-bro entrepreneur fascinated by the possibility of freezing his own head. Bill Gates has endorsed MacAskill as a “data nerd after my own heart”. Elon Musk already funds the FHI, and described MacAskill’s book as “a close match for my philosophy”. Billionaire crypto-trader Sam Bankman-Fried has appointed MacAskill as an advisor to his organisation The FTX Future Fund, dedicated to funding “ambitious projects to improve humanity’s long-term prospects”.
Talking to the New Yorker recently about MacAskill, Bankman-Fried disparaged “neartermist causes” such as “global health and poverty” as “more emotionally driven”, and described himself as never having had a “bednets phase” (referring to EA’s earlier focus on targeting malaria). “I did feel like the longtermist argument was very compelling. I couldn’t refute it. It was clearly the right thing.” Psychopathic as this sounds, it is entirely in keeping with the logic of longtermism: only unreliable and partial emotion could lead you to care more about the lives of real people than about those who are yet to exist.
Yet of course EA is not actually science as we know it. If an impartial scientific worldview were really in play, at best we’d end up with the conclusion that altruism towards immediate conspecifics is an adaptation that has helped the human species survive, alongside things like language and opposable thumbs. At worst, we’d end up with the idea that it is disguised self-interest. Either way, there would be a strong argument for the absence of any quantifiable, verifiable moral facts about goodness and badness. The scientific worldview, strictly applied, leads to a separation of facts and value, and the demotion of value to a form of psychological projection. It certainly doesn’t lead to any moral concern about what to do with respect to humans who don’t exist yet, or to the idea that we could accurately detect, quantify, and then rank something as ephemeral as human well-being.
And EA isn’t emotion-free, either. In these polarised times, it is tempting to contrast EA with that other wildly successful moral movement of our time, woke progressivism. At face value, while woke seems to be all heart — about oozing empathy for certain minority groups at the expense of more hard-headed and complicated concerns — EA seems to be all head. Actually, though, the two have more in common than you’d think.
Throughout his book, MacAskill covertly uses emotive thought experiments to push the reader’s moral intuitions in the way he wishes them to go. At one point, for instance, the reader is asked to imagine leaving a broken bottle on a pathway, so that a child then cuts herself on the shards. We are then supposed to extend these consequent feelings of guilt and revulsion through time to unborn strangers. Future people are talked about as if they were distressingly disabled by their own non-existence: as “disempowered members of society” who “can’t make their views heard directly” and are “utterly disenfranchised”. MacAskill exhorts us, as if from the pulpit: “Imagine what future people would think, looking back at us debating such questions. They would see some of us arguing that future people don’t matter. But they look down at their hands; they look around at their lives. What is different?”
Effectively, both longtermism and woke progressivism take a highly restricted number of emotional impulses many of us ordinarily have, and then vividly conjure up heart-rending scenarios of supposed harm in order to prime our malleable intuitions in the desired direction. Each insists that we then extend these impulses quasi-rigorously, past any possible relevance to our own personal lives. According to longtermists, if you are the sort of person who, naturally enough, tries to minimise risks to your unborn children, cares about future grandchildren, or worries more about unlikely personal disasters than about likely inconveniences, then you should impersonalise these impulses and radically scale them up to humanity as a whole. According to the woke, if you think kindness and inclusion are important, you should seek to pursue these attitudes mechanically, not just within institutions, but also in sports teams, in sexual choices, and even in your application of the categories of the human biological sexes.
In both cases, our original and admirable impulses eventually get buried under so much impersonality that they are obliterated, or at least distorted beyond recognition. And so it is that we find longtermists such as Bankman-Fried worrying more about future non-existents than humans suffering today; or progressives trying to get people removed from their jobs in the name of kindness.
What is perhaps particularly scary about the longtermists, as opposed to the other lot, is not that they are driven by emotion, but that they don’t know they are. And what is perhaps scary about humanity generally is that, in our perennial attraction to movements like EA, longtermism, and woke progressivism — and to the gurus within them — we seem so bent on fooling ourselves into thinking that the ethical world is relatively one-dimensional and hackable. Grand-scale fantasies of saving the world are easy. Personal relationships are hard. Ethics is an art not a science — and, in my experience, people with PhDs are probably not the most reliable guides to it.
Source: UnHerd. Read the original article here: https://unherd.com/