In the summer of 2020, the Afghan military received an unusual report. Transmitted by their US allies, it warned of a possible Taliban attack in Jalalabad, a city on the country’s fertile eastern plain. It suggested the assault would come between 1 and 12 July, and it identified particular locations at risk of attack. More than that, the report predicted the Taliban onslaught would come at the cost of 41 lives, with a “confidence interval” of 95%.
During its bitter fight against the militants, the Afghan government must have received thousands of such reports. What made this one so special was its provenance: not the drones and informants of its friends in the world’s greatest superpower, but rather Raven Sentry, an AI-enabled warning model designed to predict insurgent activity.
Developed in 2019, while US negotiations with the Taliban were still underway, Raven Sentry was built to maintain situational awareness in Afghanistan after the final withdrawal of foreign troops from the country. “We were looking for ways to become more efficient and to maintain situational awareness”, says Colonel Thomas Spahr, a professor at the US Army War College, adding that Raven Sentry would “enable” the Afghans to continue the fight after Nato had flown home.
The details are classified, but Raven Sentry apparently proved its worth in Jalalabad, and it helped forewarn of several other attacks too. In the end, though, the programme was terminated abruptly, at around the same time as democratic rule in Afghanistan itself, amid the chaos, fear and bloodshed of Kabul International Airport. Yet what Raven Sentry achieved in July 2020 could still transform warfare — if, that is, the technical and ethical hurdles don’t prove too high.
Militaries have experimented with AI-assisted intelligence for a while. As far back as 2017, the US launched something called Project Maven to help analysts process large amounts of data. Yet while Maven relied on sophisticated object-recognition software, Spahr stresses that human officers remained “central” to the process.
Raven Sentry was different. Gathering together a range of data — social media messages, news stories, significant anniversaries and even weather reports — it could then predict places at risk of insurgent attack. “Neutral, friendly, and enemy activity anomalies triggered a warning,” Spahr explains. “For example, reports of political or Afghan military gatherings that might be terrorist targets would focus the system’s attention.”
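To make the idea concrete, here is a deliberately simplified sketch in Python of how an anomaly-triggered warning of this kind might work. It is purely illustrative: the signal names, weights and threshold below are invented, and bear no relation to the classified model itself.

```python
# Hypothetical illustration only -- the real Raven Sentry model is classified.
# A toy "anomaly score" that fuses a few open-source signals for one district
# and raises a warning when the combined score crosses a threshold.
from dataclasses import dataclass

@dataclass
class DistrictSignals:
    insurgent_chatter: float    # 0-1, spike in hostile social-media activity
    friendly_gatherings: float  # 0-1, reported political or military gatherings
    days_to_anniversary: int    # days until a significant local anniversary
    clear_weather: bool         # conditions favourable for movement and attack

def anomaly_score(s: DistrictSignals) -> float:
    """Weighted blend of the signals; the weights here are invented for illustration."""
    score = 0.4 * s.insurgent_chatter
    score += 0.3 * s.friendly_gatherings                     # tempting targets focus attention
    score += 0.2 * max(0.0, 1 - s.days_to_anniversary / 30)  # nearer anniversary, higher risk
    score += 0.1 * (1.0 if s.clear_weather else 0.0)
    return score

WARNING_THRESHOLD = 0.6  # assumed cut-off; a real system would calibrate it on historical attacks

if __name__ == "__main__":
    district = DistrictSignals(insurgent_chatter=0.8, friendly_gatherings=0.7,
                               days_to_anniversary=5, clear_weather=True)
    score = anomaly_score(district)
    if score >= WARNING_THRESHOLD:
        print(f"WARNING: elevated attack risk (score {score:.2f})")
```

A real warning model would learn such weights from historical attack data rather than have them hand-set, but the basic logic is the same: many weak signals fused into one calibrated alert.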
Despite America’s eventual failure in Central Asia, meanwhile, Afghanistan proved an ideal testing ground. That’s essentially down to what Rafael Moiseev calls Afghanistan’s data-rich environment. “AI is only ever as good as the data that trains it,” explains the AI expert. As Moiseev continues, Nato’s 20-year odyssey in Afghanistan meant there was plenty to go on, from historical attack data to anecdotal evidence from Spahr and his colleagues. The graveyard of empires even offered lessons from the Cold War, with Raven scooping up content from the Soviet occupation in the Eighties.
Just as important, Raven sharpened its predictions over time. Like other algorithms, it could first scour unclassified databases, before homing in on what mattered — useful at a time when so-called “OSINT” (open-source intelligence) data is a global market worth $8 billion.
All told, this comprehensive approach proved successful. As Spahr says, Raven spotted some 41 insurgent attacks across five Afghan provinces before they actually happened, usually offering warnings of about 48 hours. And by October 2020, less than a year before Raven would abruptly be wrapped up, analysts had determined it was firing out predictions with 70% accuracy, even if humans were crucial to its success too.
Raven Sentry is only one example of how AI has transformed war this century. Aside from being a predictive analytical tool, after all, AI technologies also boast what Polly Scully calls effective and ethical applications elsewhere. “In the broadest sense,” explains Scully, who heads Palantir’s defence and national security work in the UK, “it has the power to dramatically lower the technical proficiency required for personnel to engage with large amounts of data in sophisticated ways, in order to make better decisions.”
To explain what she means in practice, Scully refers to the use of AI in improving battlefield awareness, something Palantir’s been working on. She notes that algorithms can analyse how appropriate particular aircraft — and their bombs — are for striking targets. “AI,” she adds, “has the potential to transform logistics too.” Among other things, it can keep artillerymen informed about how quickly gun barrels are wearing out. From there, Scully adds, the computers can tell manufacturers to build spare parts.
Examined from the other end of the barrel, Moiseev says that AI can cut casualties by enabling the deployment of autonomous machines on the frontline, with actual troops safely orchestrating the battle from the rear. This isn’t some Terminator fantasy either. Earlier this year, the Ukrainian military deployed about 30 “robot dogs” against its Russian foe. Though not totally autonomous, the so-called Brit Alliance Dog (BAD2) can explore trenches, ruined buildings and other areas drones struggle to access.
The Ukrainians have also experimented with autonomous machine guns, as well as drones that use AI to identify and attack enemy targets. In the Middle East, meanwhile, and in an echo of Raven, it was recently revealed that Israel used an AI programme called Lavender to designate close to 37,000 Palestinians as Hamas targets.
Yet as the catastrophic civilian casualties wrought by systems like Lavender imply, battlefield efficiency and wartime morality are two very different things. Geoffrey Hinton, the so-called “Godfather of AI” and a winner of the Nobel Prize in Physics, has warned about the “possible bad consequences” of the technology — noting especially that robotic killers may one day move beyond our control.
It hardly helps, of course, that some of AI’s most ardent enthusiasts are arguably less than perfect. Though Scully unsurprisingly emphasises the ethics of Palantir’s platforms, her company has faced scrutiny over how it collects and uses data, while also apparently inviting young children to an AI warfare conference. That’s before you consider its murky relationship with the US government, with Palantir also causing controversy over its work for the NHS here in Britain.
Yet these challenges aside, Moiseev is ultimately confident that few people want to see society torn apart in a future ruled by killer robots. “Rather,” he suggests, “we should be developing AI to prevent and resolve disagreements.” In a broader sense, meanwhile, predictive AI can be used to not only foil attacks, but also respond to conflict and help civilians. Whatever the question marks around Palantir, it is currently using AI to help de-mine over 150,000 square kilometres of Ukrainian fields.
And what about that future hinted at in Jalalabad? Could AI predict some future conflict? Moiseev thinks so. As he says, though the invasion of Ukraine came as a shock to most, a team of scientists and engineers based in Silicon Valley had already predicted Russia’s move almost to the day — even months before the war actually began. “There is often a wealth of signs that a conflict is on the horizon,” Moiseev adds, “whether unusual movement at missile sites or a sudden stockpiling of critical materials. The problem is that humans aren’t very good at spotting subtle clues. But for AI, that is one of its greatest strengths.”
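As a rough illustration of what “spotting subtle clues” means in statistical terms, the Python sketch below flags any indicator whose latest reading drifts far from its own historical baseline. The indicator names and numbers are invented; a genuine warning system would be vastly richer, but the underlying intuition is similar.

```python
# Illustrative sketch: treating "subtle clues" as statistical anomalies.
# Each indicator (e.g. activity at missile sites, imports of critical materials)
# is compared with its own historical baseline; a large z-score marks it as unusual.
from statistics import mean, stdev

def unusual_indicators(history: dict[str, list[float]],
                       latest: dict[str, float],
                       z_threshold: float = 3.0) -> list[str]:
    """Return the indicators whose latest value sits far outside their baseline."""
    flagged = []
    for name, series in history.items():
        mu, sigma = mean(series), stdev(series)
        if sigma == 0:
            continue  # no variation in the baseline, nothing to compare against
        if abs((latest[name] - mu) / sigma) >= z_threshold:
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    history = {
        "missile_site_activity": [10, 12, 11, 9, 10, 13, 11],           # invented weekly counts
        "critical_material_imports": [100, 98, 103, 101, 99, 102, 100],
    }
    latest = {"missile_site_activity": 35, "critical_material_imports": 160}
    print(unusual_indicators(history, latest))  # both series spike well past their baselines
```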
No wonder US decision makers are hoping to use AI to analyse data to spot any future Chinese actions around Taiwan. Certainly, Admiral Samuel Paparo, commander of the US Pacific Fleet, has implied as much. As he recently told a defence innovation conference, the Pentagon is looking for ways to “find those indications” of an imminent assault by the People’s Liberation Army. Given, moreover, that any eruption in the Pacific could occur without warning, experts have argued that AI could equally improve the general readiness of US forces year round.
Then there’s the question of whether the computers could themselves be outwitted by an enemy eager to retain a modicum of surprise. Tellingly, that might be done with even smarter machines: quantum computers, potentially millions of times more powerful than today’s supercomputers, could analyse enemy movements in seconds and smash through their encryption.
It would be reckless, though, to let computers take complete control. As Spahr says, war is ultimately fought by men and women, meaning we can never allow an “automation bias” to cloud our strategic judgement. Given how his country’s adventure in Kabul ultimately ended, that’s surely sound advice.
Source: UnHerd. Read the original article here: https://unherd.com/