October 26, 2024 marks the 40th anniversary of director James Cameron’s science fiction classic, The Terminator – a film that popularised society’s fear of machines that can’t be reasoned with, and that “absolutely will not stop … until you are dead”, as one character memorably puts it.
The plot concerns a super-intelligent AI system called Skynet, which has taken over the world by initiating nuclear war. Amid the resulting devastation, human survivors stage a successful fightback under the leadership of the charismatic John Connor.
In response, Skynet sends a cyborg assassin (played by Arnold Schwarzenegger) back in time to 1984 – before Connor’s birth – to kill his future mother, Sarah. Such is John Connor’s importance to the war that Skynet banks on erasing him from history to preserve its existence.
Today, public interest in artificial intelligence has arguably never been greater. The companies developing AI typically promise their technologies will perform tasks faster and more accurately than people. They claim AI can spot patterns in data that are not obvious, enhancing human decision-making. There is a widespread perception that AI is poised to transform nearly every sphere of life.
Immediate risks include biases creeping into algorithms that screen job applications, and the threat of generative AI displacing workers in fields such as software programming.
But it is the existential danger that often dominates public discussion – and the six Terminator films have exerted an outsized influence on how these debates are framed. Indeed, the films’ portrayal of the threat posed by AI-controlled machines arguably distracts from the substantial benefits offered by the technology.
The Terminator was not the first film to tackle AI’s potential dangers. There are parallels between Skynet and the HAL 9000 supercomputer in Stanley Kubrick’s 1968 film, 2001: A Space Odyssey.
It also draws from Mary Shelley’s 1818 novel, Frankenstein, and Karel Čapek’s 1921 play, R.U.R. (Rossum’s Universal Robots). Both stories concern inventors losing control over their creations.
On release, it was described as a “B-movie with flair”. In the intervening years, it has been recognised as one of the greatest science fiction films of all time. At the box office, it made more than 12 times its modest budget (£4.9 million at today’s exchange rate).
What was arguably most novel about The Terminator is how it re-imagined longstanding fears of a machine takeover through the cultural prism of 1980s America. Much like the 1983 film WarGames, in which a teenager nearly triggers World War 3 by hacking into a military supercomputer, Skynet highlights cold war fears of nuclear annihilation coupled with anxieties about rapid technological change.
Forty years on, Elon Musk is among the technology leaders who have helped keep a focus on the supposed existential risk AI poses to humanity. The owner of X (formerly Twitter) has repeatedly expressed concerns about the hypothetical development of superintelligent AI.
But such comparisons often irritate the technology’s advocates. As the former UK technology minister Paul Scully said in 2023: “If you’re only talking about the end of humanity because of some rogue, Terminator-style scenario, you’re going to miss out on all of the good that AI [can do].”
That’s not to say there aren’t genuine concerns about military uses of AI – ones that may even seem to parallel the film franchise.
AI-controlled weapons systems
To the relief of many, US officials have said that AI will never be given control over decisions to deploy nuclear weapons. But combining AI with autonomous weapons systems raises serious concerns of its own.
These weapons have existed for decades and don’t necessarily require AI. Once activated, they can select and attack targets without further human intervention. In 2016, US Air Force general Paul Selva coined the term “Terminator conundrum” to describe the ethical and legal challenges posed by these weapons.
Stuart Russell, a leading UK computer scientist, has called for a ban on all lethal, fully autonomous weapons, including those with AI. The main risk, he argues, is not from a sentient Skynet-style system going rogue, but from how competently autonomous weapons might follow orders, killing with superhuman accuracy.
Russell envisages a scenario where tiny quadcopters equipped with AI and explosive charges could be mass-produced. These could then be deployed in swarms as “cheap, selective weapons of mass destruction”.
Countries including the US require human operators to “exercise appropriate levels of human judgment over the use of force” when operating autonomous weapon systems. In some instances, operators can visually verify targets before authorising strikes, and can “wave off” attacks if situations change.
AI is already being used to support military targeting. According to some, it’s even a responsible use of the technology, since it could reduce collateral damage. This idea evokes Schwarzenegger’s role reversal as the benevolent protector in the original film’s sequel, Terminator 2: Judgment Day.
However, AI could also undermine the role human drone operators play in challenging recommendations by machines. Some researchers think that humans have a tendency to trust whatever computers say.
‘Loitering munitions’
Militaries engaged in conflicts are increasingly making use of small, cheap aerial drones that can detect and crash into targets. These “loitering munitions” (so named because they are designed to hover over a battlefield) feature varying degrees of autonomy.
As I’ve argued in research co-authored with security researcher Ingvild Bode, the dynamics of the Ukraine war and other recent conflicts in which these munitions have been widely used raise concerns about the quality of control exerted by human operators.
Machines armed with weapons and designed for combat might call to mind the relentless Terminators, and weaponised aerial drones may, in time, come to resemble the franchise’s airborne “hunter-killers”. But these technologies don’t hate us as Skynet does, nor are they superintelligent.
However, it’s crucially important that human operators continue to exercise agency and meaningful control over machine systems.
Arguably, The Terminator’s greatest legacy has been to distort how we collectively think and speak about AI. This matters now more than ever, because of how central these technologies have become to the competition for global power and influence between the US, China and Russia.
The entire international community, from superpowers such as China and the US to smaller countries, needs to find the political will to cooperate – and to manage the ethical and legal challenges posed by the military applications of AI during this time of geopolitical upheaval. How nations navigate these challenges will determine whether we can avoid the dystopian future so vividly imagined in The Terminator – even if we don’t see time-travelling cyborgs any time soon.