Anand Parthasarathy
Dec 05, 2022, 01:36 PM | Updated 01:32 PM IST
A proposal by the Police Department of San Francisco, California, in the US, to deploy robots capable of using deadly force, including killing, in extraordinary circumstances was approved in an 8-3 vote by the city’s Board of Supervisors earlier this week.
This has set off a firestorm of views and counter-views, in the US and outside, about how far technology like Artificial Intelligence (AI) can be allowed to make life-threatening decisions.
The final decision on whether San Francisco will sanction the arming of robots with the ability to kill will be taken on 6 December. But the Mayor’s already-expressed approval suggests that, barring new pressures, the measure will go through, setting a new precedent for the use of what are being called ‘killer robots’ in aid of civilian law enforcement.
“This has serious potential for misuse and abuse of this military-grade technology and a zero showing of necessity,” a city supervisor who voted against the proposal is quoted as saying in a CNN report.
But such use (or misuse, depending on one’s viewpoint) has happened once before in the US: in July 2016, police in Dallas, Texas, armed a bomb-disposal robot with an explosive to kill a suspect after five police officers had been killed… the first intentional use of a lethally armed robot in a police situation in that country.
In an editorial a day after the city supervisors took the first vote, The San Francisco Chronicle said: “There was no expert testimony. No discussion of accountability if an armed robot inadvertently kills a civilian. No assurances the robots wouldn’t be hacked.”
Fact Mirroring Fiction?
It added: “The very idea of allowing robots to use deadly force brings to mind the dystopian stories told in Robocop, Terminator and Battlestar Galactica.”
The paper was referring to the cult Hollywood science fiction movie of 1987 – remade in 2014 – in which a half-human, half-machine “cyborg” policeman, or ‘Robocop’, launches a brutal campaign to clean up Detroit at the behest of a corporation.
The third Arnold Schwarzenegger movie in the Terminator series (2003) also featured a giant, computer-driven mechanical killer.
Has cinematic fiction anticipated fact? Some scientists think so.
‘Schwarzenegger’s Law’
Prof Toby Walsh of the University of New South Wales in Australia, an AI expert, anticipated this week’s developments as early as 2015.
In an article entitled “The rise of the killer robots: why we need to stop them”, he wrote: “You might be thinking of “Terminator” – a robot which, if you believe the movie, will be available in 2029. But the reality is that killer robots will be much simpler to begin with and are, at best, only a few years away.”
He suggested: “Moore’s Law predicts that computer chips double in size every two years. We’re likely to see similar exponential growth with killer robots. I vote to call this Schwarzenegger’s Law.”
Prof Walsh made his dire warning to coincide with campaigns at the United Nations to stop killer robots and similar Lethal Autonomous Weapon Systems, or LAWS.
But every attempt at international forums to establish legally binding rules on machine-operated weapons has failed so far, including at the last such conference exactly a year ago, in December 2021.
While some 68 nations called for some sort of global embargo, the US, Russia and, interestingly, India have refused to be party to a new treaty banning LAWS.
Indeed, the Indian stand on robotic or autonomous weapons in international forums has been somewhat ambiguous, possibly because the country does not want to rule out the use of the latest class of such platforms: military drones.
National interests have dictated investment in the indigenous development, or outright acquisition, of “predator”-class drones that can carry airborne weapons, as well as in anti-drone countermeasures.
Made-In-India Bomb Disposal Robot
India is also among the small number of nations that have developed their own technology for robotic bomb disposal.
As far back as 10 years ago, the Defence Research and Development Organisation (DRDO) developed Daksh, a battery-operated, remote-controlled bomb disposal robot, which is mass-manufactured by the public sector Bharat Electronics as well as two private agencies, Dyna Log and Theta Controls.
DRDO has also developed a class of military drones, and indigenous sources, including some in the private sector, may eventually obviate the need to import drones capable of delivering lethal payloads.
However, these military developments are a far cry from the sort of civilian law enforcement scenarios in which the US now seeks to deploy lethal robots.
India’s Position: Humans Before AI
Indeed, the global watchdog Human Rights Watch, in a 2020 study detailing the positions of nations on the use of LAWS, examines India’s stated public stance and quotes Defence Minister Rajnath Singh’s view that “the final attack decisions should be made by humans in the military, not by artificial intelligence”.
That is likely to be India’s considered position on the contentious issue engaging technologists and law enforcement agencies this week: how far machines can be allowed to make decisions for humans.
Delivering lethal force against civilians is "the exact opposite of what we should be using robots for," Paul Scharre, author of Army of None: Autonomous Weapons and the Future of War, told the New York Times earlier this week.
Rogue robots taking their own decisions, or malfunctioning while sent on a potentially lethal task, may therefore not be a scenario that need agonise us in India right now.
Not as long as our planners, civil and military, continue to put human control at the centre of any new technology, no matter how attractive or effective the fully autonomous option seems.
Anand Parthasarathy is managing director at Online India Tech Pvt Ltd and a veteran IT journalist who has written about the Indian technology landscape for more than 15 years for The Hindu.