Ujjwal Shrotryia
Apr 10, 2024, 02:07 PM | Updated 02:14 PM IST
Just a few years back, artificial intelligence (AI) on the battlefield was confined to the realm of dystopian fiction. Now, it's rapidly becoming standard practice.
Israel's use of a cutting-edge automated AI-based software called "Lavender" in its ongoing conflict with Hamas in Gaza has thrown a spotlight on this evolution.
In the war, the Israel Defence Forces (IDF) have been using the tool to generate lists of potential Hamas targets in Gaza.
Another AI-based software — "Where’s Daddy?" — was used to alert the IDF as soon as a target generated by "Lavender" set foot in his home. The Israelis would subsequently bomb the home.
This highlights how far AI technology has come — from the initial talks of how AI can make the job of humans easier to now being used in real wartime conditions.
But how did Israel make the AI work?
"Lavender", or an earlier version of it, was first used during Israel's May 2021 operation in Gaza, where the AI was used to identify Hamas missile-squad commanders.
Its use has since expanded: the automated system can now produce tens of thousands of targets in seconds or minutes, a task that would take humans days or weeks.
At its peak, the system generated a list of 37,000 potential human targets.
"Lavender" was trained on a very large cache of intelligence data collected through mass video surveillance and other means. The AI sifted through all this data to determine the traits of Hamas terrorists.
The machine was trained to recognise patterns such as frequently changing phone numbers or addresses, using a phone previously flagged as belonging to a Hamas operative, or being in a WhatsApp group that has a known Hamas operative as a member.
If somebody closely matched these patterns or traits, he would be flagged by the AI as a potential target.
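To make the idea concrete, the trait-matching described above can be sketched as a simple rule-based scorer. This is a purely hypothetical illustration — the actual internals of "Lavender" are not public, and every trait name, threshold, and function here is an assumption invented for explanation only:

```python
# Hypothetical sketch of trait-based flagging. None of this reflects the real
# "Lavender" system, whose design is not public; it only illustrates the
# general principle of flagging a profile once enough traits match.

from dataclasses import dataclass


@dataclass
class Profile:
    phone_changes_per_year: int        # how often phone numbers are changed
    address_changes_per_year: int      # how often the address changes
    uses_flagged_phone: bool           # phone previously linked to an operative
    in_group_with_operative: bool      # group chat with a known operative


def trait_score(p: Profile) -> int:
    """Count how many suspicious traits a profile exhibits (assumed thresholds)."""
    score = 0
    if p.phone_changes_per_year >= 3:
        score += 1
    if p.address_changes_per_year >= 2:
        score += 1
    if p.uses_flagged_phone:
        score += 1
    if p.in_group_with_operative:
        score += 1
    return score


def is_flagged(p: Profile, threshold: int = 2) -> bool:
    """Flag the profile as a potential match if enough traits are present."""
    return trait_score(p) >= threshold
```

A real system of this kind would presumably learn such weights and thresholds statistically from large intelligence datasets rather than hard-coding them, which is also where the reported error cases come from.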
This data was then fed to another AI system, "Where’s Daddy?", which notified the military as soon as the target entered his home. Shortly after, the Israel Air Force would bomb the target, neutralising it.
The AI proved highly accurate, too: in internal testing, the Israeli military found the system to be 90 per cent accurate.
However, the system was sometimes inaccurate as well. In multiple instances, it flagged innocent civilians, civil defence workers, and relatives of Hamas operatives as legitimate targets.
This has also led to the Israeli military erroneously bombing civilians.
A case in point is the bombing of seven aid workers from World Central Kitchen in central Gaza. These workers were mistakenly targeted while delivering food to northern Gaza, leading to widespread furore and condemnation of Israel's actions.
In spite of all this, the system largely allowed the IDF to be extremely effective and ruthless. The sheer number of airstrikes conducted by the Israel Air Force in the first phase of the war was possible only because of AI.
This shows how 21st-century tech utilising AI/ML algorithms will change the face of warfare. Tasks that would take a human hours or days could be done by AI in seconds.
Every big nation, whether the United States, China, or India, is either contemplating developing such capabilities or has already developed them.
The Chinese, with whom Indian forces have been in a standoff for close to four years, are at the forefront with their Strategic Support Force.
It is imperative that India starts developing and deploying these AI-based technologies lest the Chinese gain an insurmountable advantage.
Staff Writer at Swarajya. Writes on Indian Military and Defence.