News Brief
Sanjeev Sanyal, a member of the Economic Advisory Council to the Prime Minister (EAC-PM), has called for a distinct regulatory framework to address AI's complex adaptive characteristics.
According to Sanyal, this framework should incorporate "defined limits, continuous oversight, directed progression, and joint governance".
"The objective isn't to regulate every aspect of AI's trajectory over the long term. Instead, the focus should be on establishing firm boundaries and partitions, monitoring systems, and adaptive feedback processes that can adjust the course of AI as it evolves in ways that were not originally foreseen," Sanyal said in a column co-written with EAC-PM consultant Chirag Dudani.
Sanyal noted that as AI technology rapidly advances, it brings a host of societal benefits but also significant risks, including systemic breakdowns, loss of privacy, and the potential for AI to act in ways not aligned with human welfare, such as in the case of "runaway AI".
Such an AI's integration into critical infrastructure could lead to catastrophic disruptions, and its use in surveillance could result in unprecedented invasions of privacy.
"Integrated into critical infrastructure, compromised AI could also wreak havoc by disrupting utilities such as power grids and telecommunications. Malevolent systems could hack interconnected grids, causing cascading failures. Further risks arise from AI-driven cyberattacks compromising national security systems, or autonomous weapons unleashed in warfare," the EAC-PM member said.
"Unchecked surveillance presents threats of AI continuously monitoring individuals to predict and manipulate behaviours, even generating false simulated realities," Sanyal noted.
According to Sanyal, two primary strategies for AI governance exist at present.
"Following an executive order on October 30, the United States has mostly adopted a hands-off policy, placing its trust in the AI industry's self-regulatory practices. On the other hand, the European Union's Artificial Intelligence Act (2023) adopts a more directive method, categorizing AI systems based on perceived risks and applying corresponding regulatory measures. However, this method is only effective for systems that are static and linear with foreseeable risks," he noted.
"AI, on the other hand, embodies the characteristics of complex adaptive systems (CAS), in which the interactions and evolution of its components are dynamic and nonlinear," he said.
He highlighted that AI systems with dynamic interactions between components, emergent behaviour and non-deterministic evolutionary paths exemplify CAS.
"Their multifaceted feedback loops, susceptibility to nonlinear phase transitions and sensitivity to initial conditions defy forecasting. This uncertainty underscores the need for an alternative regulatory approach," Sanyal said.
Sanyal and Dudani have proposed a third approach based on CAS thinking with five principles:
Establishing clear guardrails and partitions to prevent AI systems from engaging in high-risk behaviours and to isolate systems so that failures are contained.
Mandating manual overrides and chokepoints, especially in critical infrastructure, to maintain human control.
Requiring transparency and explainability, with open licensing of core algorithms and detailed "AI factsheets" to allow for informed use and accountability.
Defining clear lines of accountability, with predefined liability protocols to ensure that there is always a responsible party in case of AI malfunctions.
Creating a specialised regulatory body that can quickly adapt to changes in AI technology, similar to financial regulators like SEBI.