Sanjeev Sanyal
Feb 12, 2024, 12:32 PM | Updated 01:15 PM IST
One of the most damaging manifestations of human hubris is the belief that the future is mostly predictable, if only we could find the correct algorithm.
This belief in the perfect algorithm is behind efforts that range from “Five Year Plans” and Karl Marx’s dialectic explanation of history to confident television punditry on everything from geopolitics to stock markets.
The advent of Excel sheets and easily accessible regression models has given this approach an intellectual veneer and the pretence of an objective methodology.
Amazingly, this hubris has been little affected by the glaring failure of all these “forecasting models” to predict the most consequential turns in human history — the collapse of the Soviet Union, the global financial crisis of 2007-08, the Covid-19 pandemic and so on.
Even when a model (or expert) occasionally succeeds, it rarely repeats the performance. After all, even monkeys throwing darts will occasionally hit the bullseye.
Artificial intelligence (AI) is about to provide this human hubris with a dangerous new tool. Its ability to mine huge amounts of data and make very specific recommendations will give consultants, social scientists (especially economists), and other tarot-card readers an utterly unfounded new confidence that they can finally predict the evolution of the world.
This overconfidence will be bolstered by the fact that large-scale data mining may occasionally throw up genuine insights into how the world functions.
One area where there is already a great deal of enthusiasm about the ability of AI to mine data is finance. Fintech startups and even traditional banks are taken up with the idea that once algorithms have extensively mined piles of data, it will be possible to make the perfect loan decision mechanically.
Some dangers are already visible. First, the approach is based on rear-view data, whereas the whole point of financial risk management is resilience to the unpredictable, random shocks of the future.
There is no harm in using AI as an additional tool, but there is a danger that financiers will soon become slaves of the tool and forget basic principles. This is what happened with complex derivatives in the financial crisis of 2007-08.
Second, different AI models will likely arrive at similar conclusions if they are fed similar data pools. This will create self-reinforcing loops that could lead to systemic group-think, excessive concentration, and blind spots.
We can all see how this happens with social media algorithms. It is more than likely that something similar will happen with fintech algorithms.
Third, the focus on squeezing insight from pre-existing databases means that innovations and new segments could be starved of finance simply because they do not yet have a “track record” of past performance.
Data mining works in stationary fields, where structural relationships are broadly static, but not when there are structural breaks. Ironically, the greater deployment of AI could thus end up discouraging innovation in other fields.
It is not the purpose of this article to discourage the use of AI-based analysis but merely to point out that it is just another useful tool, and that its capabilities are more limited than its enthusiasts propound.
The risk is that, as AI-based analytical tools proliferate, they will be blindly deployed in everything from banking and investing to policy-making and academia. Given their emergent properties, AI-based systems could even “grow” themselves into new areas.
Very soon this would become Black-Scholes pro-max (a reference to the mathematical model for pricing derivatives that was partly responsible for the global financial meltdown of 2007-08).
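For readers unfamiliar with the reference: the Black-Scholes formula prices an option by assuming, among other things, that volatility is a known, fixed number estimated from past data. A minimal sketch (standard textbook formula, with illustrative parameter values chosen by this writer) makes the fragility visible in a single line:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal cumulative distribution, via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.
    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility (annualised)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# The model's key assumption: sigma is a known constant, i.e. the
# future will be statistically like the past. In 2007-08 it was not.
price = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)
```

The elegance of the mathematics is precisely what made it so seductive, and so dangerous: the formula is only as good as the assumption, fed in from historical data, that tomorrow's volatility resembles yesterday's.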
We live in a complex world. Dealing with it requires that we accept that it is fundamentally non-deterministic and unpredictable. While one can make an educated guess over the short term, most forecasting tools fail in the long run.
At best we can make some general statements about direction and possible scenarios. This is why activities like finance and economic policy-making are partly an art rather than a hard science. The faddish use of AI will not change this.
As things stand, AI-based systems are like overconfident children, and their proponents are currently behaving like indulgent parents. If not disciplined early, however, they will grow up to be bullies that could cause a lot of damage.
(The author is an economist and bestselling writer. All opinions are personal).