(Bloomberg Opinion) — Gary Gensler, the chief U.S. securities regulator, tapped Scarlett Johansson and Joaquin Phoenix's “Her” last week to help explain his concerns about the risks of artificial intelligence in finance. Money managers and banks are rushing to adopt a variety of AI-generating tools, and the failure of one of them could cause chaos, just as the AI companion played by Johansson left Phoenix's character and many others heartbroken.
The problem of critical infrastructure is not new, but large language models like OpenAI's ChatGPT and other modern algorithmic tools present uncertain and new challenges, including automated price agreements, or breaking the rules and lying about it. Predicting or explaining an AI model actions are often impossible, making things even more complicated for users and regulators.
The Securities and Exchange Commission, which Gensler chairs, and other watchdogs have examined the potential risks of widely used technology and software, such as large cloud computing companies and the near-ubiquitous risk and portfolio management platform. of BlackRock Inc. This summer the global IT crash caused by cybersecurity firm CrowdStrike Holdings Inc. it was a stark reminder of the potential pitfalls.
Just a few years ago, regulators decided not to label such infrastructure as “systemically important,” which could have led to tougher rules and oversight around its use. Instead, last year the Financial Stability Board, an international panel, drafted guidelines to help investors, bankers and supervisors understand and monitor the risks of defaults in critical third-party services.
However, generative AI and some algorithms are different. Gensler and his peers around the world are playing catch-up. One concern about BlackRock's Aladdin was that it could influence investors to make the same types of bets in the same way, exacerbating herd-like behavior. Fund managers argued that their decision-making was separate from the support Aladdin provides, but this is not the case with more sophisticated tools that can make choices on behalf of users.
When LLMs and algos are trained on the same or similar data and become more standardized and widely used for trading, they can very easily follow copycat strategies, leaving markets vulnerable to sharp changes. Algorithmic tools have already been blamed for rapid crashes, such as in the yen in 2019 and the British pound in 2016.
But that's just the beginning: As cars get more sophisticated, the dangers get weirder. There is evidence of collusion between algorithms—whether intentional or accidental is not quite clear—especially among those built with reinforcement learning. A study of automated pricing tools provided to gasoline retailers in Germany found that they quietly learned secret strategies that increased profit margins.
Then there is dishonesty. One experiment instructed OpenAI's GPT4 to act as an anonymous stock market trader in a simulation and was given a liquid insider tip that traded even though it was told it wasn't allowed. Moreover, when asked by her “manager” he hid the fact.
Both problems arise in part from giving an AI tool a single targetsuch as “maximize your profits”. This is also a human problem, but AI is likely to be better and faster at doing it in ways that are hard to track. As generative AI evolves into autonomous agents that are allowed to perform more complex tasks, they may develop superhuman abilities to follow the letter rather than the spirit of financial rules and regulations, according to researchers at the Bank for International Settlements. (BIS) enter it a working paper this summer.
Many algorithms, machine learning tools, and LLMs are black boxes that do not operate in predictable, linear ways, which makes their actions difficult to explain. BIS researchers noted that this could make it much more difficult for regulators to spot market manipulation or systemic risks until the consequences come.
The other sharp question this raises: Who is responsible when cars do bad things? Attendees at a forex-focused trading technology conference in Amsterdam last week were chewing over just that topic. One trader lamented his loss of agency in an increasingly automated trading world, telling News Bloomberg that he and his peers had become “just algo DJs” choosing only which model to bring.
But the DJ picks the tune, and another attendee worried who's holding the can if an AI agent wreaks havoc on the markets. Will it be the trader, the fund that employs them, its compliance or IT department, or the software company that supplied it?
All of these things need to be worked out, and yet the AI industry is developing its tools and financial firms are rushing to use them in a multitude of ways as quickly as possible. Safer options are likely to keep them contained specific and limited tasks for as long as possible. This would help give users and regulators time to learn how they work and what handrails can do – and if they go wrong, the damage will also be limited.
Potential gains in the supply mean investors and traders will struggle to keep up, but they should heed Gensler's warning. Learn from Joaquin Phoenix in “It” and don't fall in love with your cars.
More from Bloomberg Opinion:
- Big AI users fear being held hostage by ChatGPT: Paul J. Davies
- Salesforce is one The dark horse in Chariot Race AI: Parmy Olson
- How many bankers Need to change a light bulb?: Marc Rubinstein
Want more Bloomberg Opinion? Opina
To contact the author of this story:
Paul J. Davies in (email protected)