The banking and finance sector should develop a shared responsibility framework with its artificial intelligence partners to address fraud, scams and ransomware attacks arising from AI, Acting Comptroller of the Currency Michael Hsu said Thursday.
Speaking at the Financial Stability Oversight Council’s AI Conference, Hsu highlighted how AI’s use can threaten financial stability.
“From a financial stability perspective, AI holds promise and peril from its use as a tool and as a weapon,” Hsu said Thursday. “The controls and defenses needed to mitigate those risks vary depending on how AI is being used. At a high level, though, I believe having clear gates and a shared responsibility model for AI safety can help.”
Cloud computing’s shared responsibility model could be used as a blueprint, Hsu said.
“In the cloud computing context, the ‘shared responsibility model’ allocates operations, maintenance and security responsibilities to customers and cloud service providers depending on the service a customer selects,” he said. “A similar framework could be developed for AI.”
The newly established U.S. Artificial Intelligence Safety Institute, housed within the National Institute of Standards and Technology, could devise such a framework through its consortium of more than 280 stakeholder organizations, Hsu suggested.
Hsu’s remarks came as Treasury Secretary Janet Yellen warned Thursday that while using AI in finance could lower transaction costs, it carries “significant risks.”
“Specific vulnerabilities may arise from the complexity and opacity of AI models, inadequate risk management frameworks to account for AI risks, and interconnections that emerge as many market participants rely on the same data and models,” Yellen said at Thursday’s conference.
The Treasury Department is seeking public comments on the use of AI in the financial sector.
Hsu, meanwhile, pointed to electronic trading as an example of a technology that starts as a novelty and gradually earns trust. AI is following a similar path, he said: first used to produce inputs for human decision-making, then as a co-pilot enhancing human actions, and finally as an agent executing decisions on humans’ behalf.
Banks adopting AI, however, must set up “clear and effective gates” between those phases to ensure safety.
“Before opening a gate and pursuing the next phase of development, banks should ensure that proper controls are in place and accountability is established,” Hsu said.
Hsu noted the unexpected or unintended consequences of over-reliance on AI.
He cited Jake Moffatt, who in November turned to a chatbot on Air Canada’s website for help finding the airline’s bereavement fare after his grandmother passed away. Moffatt was booking a flight to Toronto, and the chatbot suggested he book the flight and request a refund within 90 days, Hsu said. Air Canada’s policy, however, prohibits retroactive refunds for bereavement flights. Moffatt sued the airline, but Air Canada argued it “cannot be liable for the information provided by a chatbot.” Moffatt won the case, Hsu said.
With chatbots, companies can struggle to identify who is liable for what and how to fix problems when they arise, Hsu said. The same questions arise when AI is used in credit underwriting and consumers are denied credit cards based on an AI algorithm, a decision that is hard to explain, he said.
“With AI, it is easier to disclaim responsibility for bad outcomes than with any other technology in recent memory. The implications for trust are significant,” Hsu said. “Trust not only sits at the heart of banking, it is likely the limiting factor to AI adoption and use more generally.”
Hsu also noted the proliferation of deepfakes, which have evolved from simple voice tricks to more sophisticated deceptions used in big-ticket heists.
Though these incidents have been manageable so far, they could have a more significant financial impact as criminals become more adept at using AI, Hsu said.
In December, Sens. Mark Warner, D-Va., and John Kennedy, R-La., introduced legislation mandating that the FSOC coordinate regulatory efforts aimed at protecting markets from the potentially disruptive impacts of deepfakes, trading algorithms and other AI tools that could destabilize the financial system. Yellen backed the legislation in February.
“An increase in AI-powered fraud could sow the seeds of distrust more broadly in payments and banking,” Hsu said Thursday.