Will scarce data on rare events mean that AI in finance will lead to the next financial crisis?
As statisticians, we often favour interpretable models, which can out-predict complex AI when sample sizes are small. A new article published online last month by Danielsson and Uthemann of LSE, ‘On the use of artificial intelligence in financial regulations and the impact on financial stability’ [1], examines the growing use of artificial intelligence (AI) by financial authorities and its implications for both micro-regulations and macro-regulations. While micro-regulations stand to benefit from AI thanks to abundant data, clear objectives, and repeated decisions, macro-regulations face obstacles such as infrequent, unique events and the complexity of the financial system. The paper proposes six criteria for judging the suitability of AI in the private sector and in financial regulation, draws a sharp distinction between micro- and macro-regulations, and weighs the risks and benefits of AI in the financial sector. It also discusses the difficulty AI has in understanding the financial system, the implications for decision-making during crises, and the importance of accountability and oversight in AI-driven decision processes.
Image credit: Northern Rock Queue 2007, by Dominic Alves, Brighton, U.K. Reused under the CC BY 2.0 license
‘On the use of artificial intelligence in financial regulations and the impact on financial stability’ [1] explores the growing role of artificial intelligence (AI) within financial authorities and its implications for both micro-regulations and macro-regulations in the financial sector. It argues that while AI holds significant promise for micro-regulations, which deal with day-to-day matters such as risk management and consumer protection, its application to macro-regulations, which concern the stability of the financial system as a whole, faces substantial challenges.
The authors introduce six criteria for evaluating the suitability of AI use in the private sector and financial regulation, which serve as guidelines for judging whether an AI deployment is likely to be effective:

- the availability of sufficient data;
- the stability of the regulatory rules;
- the ability to establish clear objectives;
- the capacity of the AI to make decisions independently;
- accountability mechanisms to address mistakes and misbehaviour;
- an understanding of the potential consequences of errors in the AI-driven decision-making process.
Furthermore, the article examines how the nature of the tasks performed by AI in the financial sector varies, from providing advice to making decisions independently. It highlights the importance of understanding an AI's objectives, which may be predefined by human operators or learned from human feedback during training. The authors stress that financial authorities must adapt to the increasing prevalence of AI in the private sector and its potential impact on regulatory design, supervisory decision-making, and crisis resolution.
The article then turns to the challenges of AI adoption in macro-regulations: the scarcity of relevant data, the unique and infrequent nature of financial crises, the difficulty of managing dynamic interactions within the financial system, and the problem of defining objectives during crisis resolution. These limitations, the authors argue, make human oversight and intervention essential in critical decision-making processes.
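The data-scarcity argument can be made concrete with a back-of-the-envelope calculation (our illustrative numbers, not the paper's): if systemic crises were to strike a given country roughly once every 40 years, even a 50-year dataset would contain only about one crisis, and the sampling error on the estimated crisis probability would dwarf the estimate itself.

```python
import math

p = 1 / 40    # assumed true annual crisis probability (illustrative)
years = 50    # length of a typical good-quality macro dataset (illustrative)

expected_crises = p * years                 # ~1.25 crises observed, on average
se = math.sqrt(p * (1 - p) / years)         # binomial std. error of p-hat
print(f"expected crises observed: {expected_crises:.2f}")
print(f"estimated p with ±2se band: {p:.3f} ± {2 * se:.3f}")
```

Under these assumptions the two-standard-error band (about ±0.044) is wider than the probability being estimated (0.025), so any model, AI or otherwise, trained on such data has essentially nothing to learn rare-event frequencies from.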
Moreover, the article explores AI decision-making in the context of crisis resolution, where emergent information and competing interests are the norm. It discusses AI's limitations in comprehending and responding to these emergent factors, which is why human intuition and judgement remain central to the effective management of crises.
The article also considers the wider implications of AI implementation in the financial sector for decision-making processes, regulatory oversight, and systemic risk management. It warns against overreliance on AI in decision-making and argues for maintaining a balance between AI-driven analysis and human intervention.
The article further touches on AI explainability and the difficulty of making AI decision-making processes transparent and understandable, noting the ongoing efforts to address this issue, particularly in the context of regulatory oversight.
Overall, the article calls for a cautious, well-informed, and adaptive approach to AI integration in the financial sector. Human oversight, accountability, and adaptability, it concludes, are what will keep financial regulation and crisis resolution effective and stable in an increasingly AI-driven financial landscape.
References
- [1] Danielsson, J. and Uthemann, A., ‘On the use of artificial intelligence in financial regulations and the impact on financial stability’, SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4604628