
Without regulator buy-in, scaling AI in financial services will be an uphill battle

Nerijus Zemgulys and Pat Patel • 5 min read
The risks associated with AI continue to grow. Photo: Pexels

Financial institutions have made important attempts to create room for artificial intelligence (AI) in the industry over recent years.

These efforts, however, are often frustrated by disconnected or incomplete AI regulatory frameworks.

There are multiple AI regulatory frameworks, depending on the geography of operations.

Standardising these regulatory frameworks for AI in finance is essential as the industry continues to innovate.

Major financial markets of the US, UK, Canada and Europe have introduced their own regulatory frameworks, with some overlapping themes.

While each framework provides a slightly different take on regulation for AI, transparency and safety are two key elements echoed across these regions.


As we look to empower further enhancements to the industry through the use of AI technologies, what can regulators do to effectively scale AI in financial services?

The right capabilities for regulators

Regulators must work to continuously equip themselves with the capabilities to keep up with the rapidly changing AI landscape.


These changing industry dynamics are explored in detail in the recent report, AI in Financial Services: Making it Happen at Scale, produced in partnership between Boston Consulting Group (BCG) and Elevandi — an organisation established by the Monetary Authority of Singapore to foster an open dialogue between the public and private sectors to advance fintech in the digital economy.

Most regulators around the world are not digital natives.

A survey of over 1,200 government officials from over 70 countries concluded that nearly 70% lag behind the private sector in digital transformation.

However, without a clear understanding of how AI works and its actual and possible impacts, regulators will struggle to implement the right guardrails to protect consumers while enabling innovation and growth.

AI literacy can help regulators across the financial services industry, delivering real benefits in monitoring financial crime and understanding how to combat it.

When it comes to generative AI (GenAI), this includes assessing and mitigating risks such as hallucinations (inaccurate responses), bias built into models, fraud and misinformation, among others.

There are a wide variety of attractive use cases for AI in finance.


For instance, it is already being deployed for key activities such as anti-money laundering (AML), credit and regulatory capital modelling, insurance claims management, product pricing, order routing and trades as well as cybersecurity.

Regulators should learn how to assess and solve these known risks while establishing guidelines and governance requirements to address them.

At the same time, it is important to avoid being too prescriptive on regulations and allow flexibility within clear safety guidelines to enable use cases to flourish.

An effective regulatory framework needs to balance a regulator’s desire to mitigate risks and protect customers with the need to support industry innovation and evolution.

The case of Dutch digital challenger bank bunq offers a fascinating example of this difficult balance.

In 2020, Dutch central bank De Nederlandsche Bank (DNB) banned bunq from using AI to monitor its AML risks, at a time when AI was seen as a key cause of concern for authorities.

Just a few years later, the Dutch courts ruled that DNB had been wrong to ban bunq and to enforce manual processes for banks.

The ruling provided legal certainty to allow financial institutions to introduce AI solutions as part of efforts to maintain compliance and fight financial crime.

Adapting to new opportunities

Rather than treating AI as a new technology to be regulated in its own right, regulators should look for strategies that tackle the underlying risks instead of limiting implementation of the technology itself.

Taking an outcome-based approach, instead of a reactionary one, can help regulators tread this tightrope.

This could include elements such as extending current policies to deal with the real implications and risks of various use cases of AI.

Regulators can also contextualise use cases to better identify and tackle risks, as illustrated in areas where AI can deliver improvements with non-material risk.

There are numerous examples of this, for instance using GenAI to boost the efficiency of writing code, or to improve the speed by which internal documents can be searched and sorted.

Efforts to participate and empower AI development in financial services can also position regulators as valued partners rather than barriers to adoption.

Providing sandboxes for the private sector to test AI use cases in a controlled environment can help innovation while protecting customers.

Singapore’s Infocomm Media Development Authority and the AI Verify Foundation unveiled a GenAI evaluation sandbox in late 2023.

The sandbox brings together more than 10 global players to enable the evaluation of trusted AI products.

Spain also recently approved an AI regulatory sandbox in anticipation of the upcoming EU AI Act — a comprehensive new piece of legislation which will regulate the use and deployment of AI applications.

These examples show how sandboxes can provide spaces for innovation to thrive while allowing the industry to explore new AI functions and capabilities.

The reality is that with clear regulatory buy-in, backed by standardised regulatory frameworks, the ecosystem can be made safer, more transparent and more conducive to innovation.

This sets out the foundations to embed AI in financial systems and allow the industry to move in the right direction.

Nerijus Zemgulys is managing director and partner at Boston Consulting Group while Pat Patel is an executive director at Elevandi

The Edge Singapore
© 2024 The Edge Publishing Pte Ltd. All rights reserved.