What companies can do to prepare for AI regulations

François Candelon, Rodolphe Charme di Carlo, Midas De Bondt and Theodoros Evgeniou • 8 min read
While privacy and data-related laws already impact companies in material ways, they are probably only the tip of the iceberg.

Last month, WhatsApp was hit with a EUR225 million ($352.25 million) fine for violating Europe’s privacy rules. It is the largest fine ever from the Irish Data Protection Commission and the second-highest under EU GDPR rules. Meanwhile, China just passed its first data protection law — the Personal Information Protection Law (PIPL) — a game-changer for companies with data or businesses in China. While privacy and data-related laws already impact companies in material ways, they are probably only the tip of the iceberg of technology regulations yet to come.

Attention is now shifting to how data is used by the software — particularly by Artificial Intelligence (AI) algorithms that can diagnose cancer, drive a car, approve a loan or assess examination grades. For example, the EU considers regulation to be essential to the development of AI tools that consumers can trust, and China puts additional legal responsibilities on algorithm owners operating in a number of areas such as web search, e-commerce or short-video sharing. Companies will need to prepare for these upcoming regulations both to avoid regulatory risk and to maintain their stakeholders’ trust. In our work, we explore three main risks organisations face as they integrate AI into their business.

Unfair outcomes: The risks of using AI
AI systems that produce biased results have been making headlines. Unfair outcomes can crop up in online advertisement algorithms, which target viewers by age, race, religion or gender, but the stakes can be higher. For example, one study published in the Journal of General Internal Medicine found that the software used by prominent hospitals to prioritise recipients of kidney transplants discriminated against Black patients. In most cases, the problem stems from the data (created by people) used to train the AI. If that data is biased, then the AI will acquire and may even amplify the bias.

In theory, it might be possible to code some concept of fairness into the software. For example, Amazon is experimenting with a fairness metric called conditional demographic disparity. But one hurdle is that there is no agreed-upon definition of fairness. In dealing with biased outcomes, regulators have mostly fallen back on standard anti-discrimination legislation. But with AI increasingly in the mix, accountability is challenging. Worse, AI increases the potential scale of bias: Any flaw could potentially affect millions of people.
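To make the idea concrete, here is a minimal sketch of how a conditional demographic disparity check could be computed. It is an illustration under stated assumptions, not Amazon’s implementation: the column names and the toy loan data are hypothetical.

```python
# Minimal sketch of a conditional demographic disparity (CDD) style check.
# The column names ("facet", "stratum", "accepted") and the data are illustrative.
import pandas as pd

def demographic_disparity(df: pd.DataFrame, facet: str) -> float:
    """Share of rejections belonging to `facet` minus its share of acceptances."""
    rejected = df[df["accepted"] == 0]
    accepted = df[df["accepted"] == 1]
    if rejected.empty or accepted.empty:
        return 0.0
    return rejected["facet"].eq(facet).mean() - accepted["facet"].eq(facet).mean()

def conditional_demographic_disparity(df: pd.DataFrame, facet: str) -> float:
    """Average the per-stratum disparity, weighted by stratum size."""
    return sum(
        len(part) / len(df) * demographic_disparity(part, facet)
        for _, part in df.groupby("stratum")
    )

# Toy loan decisions, stratified by income bracket.
decisions = pd.DataFrame({
    "facet":    ["a", "a", "b", "b", "a", "b", "a", "b"],
    "stratum":  ["low", "low", "low", "low", "high", "high", "high", "high"],
    "accepted": [0, 1, 1, 1, 0, 1, 1, 1],
})
print(conditional_demographic_disparity(decisions, "a"))  # > 0 suggests group "a" is disadvantaged
```

A positive value means that, even after conditioning on the stratifying attribute, the group in question is over-represented among rejections — exactly the kind of signal an internal fairness review would flag.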

What can businesses do to head off such problems? Firstly, before making any decision, they should deepen their understanding of the stakes by exploring four factors:

  • The impact of outcomes: Some algorithms make decisions with direct and significant consequences on people’s lives. Under some circumstances, it may be sensible to avoid using AI or at least to integrate human judgment into the process. In the latter case, however, the fairness of the algorithm relative to human decision-making should also be weighed when deciding whether to use AI.
  • The nature and scope of decisions: The degree of trust in AI varies with the type of decisions it is used for. When a task is perceived as relatively mechanical — think optimising a timetable or analysing images — software is viewed as trustworthy. But when decisions are thought to be subjective, human judgment is often trusted more.
  • Operational complexity and limits to scale: An algorithm may not be fair across all geographies and markets. Adjusting for variations among markets adds layers to algorithms, pushing up development costs, and customising products and services raises production and monitoring costs significantly. If the costs become too great, companies may even abandon some markets.
  • Compliance and governance capabilities: To follow the more stringent AI regulations that are on the horizon, companies will need new processes and tools: algorithmic audits, documentation (for traceability), AI monitoring, impact assessments and others (a simple sketch of what such audit documentation might capture follows this list). Google, Microsoft, BMW, and Deutsche Telekom are all developing formal AI policies with commitments to safety, fairness, diversity, and privacy. Some companies have even appointed chief ethics officers to oversee the introduction and enforcement of such policies.
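As an illustration of the documentation and traceability tooling mentioned in the last bullet, here is a minimal sketch of a per-decision audit record. The field names and the example model version and features are hypothetical assumptions, not a prescribed standard.

```python
# Minimal sketch of a decision-audit record for traceability.
# All field names and values are illustrative.
import datetime
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass
class DecisionRecord:
    model_version: str    # which algorithm version produced the decision
    input_digest: str     # hash of the inputs, so the case can be replayed later
    decision: str         # the automated outcome, e.g. "approved" or "denied"
    human_reviewed: bool  # whether a person was in the loop
    timestamp: str

def log_decision(model_version: str, features: dict, decision: str,
                 human_reviewed: bool = False) -> str:
    """Serialise one decision as an auditable JSON line."""
    record = DecisionRecord(
        model_version=model_version,
        input_digest=hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        decision=decision,
        human_reviewed=human_reviewed,
        timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))  # append this line to an audit store

print(log_decision("credit-model-1.4", {"income": 52000, "tenure": 3}, "denied"))
```

Even a record this simple lets an auditor reconstruct which model version made a given decision and whether a human reviewed it, which is the backbone of the traceability that stricter regulations are likely to demand.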

Transparency: Explaining what went wrong
Just like human judgment, AI is not infallible. Algorithms will unavoidably make some unfair — or even unsafe — decisions. When that happens, stakeholders need to understand how the decisions are made — and correct them when needed. So should we require — and can we even expect — AI to explain its decisions?

Regulators are certainly moving in that direction. The GDPR already describes “the right … to obtain an explanation of the decision reached” by algorithms, and the EU has identified explainability as a key factor in increasing trust in AI.

But what does it mean to get an explanation for automated decisions, for which our knowledge of cause and effect is often incomplete? Business leaders considering AI applications also need to reflect on two factors:

  • The level of explanation required: With AI algorithms, explanations are broadly classified into two groups. Global explanations cover all outcomes of a given process, describing the rules or formulas that specify the relationships among input variables. Local explanations offer the rationale behind a specific output — say, why one applicant (or group of applicants) was denied a loan while another was granted one. They can take the form of statements answering the question: what are the key customer characteristics that, had they been different, would have changed the output or decision of the AI? (A minimal sketch of such a counterfactual explanation follows this list.)
  • The trade-offs involved: The most powerful algorithms are often opaque. One example is Alibaba’s Ant Group, whose MYbank uses AI to approve small business loans in under three minutes without human involvement. To do this, it uses more than 3,000 data inputs from all over the Alibaba ecosystem — hence, clearly articulating how it arrives at specific assessments (let alone providing a global explanation) is practically impossible. Many of the most exciting AI applications require algorithmic inputs on a similar scale. Tailored payment terms in B2B markets, insurance underwriting, and self-driving cars are only some of the areas where stringent AI explainability requirements may hamper companies’ ability to innovate or grow.
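For illustration, here is a minimal sketch of a local, counterfactual-style explanation. The decision rule is a made-up stand-in for an opaque model, and the candidate feature changes are hypothetical; the point is only to show the shape of the question "what would have had to be different?"

```python
# Minimal sketch of a local (counterfactual) explanation: which single feature
# change would have flipped a rejection into an approval? The scoring rule is a
# hypothetical stand-in for an opaque model.
from typing import Callable, Dict, List, Tuple

def approve(applicant: Dict[str, float]) -> bool:
    # Illustrative decision boundary, not any real lender's model.
    score = (0.4 * applicant["income"] / 1000
             + 5 * applicant["years_in_business"]
             - 0.3 * applicant["existing_debt"] / 1000)
    return score >= 25

def counterfactuals(applicant: Dict[str, float],
                    candidate_changes: Dict[str, List[float]],
                    decide: Callable[[Dict[str, float]], bool]) -> List[Tuple[str, float]]:
    """Return single-feature changes that would flip a rejection into an approval."""
    flips = []
    for feature, values in candidate_changes.items():
        for value in values:
            changed = dict(applicant, **{feature: value})
            if not decide(applicant) and decide(changed):
                flips.append((feature, value))
                break  # keep the smallest listed change per feature
    return flips

applicant = {"income": 40000, "years_in_business": 1, "existing_debt": 20000}
changes = {"income": [45000, 55000, 70000], "existing_debt": [15000, 10000, 5000]}
print(counterfactuals(applicant, changes, approve))  # -> [('income', 70000)]
```

Real systems search over far more features, but the output has the same form: a statement of which characteristics, had they been different, would have changed the decision.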

There are, however, some opportunities. Explainability requirements could offer a source of differentiation. If Citibank, for example, could produce explainable AI for small-business credit that is as powerful as Ant’s, it would certainly dominate the EU and US markets.

The bottom line is that although requiring AI to provide explanations for its decisions may seem like a good way to improve its fairness and increase stakeholders’ trust, it comes at a stiff price — one that may not always be worth paying. When it is not, the choice is either to accept a trade-off between the risk of some unfair outcomes and the returns from more accurate output overall, or to abandon using AI altogether.

Learning and evolving: A shifting terrain
One distinctive feature of AI is its ability to learn. But there are drawbacks to continuous learning: Although accuracy can improve over time, the same inputs that generated one outcome yesterday could produce a different one tomorrow, because the algorithm has been changed by the data it received in the interim.

In figuring out how to manage algorithms that evolve — and whether to allow continuous learning in the first place — business leaders should focus on two aspects:

  • Risks and rewards: Continuous learning can improve the accuracy and quality of AI outcomes, but it can also produce unexpected ones. When stakes are high, companies may need to consider locking their algorithms (a simple contrast between a locked and a continuously learning model is sketched after this list).
  • Complexity and cost: Companies may find themselves running multiple algorithms across different regions, markets, or contexts, each of which has responded to local data and environments.
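To make the idea of locking concrete, here is a minimal sketch contrasting a locked model with one that keeps learning from new batches of data. It uses scikit-learn's SGDClassifier purely as an illustration, and the data is random noise; no claim is made about any particular company's system.

```python
# Minimal sketch: a "locked" model versus one that keeps learning online.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_initial = rng.normal(size=(500, 4))
y_initial = rng.integers(0, 2, 500)           # toy labels, purely illustrative

# Locked: trained once, then frozen. The same input gives the same answer tomorrow.
locked = SGDClassifier(random_state=0).fit(X_initial, y_initial)

# Continuously learning: updated as new batches arrive, so its answer for the
# same input can drift as the data it sees changes.
online = SGDClassifier(random_state=0)
online.partial_fit(X_initial, y_initial, classes=np.array([0, 1]))

probe = X_initial[:1]                         # one fixed input to re-score over time
for _ in range(10):                           # simulated stream of new data
    X_new = rng.normal(size=(50, 4))
    y_new = rng.integers(0, 2, 50)
    online.partial_fit(X_new, y_new)

print(locked.predict(probe), online.predict(probe))  # the two may now disagree
```

Locking removes that drift, at the cost of forgoing further accuracy gains; that is the trade-off the first bullet describes.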

To cope with all the new AI risks and regulations, organisations will need to create new sentinel roles and processes to make sure that all algorithms are operating appropriately and within authorised risk ranges. Chief risk officers may have to expand their mandates to include monitoring AI systems as they evolve. Unless companies engage early with these challenges, they risk eroding trust in AI-enabled products and triggering unnecessarily restrictive regulations, which would undermine not only business profits but also the potential value AI could offer consumers and society.
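As a rough illustration of what such a sentinel process might check, the sketch below compares a handful of live metrics against pre-agreed bounds. The metric names and thresholds are hypothetical.

```python
# Minimal sketch of a "sentinel" check: flag any monitored metric that has
# left its authorised range. Metric names and bounds are illustrative.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class RiskRange:
    metric: str
    lower: float
    upper: float

def out_of_range(metrics: Dict[str, float], ranges: List[RiskRange]) -> List[str]:
    """Return an alert for every monitored metric outside its authorised range."""
    return [
        f"{r.metric}={metrics[r.metric]:.3f} outside [{r.lower}, {r.upper}]"
        for r in ranges
        if not (r.lower <= metrics[r.metric] <= r.upper)
    ]

authorised = [
    RiskRange("approval_rate", 0.55, 0.75),
    RiskRange("conditional_demographic_disparity", -0.05, 0.05),
]
live = {"approval_rate": 0.62, "conditional_demographic_disparity": 0.11}
print(out_of_range(live, authorised))  # flags the disparity metric for review
```

In practice the ranges would be set by the risk function and the alerts routed to whoever holds the sentinel role, but the principle is the same: the algorithm's behaviour is continuously compared against the risk ranges it has been authorised to operate within.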

François Candelon is a managing director and senior partner at the Boston Consulting Group and the global director of the BCG Henderson Institute. Rodolphe Charme di Carlo is a partner in the Paris office of the Boston Consulting Group. Midas De Bondt is a project leader in the Brussels office of the Boston Consulting Group. Theodoros Evgeniou is a professor at INSEAD.
