
AI risks and governance

Key risks, governance challenges and what banks should do to prepare for responsible AI adoption

April 2025

Artificial intelligence (AI) continues to transform the European financial services landscape. The start of 2025 has seen significant milestones in the EU regulatory framework for AI. Meanwhile, away from the headlines, many banks are pressing ahead with the deployment of AI across a wide range of use-cases, with the promise of significant benefits to efficiency and customer service.

The evolving legal framework

In February 2025, the first legal obligations under the EU AI Act (passed into law last summer) took effect:

  • The prohibition on AI systems posing 'unacceptable risk' to citizens' safety and fundamental rights
  • The requirement for organisations using AI to ensure adequate 'AI literacy' among their staff.

The immediate impact of these requirements on financial firms will likely be limited. The prohibition applies mainly (though not exclusively) to public authorities and focuses primarily on civil liberties issues, most of which are not directly linked to financial services business. Indeed, guidelines from the European Commission (EC) EU AI Office made clear that the prohibition on 'social scoring' is not intended to ban legitimate activities such as credit scoring or insurance risk profiling. Further guidelines on the definition of an AI system also explicitly stated that the "vast majority of systems … will not be subject to any regulatory requirements under the AI Act." The guidelines also gave further comfort to banks by clarifying that linear and logistic regression models do not qualify as AI systems. This removed the prospect that large numbers of existing credit models (where linear and logistic regressions are widely used) would be considered AI, and therefore high-risk AI systems, under the AI Act.

Limiting the impact of AI Act obligations is in line with the EC's wider commitment to reducing regulatory burdens (as is its withdrawal in February of a proposed AI Liability Directive). Less expansive regulatory requirements will certainly be welcomed by banks. The risks from AI will, however, still need to be prudently managed: strong AI governance will be central to achieving this.

Real-world deployment

Alongside these regulatory developments, European banks' use of AI is accelerating. Having tested potential applications over the past 12 months, many banks are now moving from proof of concept to scaling up AI applications across their businesses. Fraud detection and business process automation remain the main areas of focus, as banks identify opportunities to replace manual processing and so increase speed, efficiency and accuracy. This is already producing significant cost reductions at banks on the technological frontier.

By contrast, credit and liquidity modelling have received less attention. This is in part because existing models are already highly sophisticated statistical systems, which reduces the potential gains from using AI in these fields. It also reflects regulatory requirements for new credit and liquidity models to be pre-approved by supervisors. As many banks have experienced, getting supervisory approval can be a lengthy and resource-intensive process, a further challenge to the business case for employing AI in these use-cases.

AI risk taxonomy

As banks' use of AI grows, their approach to managing AI risks must mature in step. Broadly, these risks can be grouped under three headings (illustrated in the sketch after this list):

  • Model risk: risk of models producing inaccurate, unreliable or unwanted outputs
  • Human risk: risk of users misinterpreting or misusing model outputs
  • Adversarial risk: risk either of models or data being compromised by external actors, e.g. via cyberattacks; or of AI being used to enhance 'classic' forms of external attack, e.g. via deepfake fraud.
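To make this taxonomy concrete, here is a minimal, hypothetical sketch in Python of how a bank might record identified risks against the three headings above. The class names, example risks and owning functions are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum


class AIRiskCategory(Enum):
    """The three headings of the taxonomy above."""
    MODEL = "model"              # inaccurate, unreliable or unwanted outputs
    HUMAN = "human"              # users misinterpreting or misusing outputs
    ADVERSARIAL = "adversarial"  # compromised models/data, or AI-enhanced attacks


@dataclass
class AIRisk:
    """A single identified risk attached to an AI use-case."""
    category: AIRiskCategory
    description: str
    owner: str  # function accountable for mitigation (hypothetical examples below)


# Illustrative entries for a fraud-detection use-case; all names are assumptions.
fraud_detection_risks = [
    AIRisk(AIRiskCategory.MODEL,
           "False negatives on novel fraud patterns", "Model Risk Management"),
    AIRisk(AIRiskCategory.HUMAN,
           "Analysts over-relying on model scores", "Operational Risk"),
    AIRisk(AIRiskCategory.ADVERSARIAL,
           "Training data poisoned via injected transactions", "IT Security"),
]

for risk in fraud_detection_risks:
    print(f"[{risk.category.value}] {risk.description} -> {risk.owner}")
```

Even a toy structure like this makes the cross-functional point visible: each risk category naturally lands with a different owning function.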

Managing these risks appropriately will involve multiple functions within a bank. These will include first-line business units as well as multiple control functions, from 'traditional' model risk management to legal, compliance, human resources, operational risk and IT security divisions. AI risk management is thus an inherently multidisciplinary, cross-functional activity, and banks' AI governance frameworks need to reflect that.

AI governance models

Organisationally, banks appear to be adopting a range of models with the aim of ensuring that different AI risk management stakeholders are brought together. Some have given lead responsibility to a single control function, usually model risk management. Others have followed a more collegiate 'forum' approach. Both options bring advantages and disadvantages: centralisation can support faster decision-making, while broader committees may better ensure that all voices are heard. The right structure for each bank will depend on its overall business model and the range of use-cases where AI is deployed.

So far, supervisors have not prescribed specific models or structures for managing AI risk. (Nor does the AI Act specify precisely how firms should comply with its governance requirements.) Indeed, the European Central Bank (ECB) has not yet issued any specific guidance on banks' use of AI, although forthcoming ECB guidance, expected to be published in the summer, should include some expectations on the use of machine learning.

Ensuring effective governance is, however, a key priority for the ECB. Supervisors will likely see inadequate AI governance as symptomatic of poor governance more broadly. This is a further reason why banks should put a robust AI governance framework in place. At a minimum, this should include the following five key elements (a simple illustrative sketch follows the list):

  • AI principles: Banks should adopt a clear set of principles and commitments to responsible and trustworthy use of AI;
  • AI risk appetite: Banks should include AI risk in their overall Risk Appetite Framework, to ensure proper attention to AI risk management;
  • AI catalogue: Banks should compile and maintain a thorough and up-to-date inventory of all their AI systems and the different risks associated with each;
  • AI risk management: Banks should adopt a comprehensive framework of policies and procedures for managing and mitigating AI risk, in accordance with their risk appetite framework, including roles for each of the three lines of defence;
  • AI oversight: Banks should establish a clear structure for oversight of AI applications to assess and ensure compliance with internal policies as well as legal obligations.
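As a purely illustrative sketch, the Python below shows how the catalogue, risk appetite and oversight elements might fit together in practice. The field names, the 1-to-5 risk-rating scale and the escalation rule are hypothetical assumptions, not a regulatory template.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AISystemRecord:
    """One entry in a bank's AI catalogue; field names are illustrative only."""
    name: str
    business_owner: str          # first-line unit accountable for the system
    use_case: str                # e.g. "fraud detection"
    ai_act_classification: str   # e.g. "minimal risk" or "high risk"
    risk_rating: int             # hypothetical internal scale, 1 (low) to 5 (high)
    controls: list[str] = field(default_factory=list)
    last_review: date | None = None


# Hypothetical ceiling taken from the bank's Risk Appetite Framework.
RISK_APPETITE_CEILING = 3


def requires_escalation(record: AISystemRecord) -> bool:
    """Flag systems breaching risk appetite or lacking a documented review."""
    return record.risk_rating > RISK_APPETITE_CEILING or record.last_review is None


catalogue = [
    AISystemRecord(
        name="fraud-screen-v2",
        business_owner="Payments",
        use_case="fraud detection",
        ai_act_classification="minimal risk",
        risk_rating=4,
        controls=["human review of blocked payments"],
        last_review=date(2025, 3, 1),
    ),
]

for record in catalogue:
    if requires_escalation(record):
        print(f"Escalate to AI oversight: {record.name}")
```

In practice, the escalation rule would reference the bank's actual Risk Appetite Framework and route to whichever oversight body (a lead control function or a forum) the bank has chosen.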

Compliance with the AI Act is the foundation of AI governance. But good governance is not only a matter of compliance: it is also key to winning trust and acceptance of a bank's AI deployment in the eyes of customers, staff and the wider public. This should allow banks to fully capture the benefits of a revolutionary new technology.

Our insights

Setting the ground rules: the EU AI Act

Understanding the regulatory landscape and preparing for the AI future

Subscribe to KPMG's "SSM Insights" newsletter

Our KPMG ECB Office Newsletter provides news and insights into issues relating to the Single Supervisory Mechanism (SSM).


Our people

Matthias Peter

Partner, Financial Services

KPMG in Germany

Benedict Wagner-Rundell

Senior Manager

KPMG in Germany


Connect with us

KPMG combines our multi-disciplinary approach with deep, practical industry knowledge to help clients meet challenges and respond to opportunities. Connect with our team to start the conversation.
