The advent of advanced technologies like artificial intelligence (AI), quantum computing and blockchain presents an urgent need for organisations to update their existing approach to risk management.  

With people at the heart of a successful risk management strategy, organisations must determine how to adapt the widely accepted three lines of defence (3LoD) model—operational management, risk and compliance, and internal audit roles—to ensure continued robust governance.

For over a decade, the model has helped organisations manage risks with roles and responsibilities clearly spelled out. For the framework to stay effective, though, it must evolve.

The first line: Front-line staff and operational management

As advanced technologies become core to business, staff running day-to-day operations must be equipped with the necessary skills and capabilities to understand and manage the risks inherent to these technologies—risks such as data breaches, operational disruption, reputational damage from poorly designed solutions, and failure to comply with regulations.

Take AI-driven solutions, for instance. These must comply with organisations' risk policies and regulatory guidelines to ensure that existing technology-related risks are addressed, while recognising the risks that are unique to the non-deterministic nature of AI and its broader impact on the stakeholder ecosystem in which it operates.

Organisations must also rethink data governance by proactively identifying and mitigating the accompanying risks instead of relying solely on traditional approaches that focus on compliance. This requires adapting their governance processes to manage the data that is generated, shared and ingested by AI models on an ongoing basis. Solutions include promoting accountability and a culture of data stewardship, ensuring that access to data through integrated AI channels is not overly permissive, and that the data used for model training reflects real-world intent.
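As a minimal sketch of the access-control point above: an AI retrieval channel should enforce the requesting user's existing data permissions rather than grant the model blanket access. All names here (the role map, record store and function) are illustrative assumptions, not a specific product's API.

```python
# Hypothetical guardrail: an AI retrieval channel that filters records by
# the caller's existing entitlements before anything reaches the model.
ROLE_ACCESS = {
    "analyst": {"public", "internal"},
    "hr_manager": {"public", "internal", "pii"},
}

RECORDS = [
    {"id": 1, "classification": "public", "text": "Quarterly summary"},
    {"id": 2, "classification": "pii", "text": "Employee salary data"},
]

def retrieve_for_ai(role: str, query: str) -> list[dict]:
    """Return only records the caller's role is cleared to see."""
    allowed = ROLE_ACCESS.get(role, set())
    return [r for r in RECORDS if r["classification"] in allowed]

# An analyst's AI-assisted query never surfaces PII-classified records:
print([r["id"] for r in retrieve_for_ai("analyst", "salaries")])  # [1]
```

The design choice is that the filter sits in the channel itself, so a permissive prompt cannot widen access beyond what the user already holds.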

This approach also resonates with regulators' increasing expectations that organisations implement robust internal controls to deal with the risks of using advanced technologies—for example, ensuring personally identifiable information (PII) is not exposed on public platforms. Businesses will therefore also need to integrate real-time monitoring and safeguards into the technology pipeline to ensure compliance with regulatory requirements, and legal and ethical standards.
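One simple form such a pipeline safeguard could take is a pattern-based scan that redacts likely PII before content leaves the organisation. The patterns below are deliberately narrow examples (an email address and a Singapore-style ID format); a production system would need far broader detection.

```python
import re

# Illustrative real-time safeguard: flag and redact common PII patterns
# in outbound text before it reaches a public platform.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "id_number": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # Singapore-style NRIC format
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Redact matches and report which PII types were found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, found

clean, hits = redact_pii("Contact jane.doe@example.com, ID S1234567A.")
print(hits)  # ['email', 'id_number']
```

In practice such a check would sit alongside, not replace, human review and platform-level controls; its value is catching routine leaks automatically and in real time.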

The second line: Strengthening risk and compliance frameworks

Second-line staff are responsible for strengthening risk management processes by enhancing policies and frameworks, communicating such frameworks to the first line and conducting compliance checks.

However, as advanced technologies start to dominate, second-line staff must shift from being reactive to being proactive, and take an agile, forward-looking approach. This starts with upskilling in areas like data ethics, cyber-risks and AI governance. Additionally, fulfilling their role requires leveraging real-time risk analytics, collaborating across functions on product development, and applying scenario-planning to understand emerging tech risks. 

In driving improvements, organisations should consider following the tenets of AI risk management set out in international standards, some of which have been adopted on a national scale in countries like Singapore and Australia. Such standards play an important role by offering guiding principles about the processes to follow when establishing an AI risk management programme, instead of dictating the technology to use or the compliance steps to take.

This higher-level approach ensures the second line can better fulfil its oversight function while developing policies and frameworks which are flexible, dynamic and principles-based. That in turn helps the organisation to update its risk management policies to keep pace with emerging threats while remaining open to innovation.

The third line: Upskilling internal audit

Internal auditors usually comprise business auditors with strong domain knowledge, as well as technology auditors who can assess technical controls and cyber-security.

In today’s Intelligent Age, systems involving AI and related technologies add complexities that many internal auditors may not be well equipped to assess. This could impact their ability to ask data scientists the necessary questions when auditing AI-centred processes or functions. Consequently, organisations must rethink their risk management strategies for this line of defence too.

Some organisations have embedded data scientists in audit teams to bridge these knowledge gaps. However, to sustain this initiative, organisations must focus on upskilling their technology and business auditors to become more proficient in AI deployment, technical set-ups and implementation processes. Internal auditors can start with foundational training in AI, with technology auditors then following learning roadmaps to deepen their knowledge.

Ultimately, the advent of advanced technologies brings the potential to evolve and streamline the 3LoD model, enabling organisations to have a common view of risks and control metrics across their three lines of defence instead of running separate reviews. This improves each line’s ability to have a consistent interpretation of risks and control effectiveness, while allowing for real-time updates on risk posture.

Amid disruptions from AI and other advanced technologies, organisations which proactively seize new opportunities will find themselves ahead of the curve. However, the need for robust risk management cannot be ignored: as the world moves into the Intelligent Age, organisations must refresh their frameworks not only to stay relevant, but also to strengthen trust and stay ahead of industry change.
