As AI advances at a rapid pace, key stakeholders are asking a question: Are we paying enough attention to the risks?
Governments around the world have responded with policies and legislation designed to strike a balance. On the one hand, they want to protect the public. On the other, they want to support the growth of promising new technology.
What is becoming clear is that trust in AI is hardly 'just' a compliance issue. Customers are increasingly demanding transparency about how companies use AI. By taking a proactive approach, leaders of legal departments can do more than stay compliant: They can inspire confidence, boost their organization's brand and drive value.
This article brings a legal lens to the areas where AI pitfalls arise, zooming in particularly on the regulatory landscape in the EU and the UK. With a broad organizational perspective, the article aims to empower in-house legal leaders to calibrate AI risk amid its vast opportunity.
Key themes and takeaways
- Regulatory and governance landscape varies across jurisdictions
The regulatory landscape is diverse. For example, compare the EU and the UK. The EU AI Act, which came into force in August 2024, defines and sets out specific requirements for high-risk AI systems. In the UK, by contrast, the government has yet to introduce legislation. Instead it is creating industry-specific rules based on a consistent set of principles, which mirror those used in the EU.
- Ethical considerations remain top of mind
Issues such as AI transparency, explainability, data quality and equity, bias, discrimination and automated decision-making need to be addressed amid public mistrust of AI. Transparency and explainability in particular raise questions, as explaining complex AI processes and outcomes can be difficult. Businesses are responding in part by establishing ethics boards.
- Third-party risk can be significant but is sometimes overlooked
A business's suppliers will very likely use AI themselves, so businesses also need to assess how key suppliers are using the technology. Legal departments, in turn, often rely on third parties to provide IT systems, so suitable due diligence and contractual protections need to be in place.
- Protecting personal data is critical
Even in the absence of AI-specific regulation, personal data use must meet relevant data-protection laws. In Europe, GDPR transparency requirements dictate that privacy notices clearly provide suitable information to data subjects.
- AI and copyright often stand at cross-purposes
The tension between AI innovation and the creative industry has sparked much debate among corporate and business leaders. In legal departments, GCs and CLOs should consider the extent to which they are entitled to train AI with advice, contracts, documentation and personal data gathered in the course of supporting clients or managing the business. Many bar associations have issued clear guidelines on the use of generative AI (Gen AI).
- Technological advancement continues to drive innovation and legislation
The development of new technologies such as general purpose AI (GPAI) models shows that innovation is setting the pace of change for AI, and ultimately for legislation too. For legal professionals, the challenge is to stay abreast of legislative changes and how they impact the organization.
- AI is bringing change and opportunity to the legal sector
Using AI in general, and Gen AI in particular, can improve how in-house legal teams (and law firms) work. It will also create opportunities for in-house legal professionals to develop new skillsets as their roles are augmented by collaboration with AI technologies. It is therefore a good time for legal leaders to think through the evolving role of corporate legal teams and how they will make the most of AI.
