
Artificial Intelligence: A Revolution in Compliance Risk Mapping


By Philippe Montigny

June 25, 2024


The New Challenges in Risk Mapping

Over the last decade, compliance has become increasingly complex. Initially confined to financial crime, corruption, and money laundering, compliance now also encompasses geopolitical, societal, and social concerns. For instance, obligations related to embargoes and export controls are geopolitical in nature, those regarding the environment and the protection of communities in less developed countries are societal considerations, while obligations on privacy and human rights address social concerns.

Thus, companies must comply with a multifaceted array of laws, regulations, and standards that vary geographically. These obligations also evolve over time with emerging societal expectations and technological advancements such as Artificial Intelligence (AI). In this intricate web of obligations, the task of a Compliance Officer has become particularly complex.

Risk is now multifactorial: it affects supply chains, commercial operations, joint ventures, consortia, acquisitions, partnerships, and even human resource management, including privacy protection. The Compliance Officer’s responsibilities have expanded significantly, requiring multisectoral skills or, at the very least, the ability to manage a team with diverse professional qualifications.

Digitalization of Compliance: An Essential Step

Paradoxically, the multiplication of information sources and data digitalization do not simplify the Compliance Officer’s task. The proliferation of information risks overshadowing significant threats if day-to-day management of minor compliance risks consumes the Compliance Officer’s time, preventing them from addressing long-term, major, or complex risks.

Digitalizing compliance processes is a first response to the proliferation of digital information. Digital compliance avoids redundant data entry and facilitates the identification of missing data. It frees up compliance teams to focus more on risk analysis and identifying mitigation measures.

AI tools integrated into digital compliance solutions take this further, aiding in data analysis and meeting growing reporting demands on compliance risk management.

AI as a Decision Support Tool Amid Data Proliferation

These tools are particularly effective in managing third-party risks faced by companies. Between embargo-related bans, anti-corruption commitments, due diligence obligations, and solvency checks, third-party management exemplifies the multifactorial nature of compliance risk.

When a company deals with thousands or even hundreds of thousands of third parties, AI tools become indispensable for decision-making. Additionally, the data managed by the company is subject to privacy laws, which vary by geography. Here too, AI tools ensure efficient and legally compliant data processing.
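
As a purely illustrative sketch, this multifactorial screening can be pictured as a weighted aggregation of individual checks (sanctions hits, corruption exposure, due diligence status, solvency) into a single score used to prioritize review. The factor names, weights, and thresholds below are assumptions for the example, not a description of any particular tool.

```python
from dataclasses import dataclass

@dataclass
class ThirdPartyScreening:
    """Hypothetical screening results for a single third party (illustrative only)."""
    sanctions_hit: bool           # match against embargo / sanctions lists
    corruption_index: float       # 0.0 (clean) to 1.0 (high perceived corruption exposure)
    due_diligence_complete: bool  # anti-corruption due diligence on file
    solvency_score: float         # 0.0 (insolvent) to 1.0 (solid financials)

def composite_risk(s: ThirdPartyScreening) -> float:
    """Aggregate the factors into a single 0-1 risk score (weights are assumptions)."""
    if s.sanctions_hit:
        return 1.0  # an embargo match overrides everything else
    score = 0.5 * s.corruption_index
    score += 0.3 * (0.0 if s.due_diligence_complete else 1.0)
    score += 0.2 * (1.0 - s.solvency_score)
    return round(score, 2)

# Example: incomplete due diligence in a higher-risk country
print(composite_risk(ThirdPartyScreening(False, 0.6, False, 0.8)))  # -> 0.64
```

In practice, an AI-assisted platform would refine such fixed weights with learned models and continuously updated watchlists, but the aggregation principle behind the scoring is the same.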

Digitalization and AI: An Effective Legal Defense

From a legal perspective, it’s crucial to note that the UK Bribery Act introduced a specific offense of failing to prevent bribery, a concern echoed in many jurisdictions, especially the United States. The U.S. Department of Justice examines the measures companies take to prevent corruption. Defense lawyers used to point to the significant budget allocated to compliance. While budget remains relevant, prosecutors now also assess the effectiveness of compliance tools and the precision of controls.

In an era where compliance is digitalized and effective AI applications are emerging, the use of these tools demonstrates a company’s genuine commitment to integrity. Conversely, the absence of digital and AI tools casts doubt on the company’s compliance efforts and may lead to harsher penalties for failing to prevent misconduct.

Mapping AI-Related Risks

While AI is a valuable tool for managing corporate compliance risks, several authorities now require companies to implement specific actions to mitigate AI-related risks.

The European legislator, in particular, has mandated risk mapping for AI. The European AI Act, adopted by the European Parliament on March 13, 2024, explicitly requires companies to evaluate their AI systems against four specifically defined risk levels.

The Four Risks Defined by European Law

  • Unacceptable Risk: AI systems that violate EU values and fundamental rights; their use and commercialization, including outside the EU, must cease within six months.
  • High Risk: Systems deployed in high-risk products or used to process sensitive data (e.g., privacy, health, income). Companies must declare conformity, register the system in an EU database, and obtain CE marking.
  • Low Risk: AI systems that interact with individuals without posing unacceptable or high risks (e.g., chatbots, artistic creations, interactive training). Companies must ensure transparency, informing users that the content is AI-generated.
  • Minimal Risk: All other AI systems not classified under the previous categories (e.g., video games, spam filters). Companies should establish and apply a voluntary code of conduct.
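
Purely to make the tiering concrete, the sketch below shows how a compliance team might encode the four categories and their headline obligations when inventorying AI systems. The attribute names and the simplified decision order are assumptions for illustration, not the Act's legal tests.

```python
from enum import Enum
from dataclasses import dataclass

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LOW = "low"
    MINIMAL = "minimal"

# Headline obligations per tier, as summarized above
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Cease use and commercialization",
    RiskTier.HIGH: "Declare conformity, register in EU database, obtain CE marking",
    RiskTier.LOW: "Transparency: inform users that content is AI-generated",
    RiskTier.MINIMAL: "Voluntary code of conduct",
}

@dataclass
class AISystemProfile:
    """Simplified, hypothetical inventory record for one AI system."""
    violates_fundamental_rights: bool
    processes_sensitive_data: bool    # e.g., privacy, health, income
    interacts_with_individuals: bool  # e.g., chatbot, generated content

def classify(profile: AISystemProfile) -> RiskTier:
    """Rough first-pass tiering; real classification requires legal analysis."""
    if profile.violates_fundamental_rights:
        return RiskTier.UNACCEPTABLE
    if profile.processes_sensitive_data:
        return RiskTier.HIGH
    if profile.interacts_with_individuals:
        return RiskTier.LOW
    return RiskTier.MINIMAL

chatbot = AISystemProfile(False, False, True)
print(classify(chatbot).value, "->", OBLIGATIONS[classify(chatbot)])
```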

While these categories appear clear on paper, the boundaries between them, particularly between unacceptable and high risk, are not always distinct. A risk initially considered high because of the sensitive data involved could become unacceptable as technology or strategy changes. Regular monitoring of the AI-related risk map is therefore essential.

Building an AI Risk Map

AI systems can be present in all departments of a company, making risk mapping complex. AI can be used in research, production, marketing, communication, and internal management. Hence, creating an AI risk map requires cross-sectoral collaboration within the company, involving researchers, data scientists, legal experts, marketers, managers, and technicians.

Much as export controls on sensitive or dual-use technologies require researchers and engineers to be trained in the relevant legal obligations, AI demands that this legal awareness be spread across the organization.

AI … to the Rescue of AI!

Today, over a thousand regulations govern AI usage worldwide, aiming to foster innovation and economic development while protecting users and citizens.

Regulatory systems have cultural or political specificities. Some countries prioritize innovation over human rights protection, while others focus on ex-post accountability. In Europe, the emphasis is on human rights protection through prevention.

For an international company, real-time compliance with the diverse AI regulations across operating territories is crucial. Systems exist that integrate global AI regulations, enabling companies to verify compliance case-by-case.
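
As an illustration of the idea only, such a system can be thought of as a rule base keyed by jurisdiction and queried per use case. The jurisdictions, rules, and fields below are invented for the sketch and do not reflect any actual product or legal content.

```python
# Hypothetical rule base: jurisdiction -> list of (applies-to predicate, requirement)
RULEBOOK = {
    "EU": [
        (lambda uc: uc["interacts_with_users"], "Disclose AI-generated content"),
        (lambda uc: uc["sensitive_data"], "Register system and obtain CE marking"),
    ],
    "US": [
        (lambda uc: uc["sensitive_data"], "Document impact assessment"),
    ],
}

def requirements(use_case: dict, jurisdictions: list[str]) -> dict[str, list[str]]:
    """Return the applicable (illustrative) requirements per operating territory."""
    return {
        j: [req for applies, req in RULEBOOK.get(j, []) if applies(use_case)]
        for j in jurisdictions
    }

print(requirements({"interacts_with_users": True, "sensitive_data": False}, ["EU", "US"]))
# -> {'EU': ['Disclose AI-generated content'], 'US': []}
```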

The rapid emergence of AI applications and evolving regulations necessitate rigorous structuring of risk mapping activities within organizations, supported by adequate financial and human resources.
