Artificial Intelligence: The New International Regulatory Environment Between Norm Conflicts and Harmonization

By Philippe Montigny

June 11, 2024


The Need for AI Regulation

Understanding the foundations of Artificial Intelligence (AI) reveals the critical need for our societies to establish a regulatory environment.

The first question is the reliability of AI products and services. AI relies on two pillars: large amounts of data, and the learning and computing algorithm that processes this data to produce the intended results, such as texts, analyses, images, sounds, and other creations, as well as proposals and recommendations.

The reliability of an AI-designed product depends directly on the quality of both the data and the learning and computing system. Results based on data collected and validated through scientific experiments, processed by a supervised learning system controlled by experts, are likely to be more reliable than, and even superior to, human-generated results. Conversely, an AI product built on internet data with an unsupervised learning algorithm will yield results of unpredictable reliability.
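
To make this distinction concrete, the short Python sketch below contrasts the two situations. It is purely illustrative: the synthetic data and the scikit-learn models are stand-ins chosen for the example, not systems discussed in this article. A supervised model trained on curated, labelled data can have its reliability measured against known answers, while an unsupervised algorithm applied to noisy, unlabelled data produces groupings whose agreement with any expected "truth" is far less predictable.

```python
# Illustrative sketch only: synthetic data and generic models stand in for
# "curated, expert-validated data" versus "noisy internet data".
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

# Curated data: well-separated classes with expert-provided labels.
X_clean, y_clean = make_classification(
    n_samples=1000, n_features=10, n_informative=8, class_sep=2.0, random_state=0
)

# Noisy, unlabelled data: same kind of task, but weak signal and label noise.
X_noisy, y_hidden = make_classification(
    n_samples=1000, n_features=10, n_informative=2, flip_y=0.3, class_sep=0.5, random_state=0
)

# Supervised learning on curated, labelled data: reliability can be measured.
X_tr, X_te, y_tr, y_te = train_test_split(X_clean, y_clean, random_state=0)
supervised = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("Supervised, curated data accuracy:", accuracy_score(y_te, supervised.predict(X_te)))

# Unsupervised learning on noisy, unlabelled data: the clusters it finds
# may or may not correspond to the categories a user actually expects.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_noisy)
agreement = max(np.mean(clusters == y_hidden), np.mean(clusters != y_hidden))
print("Unsupervised, noisy data (best-case agreement):", agreement)
```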

Informing users or consumers that a product is AI-generated is a transparency obligation that the law must promote. Just as the law requires the food industry to specify the nature of the ingredients in its products, AI-related laws should first require companies to disclose whether and how they use AI in what they sell or make available to the public.

For example, in March 2024 the U.S. Securities and Exchange Commission penalized two investment advisory firms for making false or misleading statements to their clients about the use of predictive AI in their investment recommendations.

Consequences of Existing Laws

AI’s basic material is data, which may be protected by copyright (as with generative AI based on large language models) or be more sensitive when it involves personal data or private content. The temptation to use AI systems in violation of copyright or personal data protection laws underlines the need for strict legal regulation of these practices.

The European AI Law adopted on March 13, 2024, aims to regulate AI usage, especially generative AI, and explicitly prohibits AI applications that would infringe on EU values and fundamental rights: social scoring, systematic biometric identification, and content manipulation.

In the U.S., the 2022 White House Blueprint for an AI Bill of Rights outlines the fundamental principles for AI laws, including non-discrimination, privacy, and the right to information about AI products.

The European AI Law (March 2024)

Spanning 459 pages with detailed annexes, the European AI Law adopted on March 13, 2024, results from nearly six years of expert consultations and negotiations among member countries.

The Ethical Challenges of Artificial Intelligence

Artificial Intelligence is far from being solely a legal or compliance matter. Many companies use Artificial Intelligence systems in conducting their operations, whether in computer programming, medical research, engineering, financial and stock market operations, or customer relations.

The proliferation of Artificial Intelligence uses is a source of concern for employees, company customers, and citizens themselves. The multiplication of “fakes”, fabricated content observed in the fields of information, media, images, and elsewhere, further reinforces this concern.

Legal frameworks are beginning to be established to regulate the development of Artificial Intelligence, to allow citizens to distinguish the false from the true and to protect them from deviant applications, such as discriminatory practices, infringements of privacy, or violations of third-party rights (especially intellectual property rights) in the data used.

But beyond the procedures required today or tomorrow by law, companies should undertake specific actions aimed at their employees to explain the workings of Artificial Intelligence, detail the applications implemented, and specify the measures taken to comply with legal obligations. The challenge is to help employees familiarize themselves with Artificial Intelligence so that they can understand both its power and its limitations.

In particular, it is important to remember that Artificial Intelligence is a tool at the service of the company, but it is not a tool like any other. One must be aware that Artificial Intelligence produces results that can induce cognitive biases, linked either to the quality of the data or to the learning process, but also to human interpretation biases such as confirmation bias, attribution bias, or representativeness bias. The user must therefore maintain a critical mindset toward what Artificial Intelligence proposes, whether in decision-support systems or in the products derived from them (texts, presentations, media, images, etc.). For example, the same question posed to a generative Artificial Intelligence in Europe, the USA, or China will lead to different answers. In other words, just because Intelligence is Artificial does not mean it is universal!

Given the major transformations that Artificial Intelligence brings to companies' modes of production and organization, and its impact on employees' relationship to their work and, beyond that, on the customer relationship, it is de facto a matter of corporate governance: a subject on which stakeholders will undoubtedly ask the company to explain itself, and one that could well one day become the object of specific extra-financial reporting.

Artificial Intelligence: An Essential Question for the Compliance Officer

Artificial Intelligence is now a subject that the Compliance Officer can no longer ignore for three different but complementary reasons.

First, the multiplication of regulatory obligations and the growth of societal expectations make the Compliance Officer's responsibility heavier and more complex. This is compounded by the increase in the amount of data they must process, driven by the accelerating digitalization of the company's operations. Without the assistance of an Artificial Intelligence system, it will be increasingly difficult for them to fulfill their mission with the rigor and speed that management bodies are entitled to expect. Moreover, regulatory and control authorities will quickly differentiate between companies that have invested in compliance management tools and those that have fallen behind. Fifteen years ago, companies that had not invested in appropriate due diligence tools were accused of laxity. What will authorities say, in a few years, about companies found guilty of compliance failures that did not invest in time in efficient AI-based compliance tools?

Second, while Artificial Intelligence is already a commonly used tool in certain sectors (IT, medical research, etc.), the simplification of its uses will allow most companies to rely on its power in their daily operations. The emerging legal framework at the European and American levels imposes a number of obligations on companies that fall under compliance and thus add to the other subjects the Compliance Officer must cover: personal data, due diligence, anti-corruption, embargoes, etc.

Finally, even though Artificial Intelligence is only at the dawn of its development, we can already see the ethical challenges it raises, some of which are already subject to regulation, such as respect for human rights or non-discrimination. It is likely, however, that future developments will pose ethical questions before the legislator takes them up. The vigilance of the Compliance Officer, especially if they are specifically a “Risks, Ethics, and Compliance Officer”, is therefore fundamental to enable the organization to anticipate the coming developments of Artificial Intelligence, identify the ethical challenges they will raise, and prepare the measures that will allow the company to respond appropriately.
