Artificial Intelligence: Both a Tool and an Object of Compliance

By Smart Global Governance’s Scientific Committee

March 19, 2024


Under what conditions is Artificial Intelligence a reliable tool?

To assess the opportunities and risks presented by the professional use of Artificial Intelligence, it is essential to remember that any AI system is based on two elements: sufficient data and an algorithm that, after a period of progressive learning, will provide an analysis.

This learning can be supervised – that is, carried out under human control – semi-supervised, or unsupervised. It can also take the form of reinforcement learning, which proceeds by trial and error, or transfer learning, which applies knowledge acquired in one domain to another.
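To make the distinction more concrete, here is a minimal, purely illustrative sketch in Python (using scikit-learn) that contrasts supervised learning on human-labeled examples with unsupervised learning on unlabeled ones; the features, amounts, and labels are invented for the example and do not come from any real compliance dataset.

    # Purely illustrative sketch: contrasting two of the learning modes above.
    # All values below are fictitious placeholders.
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    # Supervised learning: every example carries a human-provided label
    # (here, 1 = case flagged by a compliance reviewer, 0 = cleared).
    X_labeled = [[1200, 1], [80, 0], [15000, 1], [300, 0]]  # [amount, cross_border]
    y_labels = [1, 0, 1, 0]
    model = LogisticRegression().fit(X_labeled, y_labels)
    print(model.predict([[9000, 1]]))  # prediction for a new, unseen case

    # Unsupervised learning: no labels; the algorithm groups similar cases
    # on its own, and a human must then interpret what each group means.
    X_unlabeled = [[1200, 1], [80, 0], [15000, 1], [300, 0], [90, 0]]
    print(KMeans(n_clusters=2, n_init=10).fit_predict(X_unlabeled))

The point of the contrast is simply that, in the supervised case, the reliability of the result depends directly on the quality of the human-provided labels.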

It is clear that the quality of an Artificial Intelligence system, in other words its reliability, depends on the quantity and quality of the data on the one hand, and on the quality of the algorithm’s learning process on the other.

Artificial Intelligence in the Service of Compliance

Within a company, compliance-related data can be considered high quality. These are data entered under human oversight, through internal or external forms developed by the company or integrated into an information system with entry controls suited to the desired outcomes: data on employees (roles, training, gifts and invitations, travel, etc.), data on suppliers and service providers (types of contractual relationships, due diligence, etc.), legal, regulatory, or normative data (obligations, procedures, standards), and, of course, customer data. These data, which are also subject to data protection obligations such as the GDPR in Europe, form a solid knowledge base for reliable and legitimate exploitation.
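As a purely hypothetical illustration of what such controlled, form-based entry can produce, the data categories above might be represented as structured records along the following lines (Python dataclasses); the field names are assumptions made for the example, not an actual schema.

    # Hypothetical structured records for the data categories mentioned above.
    # Field names are illustrative assumptions, not an actual schema.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class EmployeeRecord:
        role: str
        trainings_completed: list[str] = field(default_factory=list)
        gifts_and_invitations: list[str] = field(default_factory=list)

    @dataclass
    class SupplierRecord:
        contract_type: str
        due_diligence_date: date
        due_diligence_passed: bool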

The learning process can easily be standardized and controlled by experts in the compliance field. When using an Artificial Intelligence system, the Compliance Officer can begin by evaluating the quality of the results provided by the algorithm… and, if necessary, improving its learning logic.
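One simple way to picture this evaluation step, offered here only as a generic sketch and not as the method described above, is to hold back a sample of cases already reviewed by hand and compare the algorithm’s output against that human review (Python, scikit-learn); the labels below are fictitious.

    # Generic sketch: scoring an AI system's output against human review.
    # The labels and predictions are fictitious placeholders.
    from sklearn.metrics import precision_score, recall_score

    human_review = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = confirmed compliance issue
    ai_output    = [1, 0, 0, 1, 0, 1, 1, 0]  # what the algorithm flagged

    # Precision: of the cases the AI flagged, how many were real issues?
    # Recall: of the real issues, how many did the AI catch?
    print("precision:", precision_score(human_review, ai_output))
    print("recall:   ", recall_score(human_review, ai_output))

Low scores on such a held-back sample would be one signal that the data or the learning logic needs to be improved.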

Once a company has reliable Artificial Intelligence in the field of compliance, this new tool allows the Compliance Officer to manage several of the difficulties encountered in carrying out their responsibilities, in particular:

  1. The multiplication of compliance obligations (GDPR, anti-corruption, embargo, etc.),
  2. The diversity of legal frameworks from one country to another (Europe, United States, China, etc.),
  3. The imperative of regularly updated risk mapping,
  4. And the growing need for reporting to various internal audiences (Executive Committee, Audit Committee, Ethics Committee, unions, etc.) and external audiences (shareholders, financial institutions, partners, regulatory authorities, etc.).

Thus freed from many routine verification and drafting tasks, the Compliance Officer can focus on their core business: analyzing risks, designing the means to control them, and monitoring the implementation of their compliance program.

Thus conceived, Artificial Intelligence becomes a decision-support tool for the Compliance Officer, a tool that allows them to contribute more securely and confidently to the strategic decisions of their organization.

The Ethical Challenges of Artificial Intelligence

Artificial Intelligence is far from being a matter for the legal and compliance functions alone! Many companies use Artificial Intelligence systems in conducting their operations, whether in computer programming, medical research, engineering, financial and stock market operations, or even customer relations.

The proliferation of Artificial Intelligence uses is a source of concern for employees, company customers, and citizens themselves. The multiplication of “fakes” – false realities observed in the fields of information, media, images, etc. – further reinforces this concern.

Legal frameworks are beginning to be put in place to regulate the development of Artificial Intelligence, to allow citizens to distinguish the false from the true, and to protect them from deviant applications, such as discriminatory practices or those infringing on privacy or on the rights of third parties (especially in terms of intellectual property) over the data used.

But beyond the procedures required today or tomorrow by law, companies should undertake specific actions aimed at their employees to explain the workings of Artificial Intelligence, detail the applications implemented, and specify the measures taken to comply with legal obligations. The challenge is to help employees familiarize themselves with Artificial Intelligence so that they can understand both its power and its limitations.

In particular, it is important to remember that Artificial Intelligence is a tool in the service of the company, but that it is not a tool like any other. One must be aware that the results Artificial Intelligence produces can carry biases, linked either to the quality of the data or to the learning process, as well as human interpretation biases such as confirmation bias, attribution bias, or representativeness bias. The user must therefore keep a critical mind toward what Artificial Intelligence proposes, whether in decision-support systems or in the products derived from them (texts, presentations, media, images, etc.). For example, the same question posed to a generative Artificial Intelligence in Europe, the USA, or China will yield different answers. In other words, just because Intelligence is Artificial does not mean it is universal!

Given the major transformations that Artificial Intelligence brings to companies’ modes of production and organization, its impact on employees’ relationship to their work and, beyond that, on the customer relationship, it is de facto a matter of corporate governance: a subject on which stakeholders will undoubtedly ask the company to explain itself, and one that could well become, one day, the object of specific extra-financial reporting.

Artificial Intelligence: An Essential Question for the Compliance Officer

Artificial Intelligence is now a subject that the Compliance Officer can no longer ignore for three different but complementary reasons.

First, the multiplication of regulatory obligations and the growth of societal expectations make the Compliance Officer’s responsibility heavier and more complex. This is compounded by the increase in the amount of data they must process, which stems from the accelerating digitalization of the company’s operations. Without the assistance of an Artificial Intelligence system, it will be increasingly difficult for them to fulfill their mission with the rigor and speed that the management bodies are entitled to expect. Moreover, regulatory and supervisory authorities will quickly differentiate between companies that have invested in compliance management tools and those that have fallen behind in this area. Fifteen years ago, companies that had not invested in appropriate due diligence tools were accused of laxity. What will the authorities say, in a few years’ time, about companies found guilty of compliance failures that did not invest in time in efficient compliance tools powered by Artificial Intelligence?

Secondly, while Artificial Intelligence is already a commonly used tool in certain sectors (IT, medical research, etc.), the simplification of its uses will allow most companies to rely on its power in their daily operations. The emerging legal framework at the European and American levels imposes a number of obligations on companies… which fall under compliance and thus add to the other subjects that the Compliance Officer must cover: personal data, due diligence, anti-corruption, embargo, etc.

Finally, even though Artificial Intelligence is only at the dawn of its development, we can already see the ethical challenges it raises, some of which are subject to regulatory framing, such as respect for human rights or non-discrimination. However, it is likely that other upcoming developments will pose ethical questions before the legislator takes them up. The vigilance of the Compliance Officer – especially if they are specifically a “Risks, Ethics, and Compliance Officer” – is therefore fundamental to enable the organization to anticipate the upcoming developments of Artificial Intelligence, identify the ethical challenges it will raise, and prepare measures that will allow the company to respond in a relevant way.
