The Artificial Intelligence Act (AIA)

The Artificial Intelligence Act (AIA) is a draft regulation on artificial intelligence (AI) proposed by the EU Commission as part of the EU Digital Strategy.

The draft contains proposals for regulating the use of AI in research and the economy. The AIA applies to providers who place AI systems on the market or put them into service in the Union, as well as to users of AI systems located in the Union, including private individuals. The rules also apply to actors based in a third country as soon as their AI systems are used in the Union. The regulation defines AI broadly, covering machine learning, deep learning, logic- and knowledge-based approaches, and statistical methods.

The AIA classifies AI applications into risk groups based on the potential risk posed by the application area: unacceptable risk, high risk, and low or minimal risk. The higher the risk, the stricter the regulations. AI applications in the unacceptable-risk group are prohibited, including systems that manipulate human behavior or enable real-time biometric identification of individuals in publicly accessible spaces.

High-risk AI systems are subject to specific obligations, such as establishing a risk management system, complying with data governance procedures, and ensuring human oversight. AI systems that pose only a limited risk of manipulation are subject to lighter transparency and information obligations.

The EU clearly signals that it is aware of the danger that AI applications can be misused to influence and monitor people. The structure of the draft provides a sound basis for further refinement. IT companies and developers are advised to familiarize themselves with the upcoming regulations at an early stage and to keep an eye on legislative developments.