News & Views

On Regulation of AI Systems in Europe

1 February 2019

Authors: Jesper Nevalainen and Anton Pirinen


Artificial intelligence (“AI”) has become an area of strategic importance and a key driver of economic development. As a result, in order to ensure the competitiveness of the European Union (“EU”) and to shape AI’s development and use in alignment with European values, the European Commission (“Commission”) has recently shifted its focus to the regulation of AI.

The first results of this effort were seen on 18 December 2018, when the EU’s High-Level Expert Group on Artificial Intelligence (“AI HLEG”) proposed draft AI ethics guidelines (“Guidelines”) to the Commission. The AI HLEG is expected to follow up with policy recommendations in May 2019, which will feed into the Commission’s ongoing AI policy development process. The Guidelines introduce the concept of trustworthy AI, which consists of two components: (1) the development, deployment, and use of AI should comply with fundamental rights and applicable regulation and respect core principles and values, ensuring that the AI has an “ethical purpose”, and (2) the AI should be technically robust and reliable. To comply with these two components, an AI system should satisfy the requirements of trustworthy AI laid down in the Guidelines (“Requirements”).


Requirements of Trustworthy AI

The Requirements place particular emphasis on data governance and transparency in the context of machine learning, both of which are closely linked to the fundamental rights of privacy, individual freedom, equality, and non-discrimination.

Data Governance

Machine learning algorithms identify patterns or regularities in data and will therefore also reproduce the patterns present in biased or incomplete data sets. As such, discrimination in the context of AI systems can often occur unintentionally due to problems with data. The quality of data is therefore of paramount importance in ensuring that an AI system safeguards fundamental rights. According to the Guidelines, this may be supported, for example, by:

    • Ensuring the inclusion of minorities in data sets, as an incomplete data set may not reflect the target group that it is intended to represent.


    • Not using data against the individuals who provided it.


    • Dividing data into separate training and testing sets, and anonymising the data where appropriate.


    • Particularly in the context of self-learning systems, ensuring the integrity of data gathering, as feeding malicious data into an AI system may alter its behaviour.


    • Keeping a record of the data that is fed into AI systems.
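To make these measures concrete for technically minded readers, the following Python fragment is our own illustration (not part of the Guidelines) of two of the listed practices: checking how well different groups are represented in a data set, and dividing data into separate training and testing sets. The field name "group" and the record layout are hypothetical.

```python
import random
from collections import Counter

def representation_report(records, group_key):
    """Share of each group in the data set, to surface under-representation."""
    counts = Counter(r[group_key] for r in records)
    total = len(records)
    return {group: n / total for group, n in counts.items()}

def split_train_test(records, test_fraction=0.2, seed=42):
    """Divide the data into disjoint training and testing sets."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)  # fixed seed keeps the split reproducible
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

# Illustrative records with a hypothetical demographic attribute.
data = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
report = representation_report(data, "group")   # {"A": 0.8, "B": 0.2}
train, test = split_train_test(data)            # 80 and 20 records
```

A report showing that one group makes up only a small fraction of the data would prompt the kind of inclusion measures the Guidelines describe.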


Transparency

An AI system should not be used to restrict the freedom of an individual. While this may sound far-fetched at first, it is particularly relevant today in the context of recommendation systems. Human beings should remain free to make decisions for themselves, and an individual user of an AI system should therefore not be subject to deception or manipulation. Respect for human autonomy can be protected by, for example, the following measures set out in the Guidelines:

    • Providing individuals with information in a clear and proactive manner about the use of an AI system, including its capabilities and limitations, while also taking into account transparency obligations, such as those set forth in the GDPR.


    • Ensuring the auditability of AI systems.


    • Designing AI systems to enable tracing individual decisions to their inputs, such as the data and algorithms used.
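The auditability and traceability measures above can be sketched in code. The following Python fragment is a minimal illustration of our own (not taken from the Guidelines) of an append-only log that ties each individual decision to the inputs and model version behind it; the record fields and the example values ("credit-model-1.2", the applicant data) are hypothetical.

```python
import time

class DecisionLog:
    """Append-only record linking each AI decision to its inputs,
    so that individual decisions can later be audited and traced."""

    def __init__(self):
        self.entries = []

    def record(self, inputs, model_version, decision):
        entry = {
            "timestamp": time.time(),        # when the decision was made
            "model_version": model_version,  # which algorithm version decided
            "inputs": inputs,                # the data the decision was based on
            "decision": decision,
        }
        self.entries.append(entry)
        return entry

    def trace(self, index):
        """Return the inputs behind an individual decision."""
        return self.entries[index]["inputs"]

# Hypothetical usage: a credit decision and its audit trail.
log = DecisionLog()
log.record({"age": 34, "income": 52000}, "credit-model-1.2", "approved")
traced = log.trace(0)  # {"age": 34, "income": 52000}
```

In practice such a log would need to be tamper-evident and retained in line with applicable data protection rules, but even this simple structure shows how a decision can be traced back to its inputs.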

It should also be noted that fundamental tensions may exist between different requirements. For example, the processing of personal data to identify and correct bias might conflict with the provisions of the GDPR.



In recent decades, technology has moved faster than the law. But, as we have seen with the GDPR, the legislator eventually catches up. We would therefore encourage developers and users of AI systems to consider the Requirements, especially when the processing of personal data is involved.

Furthermore, we would like to challenge the AI HLEG and the Commission to evaluate the optimal instrument for regulating AI. Considering that other laws, such as the GDPR, already cover many aspects of the Requirements, the most suitable way to make AI trustworthy and to protect European values globally could be to include the ethical purpose and technical robustness of AI in the European Convention on Human Rights. In our opinion, self-regulation of AI systems, for example through codes of conduct under Article 40 of the GDPR, might also be worth considering.