Based in Dubai, United Arab Emirates, The Legal Compass is a blog by Erica Werneman Root. Her posts focus on AI and other emerging technology.

The EU publishes draft AI Ethics Guidelines

All blog posts are my personal musings and opinions. They are not intended as legal advice.


On 18 December 2018, the European Commission’s high-level expert group on artificial intelligence (the “AI HLEG”) published their draft ethics guidelines setting out the European position on trustworthy AI (the “Guidelines”). The Guidelines are currently open for stakeholder consultation, and a final version will be published in March 2019. Click here for the full report.

The Guidelines recognise the transformative and beneficial potential of AI but also caution against the risks of improper use.

Recently I have seen a steady stream of official documents from watch-groups and government organisations listing core values and principles for AI, but in my view they have lacked both depth and substance, taking a shallow approach to some of the most important theoretical questions of our time. They have also frequently been devoid of any practical, real-world application. Not so with the new Guidelines. This document stresses the fundamental principles that should govern the ethical use of AI and, crucially, also provides a roadmap for implementation. At this stage the Guidelines are not mandatory, but companies can opt to implement the principles on a voluntary basis. Doing so would, in my opinion, be a smart move.

To keep this post a manageable size, I have only provided a very high-level overview of the new Guidelines below. The Guidelines are a significant step, and I intend to cover the salient features in more depth in coming posts.

Speedread the Guidelines

The Guidelines set out the European approach to trustworthy AI by providing a roadmap to maximise the benefits of AI while minimising its risks. The approach adopted by the AI HLEG includes two major components:

  1. Ethical Purpose - Respect for fundamental rights, applicable regulation, and core principles and values; and

  2. Technical Robustness - AI must be technically robust and reliable since, even with good intentions, a lack of technological mastery could cause unintended harm.

Each chapter of the Guidelines sets out guidance on how the AI HLEG considers ethical and trustworthy AI can be achieved. I have briefly summarised the content of the various chapters below, but I would encourage everyone to read the Guidelines for themselves. It is not a terribly long document, and it will offer any lawyer or law student valuable insight into where new laws and regulations may be heading in the not-too-distant future.

Chapter 1 - Ensuring Ethical Purpose

AI must be human-centric and developed, deployed and used in accordance with an ethical purpose that is grounded in the “fundamental rights, societal values and ethical principles of beneficence (do good) and nonmaleficence (do no harm), autonomy of humans, justice and explicability”.

Developers and companies must proactively evaluate possible effects of AI on human beings and the common good (with particular attention to circumstances where there is unequal power of information such as in employment relationships, and also in respect of vulnerable groups, such as children or minorities).

Finally, the Guidelines implore everyone involved in the development of AI to consider that there may be negative impacts of AI that are not yet fully understood. Everyone should be vigilant for areas of concern.

Chapter 2 - Guidance for realising trustworthy AI

The Guidelines list the key requirements of trustworthy AI, which should be implemented from the earliest possible stage. Namely:

  • Accountability

  • Data Governance

  • Design for all

  • Governance of AI autonomy (human oversight)

  • Non-discrimination

  • Respect for human autonomy

  • Respect for privacy

  • Robustness

  • Safety

  • Transparency

It then goes on to consider the technical and non-technical methods that should be used to implement these requirements in any AI system at the early stages of development, and how information should then be communicated to stakeholders (customers, employees etc.). One of the key areas of discussion in the Guidelines is the transparency, or auditability, of AI systems. At present it is sometimes difficult to explain exactly how an AI system came to a decision or conclusion on a particular matter, yet in many circumstances those decisions have real-world consequences for people. There therefore needs to be a process in place to facilitate what would essentially become an audit of the AI as a whole, of the training data used to teach it, or perhaps of segments of the algorithms themselves. This has been a topic of debate amongst computer scientists and academics for some time (it is sometimes referred to as ‘Algorithm Accountability’).
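The Guidelines do not prescribe any particular technical mechanism for auditability, but the basic idea is easy to sketch. The following is a minimal, hypothetical illustration (all names and fields are my own invention, not from the Guidelines): each automated decision is recorded alongside the model version and inputs that produced it, so the record can be inspected after the fact.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """One auditable entry per automated decision (hypothetical schema)."""
    model_version: str  # which model produced the decision
    inputs: dict        # the features the model saw
    output: str         # the decision itself
    timestamp: float    # when the decision was made

def log_decision(record: DecisionRecord, sink: list) -> None:
    """Serialise the record to JSON and append it to an audit sink."""
    sink.append(json.dumps(asdict(record)))

# Usage: record a hypothetical loan decision so it can be reviewed later.
audit_log: list = []
log_decision(DecisionRecord(
    model_version="credit-model-v2",
    inputs={"income": 42000, "tenure_years": 3},
    output="approved",
    timestamp=time.time(),
), audit_log)

entry = json.loads(audit_log[0])
print(entry["output"])  # → approved
```

A real audit trail would of course need tamper-evident storage and careful handling of personal data, but even this simple pattern shows the kind of evidence an auditor would want: what the system saw, what it decided, and which version of the system decided it.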

Finally, for companies dealing with AI, there should be evidence that important issues were raised and discussed, as well as documentary evidence of any important decisions that were made. For instance, there may be tension between different ethical objectives such as data privacy and a desire to identify and correct biases in the system. Stakeholders should be mindful of these tensions and provide evidence of how they were ultimately resolved within the business.

Chapter 3 - Assessing trustworthy AI

The final chapter will arguably be of the most practical use when discussing these issues with developers and tech companies as it sets out an assessment list of issues that should be considered when developing AI. It is not an exhaustive list but it does contain a very good steer on the key issues and provides a framework for the kind of discussions that tech companies should be having internally (and perhaps externally with advisors).

Why should lawyers care?

This is a good question and there are many good answers. The development and use of AI will have very significant implications for humanity on a grand scale, but nobody knows how close we are to general AI (i.e. the kind of AI that can complete any task, not just specific tasks such as playing chess or Go, as well as or better than a human being). To achieve the best possible outcome, we need to start discussing where we want the future of AI to take us, and that includes putting frameworks and laws in place to guard against negative outcomes.

For lawyers who are more practically minded, the Guidelines provide a very good indicator of where laws and regulations are likely to be implemented and what the key areas of interest will be. With this insight you can start having sensible conversations with clients at an early stage and master the topics as they develop.

For law students, the Guidelines provide some very interesting topics for further research and essay writing. They also offer a helpful hint at where the profession might be heading: on several occasions the authors point to a developing need for audits and external experts. One reference that caught my eye was to ‘ethics experts’. If I were starting my legal studies again, I would focus on data science and ethics as two areas likely to be in high demand in a few years.

What’s next?

The final version of the Guidelines is due in March 2019 and it will be interesting to see if there are any major amendments following the stakeholder consultation.

Share your thoughts

The Guidelines were published as part of a stakeholder consultation. If you have comments on the draft, I would encourage you to get involved and share those thoughts through the formal process. Here is a link to the EU Commission’s consultation page. The consultation is open until 18 January 2019.

Want to learn more about AI?

AI is a fascinating topic, not just for lawyers but for everyone. Unfortunately, and largely as a result of the technical complexities, the general public has long deferred ethical questions and debates to academics and tech companies. This means that people, in general, have a limited understanding of the issues that are being debated and even less input in these discussions. Thankfully, some very smart individuals are working to change this. If you are interested in AI and some of the big questions that the Guidelines are seeking to address then I would recommend getting one, or both, of the following:

Life 3.0 by Max Tegmark

Superintelligence by Nick Bostrom

This is my first ever blog post and I’m working on book reviews for both of the above. Check back soon for links.

Top tip: my favourite way to get through lots of books is to listen to them. Both of the above books are available on Audible.

Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark
