Our Chief Strategy Officer, Julie Scherpenseel, and Customer Engineer, Caroline Adam, talk about Ethical AI and why you should care about ethics.
This interview addresses the following topics concerning Ethical AI:
■ What is Ethical AI?
■ What are the main challenges of Ethical AI?
■ Why is it important for organisations to think about Ethical AI?
■ How can organisations go about considering Ethical AI?
To conclude, Ethical AI is an important topic for any company considering AI or already working with it. It requires a commitment of time and resources to build the right processes and operationalize ethical risk management.
Julie: With the rapid advances in the AI field, there are also growing ethical concerns about the use of AI and its impact on society. Can you briefly explain what we mean by Ethical AI?
Caroline: We all know the term Ethics, defined as the moral principles that govern a person’s behaviour and actions. When we talk about Ethical AI, we refer to applying these ethical values to the design and application of Artificial Intelligence. This means not only asking ourselves the question ‘Can we build an AI solution?’, but also ‘How can we make sure to build it in a robust and ethical way?’, and sometimes even ‘Should we build this solution?’.
From these questions you can already see that we can actually look at the topic from two levels: on the higher level, we’re talking about the goal of the AI solution - is the AI being used for good, or is it used for a bad purpose? On the more tactical level, even within AI that intends to ‘do good’, we need to ask ourselves if the solution is built in an ethical way or whether it might unintentionally cause harm to someone.
Julie: So it’s both about using AI for ethical purposes, and making sure that we don’t cause unintentional harm. And what would then be the main challenges with Ethical AI?
Caroline: Ethics are inherently personal; there is often no ‘right answer’. This makes it challenging to define actionable rules and a common understanding. But of course, as you said, concerns are being raised about the impact AI has, thinking for example of deepfakes potentially influencing opinions, or solutions that might be biased. This means that there is a need to define common rules or regulations on how to deal with these concerns. At the same time, and here comes another challenge, we need to make sure that the regulation and ethical framework we define does not hinder innovation.
Julie: These are for sure challenges we will need to address as a society. If we look at the organisational level, why is it important for organisations to think about Ethical AI?
Caroline: There are two ways to look at it: the benefits of taking ethics into account, and the dangers of not doing so.
Let’s first look at the benefits of building what we call trustworthy AI solutions. We believe that AI has the potential to do a lot of good, from creating business value and economic growth to helping tackle major societal challenges such as climate change, mobility and health. But in order to realize the benefits of AI, the technology needs to be adopted, which can only happen if customers, employees and society trust the AI solutions. And that kind of trust can only be created by considering ethics when developing AI.
On the other side, there are also dangers in not thinking about the topic. More and more, organisations are realizing that failing to take the trustworthiness of AI into account can be a threat to the bottom line. Reputational damage, wasted resources and, in the future, non-compliance with upcoming regulation, to name just a few of those threats, can negatively affect the business value of AI.
Julie: So you mentioned before that there are some risks associated with AI solutions that can cause unintentional harm. Can you give an example of that?
Caroline: Think for example of an AI solution that predicts the most suitable job applicants for, let’s say, an engineering position. What we typically do in AI, or more specifically Machine Learning, is take a lot of historical data: the features of candidates who applied to similar positions in the past, and whether they were hired. Fed with this historical data, the algorithm might learn that male applicants were considered more often for the engineering position, and thus match male applicants to it in the future. Here, training on biased historical data introduces a bias in our predictions. Such a bias needs to be identified and corrected before the solution is put into production.
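To make this concrete, here is a minimal sketch of how such a gap in a model’s predictions could be detected. The data, group names and threshold below are all illustrative, not from the interview; the 0.8 cutoff follows the common “four-fifths” rule of thumb:

```python
# Minimal sketch: detecting a selection-rate gap in model predictions.
# In practice you would use your model's predictions on a held-out set,
# grouped by a protected attribute; the data here is synthetic.

def selection_rates(predictions):
    """Positive-prediction rate per group.

    predictions: list of (group, predicted_positive) pairs.
    """
    counts, positives = {}, {}
    for group, positive in predictions:
        counts[group] = counts.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(positive)
    return {g: positives[g] / counts[g] for g in counts}

def disparate_impact(rates, privileged, unprivileged):
    """Ratio of unprivileged to privileged selection rate.

    The "four-fifths rule" flags values below 0.8 for investigation.
    """
    return rates[unprivileged] / rates[privileged]

# Synthetic predictions mimicking a model trained on biased hiring data:
preds = [("male", True)] * 60 + [("male", False)] * 40 \
      + [("female", True)] * 30 + [("female", False)] * 70

rates = selection_rates(preds)
print(rates)  # {'male': 0.6, 'female': 0.3}
ratio = disparate_impact(rates, privileged="male", unprivileged="female")
print(ratio)  # 0.5 -> well below 0.8, so the model warrants correction
```

A check like this would run before the solution goes into production; a low ratio does not by itself prove unfairness, but it flags that the predictions reproduce the historical imbalance and need closer review.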
Of course bias and fairness is just one element, or dimension, of trustworthy AI. There are many more considerations, for example accountability - defining who is responsible if something happens; explainability - knowing what’s happening inside the model and explaining its outcomes; robustness - making sure we build robust and safe technical solutions; and privacy - dealing with data in an ethical way.
Julie: How can organisations go about considering Ethical AI?
Caroline: On a structural level, there are three main starting points. First, it is important for organisations to define and operationalize their Ethical AI principles, as well as their red lines. What do I mean by that? Defining the principles that you as a company want to adhere to when building AI solutions, and defining purposes or uses of AI that you will not pursue. To give you an example, one of our six principles is to advocate fairness and explainability, which we incorporate into every project. The same goes for red lines: at ML6, for example, one of our red lines is to not build solutions that influence political decisions.
Secondly, organisations need to work out sound processes for addressing Ethical AI, including a clear idea of how to identify, evaluate and mitigate risks systematically. This also means dedicating time and resources to thinking about Ethical AI. There also need to be clear processes for escalating concerns to leadership.
Thirdly, of course in order to identify the risks, there needs to be an understanding in the organisation of what the risks are. There is quite a lot of change management involved in this, to create awareness around the principles, the processes and expectations within the whole organisation. From the technical people implementing AI solutions, to Sales, to HR and Marketing - everyone needs to be aware of potential risks and feel confident to raise concerns. Of course, incentives for employees need to be aligned with the ethical actions you want to encourage.
Julie: So this is what organisations can do on a structural level. Now, if as a company I have a specific use case that I want to tackle with AI, how can I go about identifying and mitigating risks for such a specific case?
Caroline: On a more tactical level, so for example for a specific AI solution, organisations need to provide guidance to their employees and document risks and the decisions taken. It is important to have a framework in place along which the benefits and potential risks can be assessed, depending on the context of the specific AI solution and industry. This helps guide employees to think about the right dimensions and consider the right questions.
Then there are also more and more tools and methods being developed for looking into each of the components of a solution. Take explainability, for example: using Shapley values, or tools such as InterpretML, can help bring ‘light’ into a black-box model and help us understand which features the model considered important.
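Libraries like InterpretML implement efficient approximations of Shapley values; to show the underlying idea, here is a brute-force exact computation on a toy, entirely invented scoring model (the feature names and coefficients are illustrative assumptions, not part of any real product):

```python
from itertools import permutations

def shapley_values(model, x, baseline):
    """Exact Shapley values for a handful of features.

    For each ordering of features, switch features one by one from the
    baseline to their actual value and record each feature's marginal
    contribution to the model output; the Shapley value is the average
    over all orderings. Cost grows factorially, so illustration only.
    """
    n = len(x)
    phi = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        current = list(baseline)
        prev = model(current)
        for i in order:
            current[i] = x[i]
            now = model(current)
            phi[i] += now - prev
            prev = now
    return [p / len(orderings) for p in phi]

# Hypothetical candidate-scoring model (coefficients invented):
def score(features):
    experience, education, referrals = features
    return 2.0 * experience + 1.0 * education + 0.5 * experience * referrals

x = [3.0, 1.0, 2.0]         # the candidate whose score we want to explain
baseline = [0.0, 0.0, 0.0]  # a neutral reference candidate

print(shapley_values(score, x, baseline))  # -> [7.5, 1.0, 1.5]
# The values sum to score(x) - score(baseline) = 10.0, and the
# experience/referrals interaction (worth 3.0) is split equally.
```

The attributions tell us which features drove this particular score, which is exactly the kind of insight that makes a black-box model’s outcomes explainable to the people affected by them.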
Julie: If I understand correctly it’s also important from a regulation point of view to look at these risks and document them.
Caroline: Exactly, and of course there is also the EU regulation on Artificial Intelligence coming in the future, which will have an impact on companies that build AI solutions. It is already important, and will become even more so, especially for high-risk applications, to document that you have considered the risks, asked yourself the right questions and are taking action to mitigate potential risks.
Julie: Do you see many organizations already considering Ethical AI?
Caroline: We’re at the beginning of the journey, but especially in higher-risk sectors, typically those with a direct influence on people’s lives, such as HR, education or law enforcement, there is growing awareness around the topic. The bigger tech companies are quite active on the topic, developing their approach to trustworthy AI, defining their principles, and also creating tools to help develop trustworthy solutions (e.g. applications to assess bias in your models). With the upcoming EU regulation, the topic will become more important for many companies, especially those in one of the high-risk sectors as classified by the EU.
Julie: And at ML6, what do we do around the topic?
Caroline: As a leader in AI in Europe, we believe that it’s our responsibility to also help shape the environment. We’re closely following the recent developments and are actively engaged in the topic, and also want to help guide our clients through the upcoming changes. Internally we have our framework in place on how to assess the risks of solutions that we build, which is something we also do for our clients with our AI Ethical Risk Assessment Advisory offering. And finally, of course at ML6 we build ML solutions, and for all our projects we integrate our ethical AI principles, so that clients can rely on us for building trustworthy applications.
Julie: Is there anything else you would like to add?
Caroline: Maybe just that Ethical AI is an interesting and important topic, and for any company considering or working with AI it is important to dedicate some time and resources to the ethics side of AI.
Julie: Thank you Caroline. With this we hope you have a good understanding of what Ethical AI is and how you can start digging into the topic.
>> If you would like to learn more about this topic and our AI Risk Assessment Advisory offering, click here.