Building trustworthy AI solutions

As a leader in AI, we work hard to ensure that AI is designed, developed and deployed in the service of the public good. We take our responsibility to build secure, reliable and sustainable technology seriously and have integrated ethics into our way of working.

Developing trustworthy AI solutions

As the development of AI creates new opportunities to improve the lives of people around the world, it is also raising new questions about the best way to build fairness, interpretability, privacy and security into these systems. That’s why we focus on doing the right thing.

We engage with governments, academia and businesses to ensure that deployment respects laws and regulations and is grounded in human rights. We research the design and technical aspects of our systems to ensure that we build explainable, transparent and reliable solutions. We choose AI projects that lead to positive social impact and have implemented responsible and sustainable business practices internally. We help organizations adopt trustworthy AI and offer our support where needed.

Our principles

We have identified a set of principles, anchored in fundamental rights, to guide our work moving forward. These are concrete standards that will govern our research, services and business decisions.

We will assess AI applications in view of the following objectives.

1. Benefit society

AI technology can and should bring benefit to society. As we consider the potential applications and uses of AI technologies, we will only proceed when we believe that the likely benefits substantially exceed the foreseeable risks and downsides. The environmental and societal impact of a project is always carefully considered.
2. Ensure technical robustness

We mitigate the risk of unintended consequences of the applications we build and the AI technologies we develop by ensuring they are resilient, secure, safe and reliable. We aspire to high standards of scientific excellence in doing so.
3. Accountability

We hold ourselves accountable and work with our clients to put in place the mechanisms needed to ensure responsibility and accountability for the applications we build.
4. Respect human autonomy

Together with our clients, we seek to empower human beings. We do so by providing detailed explanations of our technologies, appropriate opportunities for feedback, and overarching control. In addition, we ensure that proper oversight mechanisms are in place.
5. Advocate fairness and explicability

In the design of all our applications, we proactively advocate and foster diversity, seek to avoid unfair bias, and consider explicability and transparency.
6. Protect personal information and security

Our data security and governance protocols and policies ensure that privacy and the quality, protection and integrity of data are central to everything we do.

7. Open collaboration

We believe that technology should only be applied in a trustworthy way, and we therefore believe in the principle of open collaboration on this topic. We are happy to collaborate with other institutions and organizations, and to exchange our knowledge, insights and practical experience in an open-book manner.

Our commitment

In addition to the above objectives, ML6 pledges not to design or deploy AI in projects whose principal purpose is to:

1. Create or improve weapons

E.g. optimizing nuclear weapons
2. Cause any harm to humans

E.g. create a robot that physically attacks humans
3. Contribute to child labour or slavery

E.g. create a robot that hurts people when they are not working hard enough
4. Influence political decisions

E.g. influencing votes

E.g. influencing politicians to make certain decisions
5. Discriminate

E.g. an algorithm that detects people's skin color and warns store employees when a person with a certain skin color enters their store

In the context of dual-use technologies, we commit to guarding against any use of our technology to apply force.

Commitment to the environment

We aim to minimise the environmental impact of our services by:

Applying research into AI solutions that may help to address global environmental concerns (e.g. traffic optimisation, smart agriculture, weather and climate predictions, ...)

Adopting ways of working and development principles that reduce the footprint of our solutions during development, deployment and use as much as possible (e.g. using pre-trained models)

Developing powerful technical solutions for our clients that help optimise compute resources

Using green cloud providers wherever possible

Future-proofing Responsible AI

The methods and possible applications of AI are developing continuously and at massive scale, and society's concept of ethics and the regulation of artificial intelligence are still being shaped, so we acknowledge that this area is dynamic and evolving. We are committed to staying up to date with the latest developments in the technological, legal and ethical fields, to integrating new findings and best practices within ML6 as appropriate, and to adapting as we learn more over time. Discover the work our Ethical AI working group is doing here.


Implementing our AI principles

We’re constantly developing initiatives, processes, and governance structures to enact our AI principles. We’ll share our progress as we go.

Open collaboration & engagements

In collaboration with governments, civil society, academia, and businesses, we’re working to ensure trustworthy AI by:

Participating in and supporting the policy, legal, and ethical research community, both at local and international levels

Participating in domestic and international bodies and discussion groups

Accompanying governments as they create incentives for the development of AI

To these ends, ML6 and its agents participate in the following: