
As the development of AI creates new opportunities to improve the lives of people across the world, it also raises new questions about the best way to build fairness, interpretability, privacy and security into these systems. As a leader in AI, we take our responsibility to build safe, reliable and sustainable technology seriously.
We work hard to ensure that AI is designed, developed and deployed in the service of the public good.
We engage with governments, academia and businesses to ensure that deployment respects laws and regulations and is grounded in human rights.
We research design and technical aspects to ensure that we build explainable, transparent and reliable solutions.
We choose AI projects that lead to positive social impact and have implemented responsible and sustainable business practices internally.
We help organizations adopt Trustworthy AI and offer our support where needed. Responsibility is intrinsic to who we are and what we do.
We have identified a set of principles, anchored in fundamental rights,
to guide our work moving forward. These are concrete standards that will govern our research, services and business decisions.
AI technology can and should bring benefit to society. As we consider the potential applications and uses of AI technologies, we will only proceed when we believe that the likely benefits substantially exceed the foreseeable risks and downsides. The environmental and societal impact of a project is always carefully considered.
We mitigate the risk of unintended consequences of the applications we build and AI technologies we develop by ensuring they are resilient, secure, safe and reliable. We aspire to high standards of scientific excellence in doing so.
We hold ourselves accountable for working with our clients to put in place the mechanisms necessary to ensure responsibility and accountability for the applications we build.
Together with our clients, we seek to empower human beings. We do that by providing detailed explanations of our technologies, appropriate opportunities for feedback and overarching control. In addition, we ensure that proper oversight mechanisms are in place.
In the design of all our applications, we proactively advocate and foster diversity, seek to avoid unfair bias and consider explicability and transparency.
Our data security and governance protocols and policies ensure that privacy and the quality, protection and integrity of data are central to everything we do.
As the methods and possible applications of AI continue to develop rapidly, and as society's concept of ethics and the regulation of artificial intelligence are still being shaped, we acknowledge that this area is dynamic and evolving. We are committed to staying up to date with the latest developments in the technological, legal and ethical fields, to integrating new findings and best practices within ML6 as appropriate, and to adapting as we learn more over time.
We believe that technology should only be applied in a trustworthy way and therefore embrace the principle of "open collaboration" on this topic. We are happy to collaborate with other institutions and organizations and to share our knowledge, insights and practical experience in an open-book manner. Contact ethics@ml6.eu for more information.
In addition to the above objectives, ML6 pledges to not design or deploy AI in projects whose principal purpose is to:
Create or improve weapons
Cause any harm to humans
Contribute to child labour or slavery
Influence political decisions
Discriminate
In the context of dual-use technologies, we commit to guarding against the use of our technology in the application of force.
We’re constantly developing initiatives, processes, and governance structures to enact our AI principles. We’ll share our progress as we go.
In collaboration with governments, civil society, academia, and businesses, we’re working to ensure trustworthy AI by:
Participating in and supporting the policy, legal, and ethical research community, both at local and international levels
Participating in domestic and international bodies and discussion groups
Supporting governments as they create incentives for the development of AI
To these ends, ML6 and its agents participate in the following: