June 27, 2024

The Journey Towards Responsible AI at ML6

Contributors
Pauline Nissen
Ethical AI Lead

At ML6, we strongly believe that AI has the potential to do good. However, we recognize that the technology also raises concerns about its impact on society. To unlock AI’s full potential, we need the trust of society, companies and users in the solutions we develop. This trust can only be achieved by aligning the development of AI solutions with ethical values. 

This is why we created an Ethical AI Unit in 2021. Our goal, still as pertinent as it was then, is to maximize the value of AI solutions while identifying and mitigating potential ethical risks. 

As promoting transparency to build trust in AI solutions is one of our key Responsible AI principles, we also want to share how we have integrated Responsible AI at ML6.

Responsible AI, Trustworthy AI, Ethical AI, etc.

Many terms are used to discuss ethics in the world of AI, such as Responsible AI, Trustworthy AI, and Ethical AI. At ML6, we see Responsible AI as an umbrella term for ethical, legal, and secure AI.

Within ML6, the Ethical AI Unit addresses ethical questions and risks from the early stages of a project through delivery. The Security Unit focuses on developing and implementing security best practices across all our projects. Finally, our Legal Counsel handles legal concerns related to the development of AI solutions, covering every legal topic from GDPR to the upcoming EU AI Act.

From theory to practice: Integrating Responsible AI at ML6

Principles

As a leading AI company, we are committed to building AI solutions that are legally compliant, ethically responsible, and technically secure. To guide our research, projects, and business decisions, we have identified a set of 8 principles. These principles ensure that our work aligns with our commitment to Responsible AI.

Red Lines

Additionally, we have established red lines for evaluating projects; we assess the project itself, not the company proposing it. ML6 will not design or deploy AI for projects whose principal purpose conflicts with our ethical standards. These red lines help us maintain integrity and focus on projects that align with our values.

Identifying Risks and Mitigation Methods Early

From the early stages of every project, starting in the sales process, we actively identify potential risks, including privacy, security, and bias. We also evaluate whether the use case falls under the upcoming EU AI Act. This proactive approach enables us to inform our clients early and provide guidance on effectively mitigating these risks.

For AI projects with significant ethical implications, we have developed our framework to assess and prioritize potential risks, along with guidelines on mitigation methods.

For more information on our framework and approach, visit our Ethical AI page.

Ethical Sounding Board

Our Ethical Sounding Board provides a forum for discussing ethically sensitive projects that emerge during the sales process. Ethical questions rarely have one right answer, so it is important to consider diverse perspectives. The board is designed with diversity in mind, including members from different units at ML6, ranging from Engineering to Sales to HR, and representing different geographical locations, backgrounds, and genders.

For more information on our ethical sounding board, visit our Ethical AI page.

Internal Awareness

Integrating a Responsible AI mindset and best practices is a gradual process. We undertake several initiatives to raise awareness among our employees. From onboarding new hires to conducting monthly quizzes on new Responsible AI concepts, as well as creating content and sharing knowledge, we ensure our team understands and trusts the processes we have established.

External Awareness

We also actively participate in the public debate about Responsible AI, including discussions on the upcoming AI Act. We share our knowledge through webinars, blog posts, and public events. For example, last month, we participated in the AI event of Digital Flanders to discuss the implications of the AI Act in the public sector and share our insights.

A final word

Responsible AI should be an integral part of every AI solution. At ML6, we strive to develop solutions that are legally compliant, ethically responsible, and technically secure. Our continuous efforts to integrate Responsible AI into our processes highlight our commitment, and we are dedicated to continuously sharing our knowledge about Responsible AI.

If you are interested in AI Governance and want to integrate Responsible AI principles into your organization’s processes, reach out to us today! Together, we can ensure your AI initiatives are ethical, legal, secure, and valuable for your organization.
