For organizations that want to implement AI in an ethical and secure way.
We believe the full potential of an AI solution can only be achieved if customers, employees and society trust the solution. This requires the consideration of ethical and legal values in the design and development of the solution.
To do so, ML6 has developed a framework based on the EU Ethics Guidelines for Trustworthy AI. We share our insights with others through our Ethical Risk Assessments.
During the engagement, we conduct selected interviews with business and technical experts on your side to get a clear picture of the goal and purpose of your AI solution and the technology used.
Our ethical AI experts then analyse the benefits and risks of your specific AI use case using our proprietary framework.
In a final report, we elaborate on the high-risk dimensions of your specific solution and recommend actions to mitigate or limit these risks.
We currently see three scenarios in which such an assessment is needed:
Are you unsure of the risks of your AI project and what to pay extra attention to during development? → Embed a risk assessment at the start of your project and incorporate considerations and mitigating actions from the get-go.
Are you getting close to deploying your solution for broader use and want to build trust in it? → Show your customers as well as your employees that you are considering ethical implications and proactively addressing risks.
Is your solution already in use, but specific concerns have been raised by customers or employees? → Let ML6 conduct an independent review and create the necessary documentation to help mitigate concerns and suggest ways to address risk areas.
Deliverables include:
- A final report, incl. executive summary and recommended actions
- A framework for future assessments
- An overview of regulations, guidelines and best practices