Blog

Thoughts on the latest in AI


This is where breakthrough ideas emerge and your inner innovator is awakened. Get inspired by the best of ML6's insights and the minds shaping the future of AI.



  • Where are we with the AI Act and what does this mean for your AI governance and compliance efforts?

    EU regulators are considering a "stop-the-clock" on the implementation of the European Union AI Act. Does this mean you should halt your AI governance efforts? TL;DR: the EU AI Act's obligations for general-purpose AI models and high-risk AI systems are impending, yet legal uncertainty is growing because crucial compliance tools are delayed. Despite the "stop-the-clock" talks, halting AI governance is ill-advised: the risks are here now, future compliance is vital, and ethical AI simply leads to better products. Proactive steps in AI literacy, AI governance, and keeping track of relevant guidelines and standards are crucial for businesses navigating the evolving landscape.

  • Implementing AI Governance: A Focus on Risk Management

    An introduction to AI governance: since the launch of ChatGPT, AI tools have become much more widely used in many organisations. This technology opens up many new opportunities, such as automating customer service or improving content creation. However, it also introduces significant risks. Press headlines frequently mention concerns such as deepfakes, ethical dilemmas around AI replacing artists, legal disputes over copyright between LLM providers and media companies, and privacy issues.

  • The Journey Towards Responsible AI at ML6

    At ML6, we strongly believe that AI has the potential to do good. However, we recognize that the technology also raises concerns about its impact on society. To unlock AI’s full potential, we need the trust of society, companies and users in the solutions we develop. This trust can only be achieved by aligning the development of AI solutions with ethical values.

  • The landscape of LLM guardrails: intervention levels and techniques

    An introduction to LLM guardrails: the capacity of the latest Large Language Models (LLMs) to process and produce highly coherent, human-like text opens up a large potential for a wide variety of applications, like content creation for blogs and marketing, customer service chatbots, education and e-learning, medical assistance, and legal support. Using LLM-based chatbots also has its risks: many incidents have recently been reported, like chatbots agreeing to sell a car for $1, guaranteeing airplane ticket discounts, indulging in offensive language, providing false information, or assisting with unethical user requests such as how to build Molotov cocktails. Especially when LLM-based applications go into production for public use, guardrails to ensure safety and reliability become even more critical.

  • Why OpenAI's API models can't be forced to behave deterministically

    ChatGPT, DALL-E, ... can't be forced to behave deterministically. A donkey never hits its head on the same stone twice, and that seems to be one thing ChatGPT and donkeys have in common: ChatGPT is no fool, it doesn't always make the same mistake. Instead, it makes a different mistake each time. Sounds good? Now try forcing it to always make the same mistake. Bummer: that's just not possible.
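    The teaser above rests on a low-level fact: floating-point addition is not associative, so when logits are computed with parallel reductions whose order can vary between runs, two nearly tied tokens may swap places even at temperature 0. A minimal Python sketch of that underlying effect (an illustration only, not OpenAI's actual inference code):

    ```python
    # Floating-point addition is not associative: summing the same numbers
    # in a different order can yield slightly different results.
    a, b, c = 0.1, 0.2, 0.3
    left = (a + b) + c    # 0.6000000000000001
    right = a + (b + c)   # 0.6
    print(left == right)  # False

    # If two tokens' logits are nearly tied, that tiny discrepancy can
    # change which token wins greedy (temperature-0) decoding.
    # Hypothetical logits where token 0's score depends on summation order:
    logits_run1 = [left, 0.6000000000000001]   # tie -> argmax picks index 0
    logits_run2 = [right, 0.6000000000000001]  # token 1 now strictly larger
    best1 = max(range(2), key=lambda i: logits_run1[i])
    best2 = max(range(2), key=lambda i: logits_run2[i])
    print(best1, best2)  # 0 1 — same "model", different greedy choice
    ```

    The same arithmetic, reduced in a different order, selects a different token — which is why fixing temperature and seed alone cannot guarantee identical outputs.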
