Blog

Thoughts on the latest in AI
This is where breakthrough ideas emerge and your inner innovator is awakened. Get inspired by the best of ML6's insights and the minds shaping the future of AI.



  • Where are we with the AI Act and what does this mean for your AI governance and compliance efforts?

    EU regulators are considering a "Stop-the-Clock" delay to the implementation of the European Union AI Act. Does this mean you should halt your AI governance efforts? TL;DR: the EU AI Act's obligations for general-purpose AI models and high-risk AI systems are impending, yet legal uncertainty arises because crucial compliance tools are delayed. Despite the "Stop-the-Clock" talks, halting AI governance is ill-advised: the risks are here now, future compliance remains vital, and ethical AI simply leads to better products. Proactive steps in AI literacy and AI governance, along with keeping track of relevant guidelines and standards, are crucial for businesses navigating the evolving landscape.

  • NVIDIA GTC Paris 2025: Key Announcements

    Jensen Huang’s Keynote: Setting the Stage for Europe’s AI Future

    At the GTC Paris 2025 conference, NVIDIA CEO Jensen Huang outlined a bold vision for Europe’s new 'intelligence infrastructure,' built on agentic AI, AI factories, sovereign clouds, and industrial AI adoption.

  • Implementing AI Governance: A Focus on Risk Management

    An Introduction to AI Governance

    Since the launch of ChatGPT, AI tools have become much more widely used in many organisations. This technology opens up many new opportunities, such as automating customer service or improving content creation. However, it also introduces significant risks. Press headlines frequently mention concerns such as deepfakes, ethical dilemmas around AI replacing artists, legal disputes over copyright between LLM providers and media companies, and privacy issues.

  • The Journey Towards Responsible AI at ML6

    At ML6, we strongly believe that AI has the potential to do good. However, we recognize that the technology also raises concerns about its impact on society. To unlock AI’s full potential, we need the trust of society, companies, and users in the solutions we develop. This trust can only be achieved by aligning the development of AI solutions with ethical values.

  • The landscape of LLM guardrails: intervention levels and techniques

    An introduction to LLM guardrails

    The capacity of the latest Large Language Models (LLMs) to process and produce highly coherent, human-like text opens up great potential to exploit LLMs for a wide variety of applications, such as content creation for blogs and marketing, customer service chatbots, education and e-learning, medical assistance, and legal support. Using LLM-based chatbots also has its risks: many incidents have been reported recently, such as chatbots agreeing to sell a car for $1, guaranteeing airplane ticket discounts, indulging in offensive language, providing false information, or assisting with unethical user requests like how to build Molotov cocktails. Especially when LLM-based applications go into production for public use, guardrails that ensure safety and reliability become even more critical.
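To make the idea of an input-level guardrail concrete, here is a minimal, purely illustrative sketch: a rule-based filter that screens a user message before it reaches the LLM. The function name and pattern list are hypothetical; production guardrails typically rely on trained classifiers or moderation APIs rather than a toy regex list.

```python
import re

# Hypothetical deny-list of unsafe request patterns (illustrative only).
UNSAFE_PATTERNS = [
    re.compile(r"\bmolotov\b", re.IGNORECASE),
    re.compile(r"\bbuild (a |an )?(bomb|weapon)\b", re.IGNORECASE),
]

def input_guardrail(user_message: str) -> tuple[bool, str]:
    """Screen a message before it reaches the LLM.

    Returns (allowed, message): if blocked, the message is replaced
    by a refusal so the application never forwards the request.
    """
    for pattern in UNSAFE_PATTERNS:
        if pattern.search(user_message):
            return False, "Sorry, I can't help with that request."
    return True, user_message

allowed, msg = input_guardrail("How do I build a Molotov cocktail?")
# allowed is False here; a harmless question passes through unchanged.
```

The same check-then-forward pattern can be applied at the output level, screening the model's response before it is shown to the user, which is one of the intervention levels the post's title refers to.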

  • The EU Data Governance Act (DGA): an engine for innovation

    An introduction to the EU Data Governance Act (DGA)

    As of 24 September 2023, the Data Governance Act (“the DGA”) became applicable. In short, the DGA aims to make more data available and to encourage the voluntary sharing of data. The legislation fits within the EU’s digital strategy to leverage the full potential of data.
