Blog

Thoughts on the latest in AI

This is where breakthrough ideas emerge and your inner innovator is awakened. Get inspired by the best of ML6's insights and the minds shaping the future of AI.



  • Why You Need a GenAI Gateway

    Generative AI is ubiquitous these days, and organizations are rapidly integrating GenAI into their business processes. However, building GenAI applications comes with its own set of specific challenges. The models are often large, meaning inference costs can quickly get out of hand, and model selection often requires balancing performance against cost and latency. Other common challenges include misuse of generative models and data leakage. Measures such as rate limiting, monitoring, and guardrailing in your GenAI applications can help overcome these problems, but implementing them for every individual project brings significant overhead for your engineering teams. It also becomes easy to lose track of global generative AI usage within your organization, and teams end up reinventing the wheel as they solve the same problems over and over again.

  • Unlocking the Power of AI Agents: When LLMs Can Do More Than Just Talk

    Remember J.A.R.V.I.S. from Iron Man? That intelligent assistant that seemed to have a solution for everything? While we’re not quite there yet, the rapid evolution of Large Language Models (LLMs) like GPT-4, Claude, and Gemini is bringing us closer than ever. Today’s LLMs are impressive. They can generate content, translate languages, and even write code. But let’s be real — they’re still pretty much glorified text processors.

  • Copilot: RAG Made Easy?

    In recent years, Large Language Models (LLMs) have revolutionised natural language processing by enabling machines to understand and generate human-like text with unprecedented accuracy and coherence. Their applications span across diverse fields such as chatbots and content creation, driving significant advancements in automation and AI-driven solutions. As a result, LLMs have become crucial tools in both academic research and commercial innovation, pushing the boundaries of what AI can achieve. Though, I’m sure you already knew this.

  • Implementing AI Governance: A Focus on Risk Management

    An Introduction to AI Governance: Since the launch of ChatGPT, AI tools have become much more widely used in many organisations. This technology opens up many new opportunities, such as automating customer service or improving content creation. However, it also introduces significant risks. Headlines in press articles frequently mention concerns such as deep fakes, ethical dilemmas around AI replacing artists, legal disputes over copyright between LLM providers and media companies, and privacy issues.

  • The Journey Towards Responsible AI at ML6

    At ML6, we strongly believe that AI has the potential to do good. However, we recognize that the technology also raises concerns about its impact on society. To unlock AI’s full potential, we need the trust of society, companies, and users in the solutions we develop. This trust can only be achieved by aligning the development of AI solutions with ethical values.

  • The landscape of LLM guardrails: intervention levels and techniques

    An introduction to LLM guardrails: The capacity of the latest Large Language Models (LLMs) to process and produce highly coherent, human-like text opens up great potential to exploit LLMs for a wide variety of applications, like content creation for blogs and marketing, customer service chatbots, education and e-learning, medical assistance, and legal support. Using LLM-based chatbots also has its risks. Recently, many incidents have been reported: chatbots agreeing to sell a car for $1, guaranteeing airplane ticket discounts, indulging in offensive language, providing false information, or assisting with unethical user requests like how to build Molotov cocktails. Especially when LLM-based applications are put into production for public use, guardrails to ensure safety and reliability become even more critical.

  • How LLMs access real-time data from the web

    Let’s beat this dead horse one last time: Large Language Models (like GPT, Claude, Gemini, …) have knowledge on a wide range of topics because they’ve been trained on vast amounts of internet data. But once their training is complete, their knowledge is fixed. They can’t go for a sneaky little toilet Google-search when they run out of arguments in the middle of a hypothetical discussion with their know-it-all brother-in-law. Or can they?
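The last teaser hints at the usual mechanism behind live web access: tool (or function) calling, where the model emits a structured request, the application executes it, and the result is fed back for a final answer. Here is a minimal sketch of that loop in Python; the model is a stub and `web_search` is a hypothetical tool, standing in for a real LLM API and search backend:

```python
def web_search(query: str) -> str:
    """Hypothetical tool: a real system would call a search API here."""
    return f"Top result for {query!r}: (live snippet would go here)"

TOOLS = {"web_search": web_search}

def stub_model(messages):
    """Stand-in for an LLM with tool-calling support. If no tool result
    is in the conversation yet, it requests a web search; otherwise it
    answers based on that result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "web_search",
                              "arguments": {"query": messages[-1]["content"]}}}
    return {"content": f"Based on the search: {messages[-1]['content']}"}

def run(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        reply = stub_model(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]  # model answered; loop ends
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[call["name"]](**call["arguments"])
        messages.append({"role": "tool", "content": result})

print(run("Who won the match last night?"))
```

The key design point is that the model never touches the network itself: it only asks, and the surrounding application decides which tools exist and runs them.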

Newsletter

Stay up to date