Enterprise Superintelligence


ML6 presents a world's first: Unum, the Enterprise Superintelligence Platform that transforms your enterprise into a living, learning entity and gives you the competitive edge you've always wanted.

ML6 is Europe’s AI Engineering Powerhouse.

ML6 is a frontier AI engineering company, constantly pushing the boundaries of what's possible with AI. We partner with bold leaders to turn cutting-edge AI into lasting business impact. With over a decade of proven expertise, we deliver reliable, secure AI that reshapes business models. From strategy to delivery, we don't just follow the hype: we build the future. And we are doing exactly that with Unum, the world's first Enterprise Superintelligence.
ML6 is active in Amsterdam, Berlin, Ghent, and Munich.


P&G
Pfizer
Wienerberger
Walmart
Johnson & Johnson
ING
ASML
Orange
GSK
BASF
MediaMarkt
Sappi
Melexis
Holcim
Syngenta
Swiss Krono
SD Worx
Unilin
FUNKE

TAKE THE LEAD.

You're missing out on growth. Let's fix that, together.

THE WORLD'S FIRST

Enterprise
Superintelligence

Unum will transform your enterprise. It is not just another tool. It is a living system. It flows through your organisation—learning, adapting, and scaling. Every interaction makes it sharper. Every approval makes it stronger. 

SERVICES

AI Engineering

We partner with your technical and innovation teams to navigate complex decisions around AI, data, and cloud architecture—helping you build robust, future-proof solutions that deliver real value from day one.

SERVICES

AI Advisory

Whether you're a C-level executive, innovation lead, or business strategist, we help you confidently shape your AI vision. Transformation can feel overwhelming. That’s why we go beyond technology — guiding adoption, training, and communication so your people and processes truly evolve.


What’s Possible
in Your Industry?

For over a decade, our expert AI engineering teams have been developing the world's most innovative AI solutions from the ground up. Together with industry leaders, we’ve turned bold ideas into some of the most exciting real-world success stories.


Superintelligence
Use Cases

Superintelligence isn’t waiting for your next prompt. It sees the problem, picks a plan, executes, and follows up—like a colleague who not only volunteers for extra work but finishes it before you’ve had your morning coffee.


Our Insights keep you ahead of the pack. 

 

 

The ML6 blog is where breakthrough ideas emerge and your inner innovator is awakened. Get inspired by the best of ML6's insights and the minds shaping the future of AI.

  • The Anatomy of a Lovable App, and its boundaries in enterprise software
    Blog

    The Anatomy of a Lovable App, and its boundaries in enterprise software

    Executive Summary

    Lovable makes it possible to go from idea to working application in minutes by generating a full frontend and wiring it directly to a Supabase backend. This speed enables rapid prototyping and early validation, but also introduces architectural trade-offs once these applications move beyond experimentation.

    This article examines what Lovable generates under the hood when used end to end. Using a simple example application (“Plant Pal”), it breaks down the structure of a Lovable-built system, how the frontend and backend components fit together, and how execution, security, and authorization are handled in practice. It also outlines the consequences of these design choices in areas that matter for longer-lived systems, such as maintainability, network isolation, environment management, observability, and cost transparency. These are presented as trade-offs rather than shortcomings, reflecting an architecture optimized for speed and LLM-driven iteration.

    This is Part 1 of a two-part technical series. Part 2 builds on this analysis by migrating the same application to cloud-native infrastructure (Azure, AWS, and GCP) and sharing guidance on structuring Lovable projects to support future evolution.

    The Lovable experience

    At ML6, we love our offices. We have everything a trendy scale-up needs: a ping pong table, meeting rooms named after James Bond movies, a robot, and of course… plants. Lots of them. However, our green friends are high-maintenance, and it is surprisingly hard to gauge their health just by looking at them (RIP to the plants that didn’t survive my student days). So I do what every sensible dev in 2026 would do: open Lovable. I start from our ML6 branding template, and after a couple of back-and-forths, I end up with Plant Pal: a tiny app that lets you upload a photo of a plant and generates an AI-powered health check. You can view plant history but that’s it.

    [Figure: Plant Pal]

    Let’s pause here for a second.
In 15 minutes we went from vague ideas to a shareable prototype that you can click through and use. It’s the kind of post-transformer bliss that would be hard to explain to any developer before 2022.

However, much of the impact happens outside the engineering team. At ML6, Lovable gives our project managers superpowers and allows them to validate ideas with a client fast. Instead of spending days on static mockups, we can now co-create live with our clients. And since the cost of prototyping is near zero, there is little sunk cost when things don’t work out.

“This is what neural networks were made for“ — Strelitzia Nicolai

Anatomy of a Lovable App

Now that we have a functional app, let’s take off the rose-tinted glasses and look at what Lovable actually created for us.

The Shared DNA

Every Lovable app is built on the same foundations. While each app may have different features, the underlying skeleton is very rigid. This is actually a feature, not a bug. This standardization is what makes Lovable so effective.

The frontend consists of standard React running on Vite and TypeScript. UI components are built with shadcn/ui, a collection of customizable components that are copied into the codebase for easy manipulation by the Lovable builder.

For the backend, Lovable uses Supabase, a Backend-as-a-Service platform built for speed. Supabase provides four core primitives that Lovable relies on heavily: Storage, Database, Edge Functions and Auth. It also supports real-time subscriptions via WebSockets, enabling live updates without polling.

[Figure: Four Supabase primitives: Storage, Database, Edge Functions and Auth]

For AI, Lovable exposes LLM functionality through a centralized AI Gateway, making it straightforward to add AI features to your apps. Understanding the role of these primitives will be crucial for Part 2, where each component will be mapped onto its cloud alternative for Azure, AWS and GCP.
Project Structure

At a high level, every Lovable app follows the same structure. The key insight is the split in execution context:

  • Everything in src/ is bundled and executed in the user’s browser.
  • Everything in supabase/functions runs on Supabase’s servers.

The frontend is shipped as a static SPA served via CDN hosting, and all Supabase primitives are accessed over public HTTPS endpoints.

Let’s see how these components show up in the Plant Pal codebase. The frontend communicates directly with the Supabase primitives through a single shared client, defined in integrations/supabase. The service layer, plantService.ts, wraps this client and exposes clean functions to the frontend. In our case, each function maps cleanly to exactly one Supabase primitive.

Now let’s talk about the architectural pattern at play, because it differs from typical enterprise patterns.

2-tier vs 3-tier architecture

In a classic 3-tier architecture, requests flow like this:

Browser → Application server (API) → Database

The application server (Node.js, Python, ...) acts as a gatekeeper: it validates inputs, implements business logic, and strictly controls which database operations are allowed. This separation of concerns makes components easier to secure and scale independently, but it comes at the cost of more code, more infrastructure, and more decisions to maintain.

In a classic 2-tier model, the browser talks directly to the database with no custom application server in between:

Browser → Database

This is simpler to build, but it pushes responsibility for authorization and data access control into the database itself.

So where does Lovable + Supabase land? Well, somewhere in between. Technically, there is a middle layer: Supabase exposes its database via PostgREST (a web server that turns PostgreSQL into a REST API), together with endpoints for auth, storage, and functions. However, this layer is very thin and completely Supabase-managed: you don’t write or control the code.
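The service-layer pattern described above can be sketched in a few lines. The shape below is a simplified stand-in, not Lovable's generated code: `createPlantService`, the `SupabaseLike` interface, and the `plant-photos`/`plants` names are illustrative assumptions (only `plantService.ts` and the `analyze-plant` function are named in the article), and the client interface mimics only the slice of the supabase-js surface this service would touch.

```typescript
// Sketch of a Lovable-style service layer: one exported function per
// Supabase primitive, all going through a single shared client.
// SupabaseLike is a minimal structural stand-in for the real client.

interface PlantRow {
  id: string;
  image_path: string;
  analysis: string | null;
}

interface SupabaseLike {
  storage: {
    from(bucket: string): {
      upload(path: string, file: Uint8Array): Promise<{ error: Error | null }>;
    };
  };
  from(table: string): {
    select(): Promise<{ data: PlantRow[]; error: Error | null }>;
  };
  functions: {
    invoke(name: string, args: { body: unknown }): Promise<{ data: unknown; error: Error | null }>;
  };
}

function createPlantService(supabase: SupabaseLike) {
  return {
    // Storage primitive: upload the raw photo (bucket name assumed).
    uploadPhoto: (path: string, file: Uint8Array) =>
      supabase.storage.from("plant-photos").upload(path, file),
    // Edge Function primitive: server-side AI analysis.
    analyzePlant: (imagePath: string) =>
      supabase.functions.invoke("analyze-plant", { body: { imagePath } }),
    // Database primitive: read plant history (RLS scopes the rows).
    getHistory: () => supabase.from("plants").select(),
  };
}
```

The point of the wrapper is that the frontend never touches the client directly: each UI flow calls one clean function, and each function maps to exactly one primitive, which is what keeps the generated code easy for the LLM builder to extend.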
So where does the business logic live? For simple CRUD operations, the frontend handles the UI state and orchestrates the user flows. Here, the frontend communicates directly with the database and storage. Authorization is enforced in the database layer via RLS policies (more on those in a minute). But for anything more complex, like calling external APIs or handling secrets, Lovable will provision Edge Functions that act as mini backends.

This simplicity is a big advantage for the Lovable AI builder. You get the simplicity of direct database access for basic operations, and Edge Functions for server-side logic when you need it. The result is fewer moving parts and fewer architectural choices. This makes it much easier to generate and evolve code.

Architecture Overview

The diagram below shows the complete architecture of Plant Pal. It is organized around a useful abstraction: security zones. Notice the two colored regions: blue (public client context) and red (secure server context). This distinction is the key to understanding how Lovable apps handle security.

[Figure: Architecture of Plant Pal. Client-side code runs in the user’s browser and communicates with services using publishable keys (blue), protected by RLS policies. Sensitive operations and secrets are isolated in Supabase Edge Functions (red).]

Blue Zone

The blue zone represents everything the client (browser) can access directly using Supabase’s publishable key. This key is bundled into the JavaScript at build time, which means it is public and inspectable by anyone using your application.

I know what you’re thinking: Keys? Publicly available?? This is not a bug. The publishable key is designed to be public. Its access is constrained by Row Level Security (RLS) and storage policies. RLS rules control which rows a user can read or write, e.g. users can only see rows where user_id matches their own. Storage policies work similarly, controlling who can upload or download files.
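To build intuition for what an RLS policy does, and for what goes wrong when one is misconfigured, here is a toy simulation in plain TypeScript. This is a conceptual model, not Supabase's engine: `ownRowsOnly` stands in for a Postgres policy of the form `user_id = auth.uid()`, and `allowAll` for a policy that always evaluates true.

```typescript
// Toy model of Row Level Security: a policy is a predicate evaluated per
// row against the authenticated user's id, and a select only returns the
// rows the policy admits.

interface Row { id: number; user_id: string; note: string }

type Policy = (row: Row, uid: string) => boolean;

// Correct policy: mirrors `user_id = auth.uid()` in Postgres.
const ownRowsOnly: Policy = (row, uid) => row.user_id === uid;

// Misconfigured policy: always true, so every row is visible to any
// holder of the publishable key -- the data-leak case warned about above.
const allowAll: Policy = () => true;

function select(table: Row[], policy: Policy, uid: string): Row[] {
  return table.filter((row) => policy(row, uid));
}

const table: Row[] = [
  { id: 1, user_id: "alice", note: "my monstera" },
  { id: 2, user_id: "bob", note: "bob's fern" },
];

// Under the correct policy alice sees only her own row; under the
// permissive policy, bob's row leaks to her as well.
const aliceRows = select(table, ownRowsOnly, "alice"); // 1 row
const leakedRows = select(table, allowAll, "alice");   // 2 rows
```

The asymmetry is the whole story of the blue zone: the key is public by design, so the policy predicate is the only thing standing between a user and someone else's rows.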
Note that these policies are often generated by Lovable itself. So be aware you’re trusting an LLM to write your security rules. Get any of these wrong, and you risk exposing your data. Misconfigured RLS policies are a common source of data leaks and even have their own CVE vulnerability class. Lovable tries to address this with their security review feature, but the responsibility still falls on the developer.

A small note on Auth: when authentication is enabled, users log in via Supabase Auth and receive a JWT containing their user_id. This token is automatically attached to every request. Supabase verifies it, and RLS policies reference it via auth.uid().

Red Zone

In contrast to the blue zone, the red zone is your safe haven. Everything in the red zone runs on Supabase Edge Functions. Users can invoke them, but they cannot see the code inside them. This is where your secret keys live, like the Lovable AI API key for interacting with AI models, and the Supabase secret key, which bypasses RLS policies entirely.

Runtime Flow

Now that we understand the security boundaries, we can trace how data moves through the system:

1. The browser uploads an image directly to Storage using the publishable key. The storage policy allows this.
2. The browser invokes the analyze-plant Edge Function.
3. The Edge Function calls the Lovable AI Gateway using the LOVABLE_API_KEY. This secret never leaves the server.
4. The result is written to the Database using the secret key.
5. To display the results to the user, the browser reads the plant history directly from the Database.

The Proof: Finding the Publishable Key

If you don’t believe all of the above, you don’t have to take my word for it. Let’s prove it. Open the DevTools tab on Plant Pal and inspect what happens when we navigate to the history page. We see a call to a Supabase REST endpoint.

[Figure: Open the DevTools Network tab to find your publishable key]

Now inspect the request headers. As suspected, the publishable key is there in plain sight!
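The server-side half of that runtime flow (steps 2 to 4) can be sketched as a handler in the red zone. This is an illustrative skeleton, not Plant Pal's actual Edge Function: the `handleAnalyzePlant` name and `Deps` shape are assumptions, and the gateway and database calls are injected so the control flow is visible. Only LOVABLE_API_KEY and the analyze-plant function name come from the article.

```typescript
// Red-zone sketch: the handler receives the secret from the server
// environment, uses it to call the AI gateway, writes the result with the
// (RLS-bypassing) secret key, and returns only the analysis to the caller.

interface Deps {
  // Calls the AI gateway; the API key is only ever passed server-side.
  analyze: (imagePath: string, apiKey: string) => Promise<string>;
  // Persists the result (in the real flow, via the Supabase secret key).
  insertResult: (imagePath: string, analysis: string) => Promise<void>;
}

async function handleAnalyzePlant(
  imagePath: string,
  secrets: { LOVABLE_API_KEY: string },
  deps: Deps
): Promise<{ ok: boolean; analysis?: string }> {
  // Validate input before spending tokens on the gateway call.
  if (!imagePath) return { ok: false };
  const analysis = await deps.analyze(imagePath, secrets.LOVABLE_API_KEY);
  await deps.insertResult(imagePath, analysis);
  // Only the result crosses back into the blue zone; the key never does.
  return { ok: true, analysis };
}
```

The design point is the boundary: browsers can invoke this function and read its response, but the secret lives only in the server environment, which is exactly what makes the red zone safe while the publishable key sits in plain sight.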
Although I configured the RLS policies to only allow reads, let me quickly delete the project before this gets published 😉

Up to this point, we mostly looked at how Lovable works. In the next section we will talk about where Lovable fits in an enterprise context.

Enterprise Constraints

Before diving in, I need to make one important clarification: Lovable is not positioning itself as a full-fledged enterprise application platform (at least not yet). So none of the points below make Lovable “bad”. Instead, they are natural consequences of an architecture optimized for speed and LLM-based iteration. Enterprise software, on the other hand, is optimized for control over change and risk. This means that controlled deployments, testing, stability, compliance, and long-term maintainability become much more important than for a prototype. This section explores where those two goals diverge, and as a result, where Lovable fits well, and where it doesn’t.

Code Quality and Maintainability

While writing this post, I happened to be reading A Philosophy of Software Design by John Ousterhout, and the parallels were hard to ignore. He describes how complexity rarely comes from a single bad decision, but from the accumulation of many small, reasonable shortcuts taken to move fast. Each change works in isolation, but over time the system becomes harder to understand and modify.

A similar dynamic can be observed when using Lovable. Lovable evolves applications through a sequence of incremental prompts. Each change is optimized for the immediate request, which introduces a risk of gradually losing sight of the application’s overall structure and intent. Combine that with the limited context window of current LLMs (context rot), and the fact that users typically don’t inspect or refactor the underlying codebase, and new changes often end up adding another layer on top of increasingly shaky foundations.
Over time, this often leads to a patchwork codebase where teams eventually hit a complexity ceiling. From that point on, new changes require disproportionate effort and often introduce side effects. Lovable excels at implementing functional requirements, the visible capabilities of an app, e.g. upload a document or display results. However, enterprise software is defined just as much by its non-functional requirements: maintainability, reliability, security, scalability and observability.

Network Isolation

Many enterprises require applications to operate entirely within private networks (VPC/VNet), with strict ingress/egress controls (see NIST SP 800-53 Rev. 5, SC-7: Boundary Protection). By default, all Supabase services are publicly addressable over HTTPS. While Row Level Security controls who can access data, it does not control where that data can be accessed from. For organizations that require network-level isolation, as defined by common enterprise security frameworks, this is often a hard blocker.

CI/CD and Environments

Enterprise software follows a “build once, deploy many” model, with clearly separated dev, acc, and prod environments. Lovable has no native concept of environments. Each prompt results in a commit that is immediately reflected in the running application. Lovable does offer GitHub sync, so you could build a proper pipeline around the exported code. However, this is not the default workflow, and teams can quickly end up managing a hybrid state between Lovable-driven development and local development.

Observability and Cost Transparency

Supabase provides logs and basic database metrics through its dashboard, but there is no unified view across the entire stack. This limited visibility also makes it harder to track costs, both the cost of development (a credit-based system) and the cost of running the app.

The constraints discussed above are not exhaustive.
Other enterprise constraints, like vendor lock-in, long-term ownership, and regulatory compliance, also influence whether Lovable is an appropriate fit for your case.

Conclusion

In this blog post, we dissected the anatomy of a Lovable app: a standard skeleton using React on the frontend and Supabase primitives on the backend. We explored how the 2-tier-like architecture enables rapid development, but pushes security responsibilities to RLS policies that must be carefully reviewed. We also looked at some enterprise constraints like code quality degradation, lack of network isolation and risk of vendor lock-in.

In Part 2, we will take Plant Pal and migrate it to Azure Cloud (with guidance for AWS and GCP as well) and share tips on setting up your Lovable project from Day 1 to make future migrations easier.

“GPU well spent“ — Monstera deliciosa

  • Agent Builders Guide 2026: Managed vs Custom AI Agent Solutions
    Blog

    Agent Builders Guide 2026: Managed vs Custom AI Agent Solutions

    Executive Summary Managed Agent Solutions (MAS) are cloud-managed platforms for building, deploying, and operating AI agents. They remove the need to manually set up infrastructure for orchestration, memory, tooling, tracing, guardrails, and evaluation, allowing teams to reach production faster. MAS have matured from early, UI-focused experiments into platforms that support production workloads across many GenAI use cases. They are not a universal solution. Custom agent solutions are still required when teams need advanced observability, custom evaluation, strict cost control, portability, or complex orchestration.

  • Balancing Speed and Quality in AI-Native Engineering
    Blog

    Balancing Speed and Quality in AI-Native Engineering

    Executive Summary AI-native engineering promises faster delivery, but speed alone does not guarantee quality. As AI-generated code increases developer velocity, many teams experience new bottlenecks around code reviews, system ownership, and shared understanding. Senior engineers are increasingly burdened with review overload, while junior engineers risk shipping code they cannot fully explain. In enterprise environments, balancing speed and quality requires deliberate intent, visible context, and clear ownership. Teams that succeed treat AI as an accelerator within a disciplined engineering process—not as a replacement for decision-making. This article explores the hidden costs of unchecked velocity in AI-native engineering and outlines practical principles for scaling safely without sacrificing code quality or long-term maintainability.


News
& Press.

 

We don’t like to brag — so we let the headlines speak for themselves. Browse through our latest press releases and what others are writing about ML6.

  • How AI is making everyone a programmer: From weeks of work to just days
    News
    In the press
    Data & IT

    How AI is making everyone a programmer: From weeks of work to just days

    AI is redefining software development — faster than many expected. In the article by De Morgen, our Senior Engineer Niels Rogge shares his perspective on the rapid rise of AI-powered software development and the explosion of so-called “vibe coding” — building applications largely generated by AI. Thanks to advances in AI coding agents, tasks that once took weeks can now be completed in days. But speed isn’t everything. As Niels puts it:

  • AI in protein folding: a virtual playground
    News
    In the press

    AI in protein folding: a virtual playground

    Protein folding — the way amino acid chains fold into 3D structures — has long been one of the hardest problems in biosciences. With breakthroughs like DeepMind’s AlphaFold, recently awarded the Nobel Prize in Chemistry, predicting these structures is now faster and more accurate than ever.