AI Engineering

BUILD YOUR BOLD VISION WITH SOLID AI.

For technology leaders like you, the challenge is no longer if AI can create value—it’s how to make it enterprise-ready, scalable, and future-proof. That’s where ML6 comes in. We combine technical excellence with scalable engineering to solve your toughest challenges and build your boldest ideas.


How we bring your
vision to life

At ML6, we don’t stop at prototypes. We engineer AI solutions that work reliably at scale, integrate with your enterprise stack, and continue to deliver value long after deployment.
  • Focus on Business Impact

    Our primary aim is to ensure our AI solutions drive tangible value and are actually used in your core business activities.

  • Deep AI-native expertise

    We combine deep technical expertise with pragmatic delivery to design, deploy, and run secure AI solutions that accelerate your strategy, unlock your data, and deliver business impact.

FIRST, WE 
MAKE A PLAN.


AI Business Advisory: We help you turn your AI vision into impact—pinpointing high-impact use cases and guiding people, governance, and architecture to accelerate execution.


AI Technical Advisory: We help your teams make the right AI, data, and cloud decisions—building robust, future-proof solutions that deliver value from day one.


THEN WE
BUILD.

AI lives or dies by data—so we start there. From there, we design and optimize systems you can rely on for business-critical performance, and we keep improving that performance over time. We ensure your AI integrates seamlessly into existing systems—strengthening rather than disrupting operations. All this with enterprise-grade security, built in from day one.


Source internal and external datasets with a focus on usability and relevance. Where needed, we can create synthetic data.

Label data efficiently with a mix of automation, tooling, and managed services.

Apply anonymization and pseudonymization to ensure compliance and enable safe cloud processing.

Implement rigorous data quality management and augmentation techniques to strengthen training sets.

Set up cloud-native data warehouses on Azure, AWS and Google Cloud Platform that serve as scalable, AI-ready backbones.
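As a small illustration of the pseudonymization step above, direct identifiers can be replaced by keyed hashes so records remain joinable across datasets without exposing the original values. This is a hedged TypeScript sketch; the function name and key handling are illustrative, not part of any specific ML6 tooling, and production-grade pseudonymization also requires proper key management:

```typescript
import { createHmac } from "node:crypto";

// Replace a direct identifier with a stable pseudonym: the same input under
// the same secret key always yields the same token, so joins across datasets
// still work, but the original value cannot be recovered without the key.
function pseudonymize(identifier: string, secretKey: string): string {
  return createHmac("sha256", secretKey).update(identifier).digest("hex");
}
```

Unlike a plain hash, the keyed variant resists dictionary attacks on guessable identifiers such as e-mail addresses, since an attacker without the key cannot recompute the mapping.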


TEST, DEPLOY
& SCALE.

Once an architecture is built, the real challenge begins — getting it into production at scale. We ensure that your AI doesn’t stall at the proof-of-concept stage.

We implement Continuous Integration (CI) and Continuous Delivery/Deployment (CD) pipelines, retraining loops, monitoring, and governance frameworks that make AI production-ready and sustainable.

RUN & MAINTAIN.

AI isn’t a one-off project—it’s a living system that requires monitoring, optimization, and support.

  • We provide secure, cloud-native hosting and continuously track model performance, retraining as needed to maintain accuracy and reliability.
  • Whether your strategy is single-cloud, multi-cloud, or hybrid, we deliver managed services on AWS, Azure, and Google Cloud Platform, always aligned with enterprise standards.
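The monitoring-and-retraining loop above can be illustrated with a deliberately simplified sketch in TypeScript. Real deployments use proper drift metrics and alerting; the function name, window size, and threshold here are hypothetical:

```typescript
// Simplified retraining trigger: keep a rolling window of prediction outcomes
// (true = prediction judged correct) and flag the model when windowed accuracy
// falls below a threshold. Illustrative only; not a production drift detector.
function needsRetraining(
  outcomes: boolean[],
  windowSize: number,
  minAccuracy: number,
): boolean {
  if (outcomes.length < windowSize) return false; // not enough evidence yet
  const window = outcomes.slice(-windowSize);
  const accuracy = window.filter(Boolean).length / window.length;
  return accuracy < minAccuracy;
}
```

A check like this would run on a schedule against logged predictions, with the actual retraining handled by the pipeline it triggers.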

100% custom

Your strategy isn’t
off-the-shelf.
Neither is our AI.

With us, there’s no “that’s not part of the product vision” and no awkward “we don’t integrate with that.” Our AI is built the way your company actually runs, not the way a product manager thinks it should. Every solution is tailor-made to fit your strategy and company like a glove—no compromises, no roadblocks, no missed opportunities. Just the exact intelligence your organisation needs to move faster, smarter, and stronger.

Why ML6?

When the margin for error is small, you need a partner with the right depth, experience, and mindset.

  • Visionary leaders love us

    From BASF and Randstad to Sappi and Walmart, global enterprises rely on ML6 to deliver AI that works—technically and strategically. We have 13 years of AI-native experience and have deployed over 450 complex use cases. Our customers give us an average NPS score of 9/10, and many return for multiple projects.

  • We Lead in AI

    We’re the ones who do, not just talk. You benefit from top-tier AI engineering rooted in deep technical knowledge that is 3–5 years ahead of the market. For example, we launched our first GenAI solution in 2019 and continue to lead in complex deployments. We go beyond the hype to deliver future-proof solutions aligned with your business DNA.

  • Depth & expertise

    Over 90% of our team holds a Master’s degree or a PhD, and they allocate 17% of their time to testing emerging technologies. We’re advanced partners across all major cloud platforms, with direct access to engineers at AWS, Azure, Google Cloud Platform, and OpenAI—bringing the latest tech to your business, responsibly.

  • Value-first, co-creative approach

    We work side-by-side with your team to align business and tech, focusing on the few initiatives that truly move the needle. Our agile team model combines a stable core with just-in-time expertise.

Get
inspired

Stay in the loop with stories from our team, success cases from our clients, and news on what we’re building behind the scenes. Whether it's a deep dive into the latest in AI, a behind-the-scenes look at a real-world implementation, or product updates we’re proud of — here’s what’s been happening lately.

  • The Anatomy of a Lovable App and its boundaries in enterprise software
    Blog


    Executive Summary

    Lovable makes it possible to go from idea to working application in minutes by generating a full frontend and wiring it directly to a Supabase backend. This speed enables rapid prototyping and early validation, but also introduces architectural trade-offs once these applications move beyond experimentation.

    This article examines what Lovable generates under the hood when used end to end. Using a simple example application (“Plant Pal”), it breaks down the structure of a Lovable-built system, how the frontend and backend components fit together, and how execution, security, and authorization are handled in practice. It also outlines the consequences of these design choices in areas that matter for longer-lived systems, such as maintainability, network isolation, environment management, observability, and cost transparency. These are presented as trade-offs rather than shortcomings, reflecting an architecture optimized for speed and LLM-driven iteration.

    This is Part 1 of a two-part technical series. Part 2 builds on this analysis by migrating the same application to cloud-native infrastructure (Azure, AWS, and GCP) and sharing guidance on structuring Lovable projects to support future evolution.

    The Lovable experience

    At ML6, we love our offices. We have everything a trendy scale-up needs: a ping pong table, meeting rooms named after James Bond movies, a robot, and of course… plants. Lots of them. However, our green friends are high-maintenance, and it is surprisingly hard to gauge their health just by looking at them (RIP to the plants that didn’t survive my student days). So I do what every sensible dev in 2026 would do: open Lovable. I start from our ML6 branding template, and after a couple of back-and-forths, I end up with Plant Pal: a tiny app that lets you upload a photo of a plant and generates an AI-powered health check. You can view plant history, but that’s it.

    Let’s pause here for a second.
    In 15 minutes we went from vague ideas to a shareable prototype that you can click through and use. It’s the kind of post-transformer bliss that would be hard to explain to any developer before 2022. However, much of the impact happens outside the engineering team. At ML6, Lovable gives our project managers superpowers and allows them to validate ideas with a client fast. Instead of spending days on static mockups, we can now co-create live with our clients. And since the cost of prototyping is near zero, there is little sunk cost when things don’t work out.

    “This is what neural networks were made for” — Strelitzia Nicolai

    Anatomy of a Lovable App

    Now that we have a functional app, let’s take off the rose-tinted glasses and look at what Lovable actually created for us.

    The Shared DNA

    Every Lovable app is built on the same foundations. While each app may have different features, the underlying skeleton is very rigid. This is a feature, not a bug: this standardization is what makes Lovable so effective.

    The frontend consists of standard React running on Vite and TypeScript. UI components are built with shadcn/ui, a collection of customizable components that are copied into the codebase for easy manipulation by the Lovable builder.

    For the backend, Lovable uses Supabase, a Backend-as-a-Service platform built for speed. Supabase provides four core primitives that Lovable relies on heavily: Storage, Database, Edge Functions, and Auth. It also supports real-time subscriptions via WebSockets, enabling live updates without polling.

    For AI, Lovable exposes LLM functionality through a centralized AI Gateway, making it straightforward to add AI features to your apps.

    Understanding the role of these primitives will be crucial for Part 2, where each component will be mapped onto its cloud alternative for Azure, AWS, and GCP.
    Project Structure

    At a high level, every Lovable app follows the same structure. The key insight is the split in execution context: everything in src/ is bundled and executed in the user’s browser, while everything in supabase/functions runs on Supabase’s servers. The frontend is shipped as a static SPA served via CDN hosting, and all Supabase primitives are accessed over public HTTPS endpoints.

    Let’s see how these components show up in the Plant Pal codebase. The frontend communicates directly with the Supabase primitives through a single shared client, defined in integrations/supabase. The service layer, plantService.ts, wraps this client and exposes clean functions to the frontend. In our case, each function maps cleanly to exactly one Supabase primitive.

    Now let’s talk about the architectural pattern at play, because it differs from typical enterprise patterns.

    2-tier vs 3-tier architecture

    In a classic 3-tier architecture, requests flow like this:

    Browser → Application server (API) → Database

    The application server (Node.js, Python, …) acts as a gatekeeper: it validates inputs, implements business logic, and strictly controls which database operations are allowed. This separation of concerns makes components easier to secure and scale independently, but it comes at the cost of more code, more infrastructure, and more decisions to maintain.

    In a classic 2-tier model, the browser talks directly to the database with no custom application server in between:

    Browser → Database

    This is simpler to build, but it pushes responsibility for authorization and data access control into the database itself.

    So where does Lovable + Supabase land? Somewhere in between. Technically, there is a middle layer: Supabase exposes its database via PostgREST (a web server that turns PostgreSQL into a REST API), together with endpoints for auth, storage, and functions. However, this layer is very thin and completely Supabase-managed: you don’t write or control the code.
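    The service-layer pattern described above can be sketched in TypeScript. This is an illustrative reconstruction, not Lovable’s actual generated code: the real plantService.ts wraps the shared Supabase client, whereas here a minimal in-memory store stands in for it so the sketch is self-contained:

```typescript
// Illustrative sketch of the service-layer pattern: the frontend never touches
// the backend client directly; it calls intention-revealing functions, each of
// which maps to exactly one backend primitive. All names are hypothetical.

interface PlantRecord {
  id: string;
  imagePath: string;
  diagnosis: string;
}

// Minimal interface for the one primitive this service needs (the Database).
// In a real Lovable app this role is played by the shared Supabase client.
interface PlantStore {
  insert(row: PlantRecord): Promise<void>;
  selectAll(): Promise<PlantRecord[]>;
}

// The service layer wraps the store and exposes clean functions to the UI.
function makePlantService(store: PlantStore) {
  return {
    saveDiagnosis: (row: PlantRecord) => store.insert(row),
    getHistory: () => store.selectAll(),
  };
}

// In-memory stand-in so the sketch runs without a live backend.
function makeInMemoryStore(): PlantStore {
  const rows: PlantRecord[] = [];
  return {
    insert: async (row) => { rows.push(row); },
    selectAll: async () => [...rows],
  };
}
```

    Swapping the in-memory store for the real client changes nothing in the UI code, which is what makes this thin wrapper worth having.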
    So where does the business logic live? For simple CRUD operations, the frontend handles the UI state and orchestrates the user flows. Here, the frontend communicates directly with the database and storage, and authorization is enforced in the database layer via RLS policies (more on those in a minute). But for anything more complex, like calling external APIs or handling secrets, Lovable will provision Edge Functions that act as mini backends.

    This simplicity is a big advantage for the Lovable AI builder. You get direct database access for basic operations, and Edge Functions for server-side logic when you need it. The result is fewer moving parts and fewer architectural choices, which makes it much easier to generate and evolve code.

    Architecture Overview

    The diagram below shows the complete architecture of Plant Pal. The diagram is organized around a useful abstraction: security zones. Notice the two colored regions: blue (public client context) and red (secure server context). This distinction is the key to understanding how Lovable apps handle security.

    Architecture of Plant Pal: client-side code runs in the user’s browser and communicates with services using publishable keys (blue), protected by RLS policies. Sensitive operations and secrets are isolated in Supabase Edge Functions (red).

    Blue Zone

    The blue zone represents everything the client (browser) can access directly using Supabase’s publishable key. This key is bundled into the JavaScript at build time, which means it is public and inspectable by anyone using your application. I know what you’re thinking: keys? Publicly available? This is not a bug. The publishable key is designed to be public; its access is constrained by Row Level Security (RLS) and storage policies. RLS rules control which rows a user can read or write, e.g. users can only see rows where user_id matches their own. Storage policies work similarly, controlling who can upload or download files.
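    For illustration, an RLS policy of the kind described above might look like the following Supabase PostgreSQL sketch. The table, column, and policy names are hypothetical, not Plant Pal’s actual schema:

```sql
-- Hypothetical schema: enable RLS so the publishable key alone grants nothing.
alter table public.plants enable row level security;

-- Authenticated users may read only rows whose user_id matches their JWT.
create policy "read own plants"
  on public.plants for select
  using (auth.uid() = user_id);

-- Inserts must carry the caller's own user_id.
create policy "insert own plants"
  on public.plants for insert
  with check (auth.uid() = user_id);
```

    Get either predicate wrong, or forget to enable RLS at all, and the table becomes readable by anyone holding the publishable key.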
    Note that these policies are often generated by Lovable itself, so be aware that you’re trusting an LLM to write your security rules. Get any of these wrong, and you risk exposing your data. Misconfigured RLS policies are a common source of data leaks and even have their own CVE vulnerability class. Lovable tries to address this with its security review feature, but the responsibility still falls on the developer.

    A small note on Auth: when authentication is enabled, users log in via Supabase Auth and receive a JWT containing their user_id. This token is automatically attached to every request; Supabase verifies it, and RLS policies reference it via auth.uid().

    Red Zone

    In contrast to the blue zone, the red zone is your safe haven. Everything in the red zone runs on Supabase Edge Functions. Users can invoke them, but they cannot see the code inside them. This is where your secret keys live, like the Lovable AI API key for interacting with AI models, and the Supabase secret key, which bypasses RLS policies entirely.

    Runtime Flow

    Now that we understand the security boundaries, we can trace how data moves through the system:

    1. The browser uploads an image directly to Storage using the publishable key. The storage policy allows this.
    2. The browser invokes the analyze-plant Edge Function.
    3. The Edge Function calls the Lovable AI Gateway using the LOVABLE_API_KEY. This secret never leaves the server.
    4. The result is written to the Database using the secret key.
    5. To display the results to the user, the browser reads the plant history directly from the Database.

    The Proof: Finding the Publishable Key

    If you don’t believe all of the above, you don’t have to take my word for it. Let’s prove it. Open the DevTools Network tab on Plant Pal and inspect what happens when we navigate to the history page. We see a call to a Supabase REST endpoint. Now inspect the request headers. As suspected, the publishable key is there in plain sight!
    Although I configured the RLS policies to only allow reads, let me quickly delete the project before this gets published 😉

    Up to this point, we mostly looked at how Lovable works. In the next section we will talk about where Lovable fits in an enterprise context.

    Enterprise Constraints

    Before diving in, I need to make one important clarification: Lovable is not positioning itself as a full-fledged enterprise application platform (at least not yet). So none of the points below make Lovable “bad”. Instead, they are natural consequences of an architecture optimized for speed and LLM-based iteration. Enterprise software, on the other hand, is optimized for control over change and risk. This means that controlled deployments, testing, stability, compliance, and long-term maintainability become much more important than for a prototype. This section explores where those two goals diverge, and as a result, where Lovable fits well, and where it doesn’t.

    Code Quality and Maintainability

    While writing this post, I happened to be reading A Philosophy of Software Design by John Ousterhout, and the parallels were hard to ignore. He describes how complexity rarely comes from a single bad decision, but from the accumulation of many small, reasonable shortcuts taken to move fast. Each change works in isolation, but over time the system becomes harder to understand and modify.

    A similar dynamic can be observed when using Lovable. Lovable evolves applications through a sequence of incremental prompts. Each change is optimized for the immediate request, which introduces a risk of gradually losing sight of the application’s overall structure and intent. Combine that with the limited context window of current LLMs (context rot), and the fact that users typically don’t inspect or refactor the underlying codebase, and new changes often end up adding another layer on top of increasingly shaky foundations.
    Over time, this often leads to a patchwork codebase where teams eventually hit a complexity ceiling. From that point on, new changes require disproportionate effort and often introduce side effects. Lovable excels at implementing functional requirements, the visible capabilities of an app, e.g. upload a document or display results. However, enterprise software is defined just as much by its non-functional requirements: maintainability, reliability, security, scalability, and observability.

    Network Isolation

    Many enterprises require applications to operate entirely within private networks (VPC/VNet), with strict ingress/egress controls (see NIST SP 800-53 Rev. 5, SC-7: Boundary Protection). By default, all Supabase services are publicly addressable over HTTPS. While Row Level Security controls who can access data, it does not control where that data can be accessed from. For organizations that require network-level isolation, as defined by common enterprise security frameworks, this is often a hard blocker.

    CI/CD and Environments

    Enterprise software follows a “build once, deploy many” model, with clearly separated dev, acc, and prod environments. Lovable has no native concept of environments. Each prompt results in a commit that is immediately reflected in the running application. Lovable does offer GitHub sync, so you could build a proper pipeline around the exported code. However, this is not the default workflow, and teams can quickly end up managing a hybrid state between Lovable-driven development and local development.

    Observability and Cost Transparency

    Supabase provides logs and basic database metrics through its dashboard, but there is no unified view across the entire stack. This limited visibility also makes it harder to track costs, both the cost of development (credit-based system) and the cost of running the app.

    The constraints discussed above are not exhaustive.
    Other enterprise constraints, like vendor lock-in, long-term ownership, and regulatory compliance, also influence whether Lovable is an appropriate fit for your case.

    Conclusion

    In this blog post, we dissected the anatomy of a Lovable app: a standard skeleton using React on the frontend and Supabase primitives on the backend. We explored how the 2-tier-like architecture enables rapid development, but pushes security responsibilities to RLS policies that must be carefully reviewed. We also looked at some enterprise constraints, like code quality degradation, lack of network isolation, and the risk of vendor lock-in.

    In Part 2, we will take Plant Pal and migrate it to Azure Cloud (with guidance for AWS and GCP as well) and share tips on setting up your Lovable project from Day 1 to make future migrations easier.

    “GPU well spent” — Monstera deliciosa

  • Agent Builders Guide 2026: Managed vs Custom AI Agent Solutions
    Blog


    Executive Summary

    Managed Agent Solutions (MAS) are cloud-managed platforms for building, deploying, and operating AI agents. They remove the need to manually set up infrastructure for orchestration, memory, tooling, tracing, guardrails, and evaluation, allowing teams to reach production faster. MAS have matured from early, UI-focused experiments into platforms that support production workloads across many GenAI use cases. They are not a universal solution: custom agent solutions are still required when teams need advanced observability, custom evaluation, strict cost control, portability, or complex orchestration.

  • Inside the Claude Agents SDK: Lessons from the AI Engineer Summit
    Blog


    Executive Summary

    At the AI Engineer Code Summit in New York City, Anthropic shared key insights into the Claude Agents SDK that reshape how effective AI agents are built in practice. By exposing the same agent harness that powers Claude Code, the SDK highlights a shift away from prompt-centric approaches toward more structured, reliable agent architectures. These learnings reflect a growing challenge many teams are encountering in practice: increasing model capability and code generation speed without losing control, auditability, or reliability. This post distills the core technical takeaways and explains why the infrastructure around the model—the agent harness—is just as critical as the model itself. The full workshop recording from the summit is available on YouTube. In this blog post, we dive into our main learnings.

Jeffrey Hagen, Head of DnA

LET’S BUILD YOUR AI VISION.

At ML6, we’ve partnered with the world’s most ambitious enterprises to build their AI vision together. Let's use AI to turn your business strategy into operational impact today.
