Blog

From Tools to Foundations: Building the AI Operating Layer for the Modern Enterprise

Molly Batrouney, Squad Lead
Published 24 Mar 2026 · Updated 24 Mar 2026
Reading time: 9 min

Executive Summary

Organizations typically begin their AI journey with SaaS-based AI solutions. While effective for experimentation and early ROI validation, stacking multiple off-the-shelf AI tools often leads to fragmented data, governance complexity, and integration overhead.

Custom AI solutions built on foundation models enable centralized intelligence, scalable multi-agent systems, and continuous model improvement through Data Flywheels. The shift from SaaS to infrastructure is not about abandoning experimentation — it is about architecting AI as a strategic operating layer.

SaaS is how companies start with AI. Custom AI infrastructure is how they scale it. For most enterprises, a hybrid of SaaS products and custom builds is the best approach.

To SaaS or not to SaaS: When is a custom AI solution better?

Most AI roadmaps look like someone opened the SaaS marketplace and clicked “Add to Cart” eight times:

  • Chatbot
  • Voice Agent
  • Forecasting Tool
  • CRM Copilot
  • Internal GPT

Each one promising intelligence.

Together? A fragmented stack that looks impressive on paper — but struggles to operate as one system.

From AI Experimentation to AI Fragmentation

A customer approached us a couple of months ago wanting to explore how they could and should be using AI. Their business strategy was ambitious: they asked how a multi-agent system could automate their Sales-to-Cash workflow. Their technology strategy, however, looked like it had been written by ChatGPT, listing eight different AI products that all needed to work together.

Most companies start their AI journey with SaaS products - and that’s a good thing. They’re fast to deploy, great for experimentation, and low commitment. You can validate use cases quickly and prove early ROI without heavy infrastructure investment.

But SaaS is a starting point. It is not an AI operating strategy.

According to McKinsey’s 2025 State of AI survey, 88% of organizations now report regular use of AI in at least one business function, yet only about one-third have begun scaling AI programs across the enterprise. Even more telling, just 39% report any measurable enterprise-level EBIT impact from AI — typically below 5% of total EBIT.

Adoption is widespread. Scaled impact remains limited.

At some point, experimentation turns into accumulation. Tools multiply. Workflows overlap. Data fragments. And what began as agility quietly becomes complexity.

Limitations of AI SaaS

As you scale, these tools start to get in each other’s way. Each one works well in isolation, but together they create intelligence silos. Your data is fragmented, your team is context-switching between dashboards, and decisions become slower - not faster.

Figure: Fragmented AI Tooling Across the Enterprise - business units connected to AI-enabled tools such as Voiceflow, Zapier, and Salesforce Einstein, with limited integration between systems and shared data sources.

Many off-the-shelf AI solutions are designed to solve a narrow use case: customer support automation, sentiment analysis, content creation, or workflow automation. While powerful in isolation, they rarely share a unified AI platform or consistent data governance framework.

This creates friction across data sources, proprietary data environments, API connections, and database schemas. Instead of enabling systems thinking, organizations end up stitching together external tools through fragile integrations.

Over time, this limits how generative AI, large language models, and AI agents can reason across customer behavior patterns, inventory management systems, personalized marketing workflows, or human resources processes.

A common pattern: a customer uses an AI product for a couple of months, sees the benefits, and then hits limitations when extending the solution to new use cases. For example, a company builds a voice agent to talk to customers and sees a clear ROI, so it wants to extend the agent to other problems such as answering personalised invoicing questions - but the current product setup cannot support those use cases, forcing a pivot. This is where we help unlock the next set of use cases, adjusting the AI solution to fit broader business needs while keeping the approach focused and efficient.

This is where custom AI solutions unlock scale. Instead of layering additional SaaS subscriptions, organizations can build a centralized AI foundation that acts as a unified intelligence layer across the business. By leveraging large language models, Retrieval Augmented Generation (RAG), natural language processing, and even computer vision capabilities, enterprises create an extensible AI platform that adapts as new use cases emerge.
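As a rough illustration of what a unified intelligence layer means in code, the sketch below indexes documents from several business systems into one retrieval layer that any agent can query - the core idea behind RAG grounding. This is a hypothetical, minimal example: the scoring is naive keyword overlap, where a real system would use vector embeddings and a vector database; all names and data are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str   # e.g. "crm", "erp", "support"
    text: str

class UnifiedIndex:
    """One retrieval layer spanning every business system."""

    def __init__(self):
        self.docs: list[Document] = []

    def add(self, source: str, text: str) -> None:
        self.docs.append(Document(source, text))

    def retrieve(self, query: str, k: int = 2) -> list[Document]:
        # Naive keyword-overlap scoring; stands in for vector similarity.
        q = set(query.lower().split())
        scored = sorted(
            self.docs,
            key=lambda d: len(q & set(d.text.lower().split())),
            reverse=True,
        )
        return scored[:k]

index = UnifiedIndex()
index.add("crm", "Customer Acme renewed contract in March")
index.add("erp", "Invoice 1042 for Acme is overdue by 12 days")
index.add("support", "Acme reported a billing portal login issue")

hits = index.retrieve("Why is the Acme invoice overdue?")
print([d.source for d in hits])
```

The point is architectural, not algorithmic: because CRM, ERP, and support data land in one index, a single question can be grounded across all of them instead of inside one tool's silo.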

How Custom AI Infrastructure Works in Practice

Building a custom solution means you can centralise all your intelligence, your agents can talk to each other seamlessly, and the system is easy to adapt as new AI techniques emerge or your business changes. It can be built on top of your existing cloud stack, leveraging whatever relationships you already have with cloud providers. Foundation models are constantly improving, so your system improves with them - and the model can be fine-tuned to any specific use case you need.

Custom AI solutions are typically built on top of AI foundation models from providers such as OpenAI, Microsoft Azure, AWS, Google Cloud, or NVIDIA AI Enterprise. By leveraging multimodal capabilities and general-purpose AI models, enterprises can fine-tune custom models tailored to their specific workflows, proprietary data, and operational requirements.

Unlike off-the-shelf solutions, this approach allows organizations to integrate structured and unstructured data sources - from CRM systems and ERP platforms to web data, internal documentation, and customer interaction logs - into a unified intelligence layer. This ensures that AI agents operate with full business context rather than isolated datasets.

Modern AI platforms enable multi-agent systems using frameworks such as ReAct, A2A protocol architectures, or Agent Development Kits. These frameworks allow agents to reason, plan, retrieve information through Retrieval Augmented Generation (RAG), and coordinate actions across systems via secure API connections. Instead of stitching together disconnected tools, enterprises create an extensible AI infrastructure that evolves as new use cases emerge.
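The reasoning loop those frameworks share can be sketched in a few lines. Below is a hypothetical, minimal ReAct-style loop: the agent alternates between a reasoning step, a tool call, and reading the observation until it can answer. The `plan` function stands in for an LLM call, and the tools are toy lookups - all names and data are illustrative assumptions, not any specific framework's API.

```python
def lookup_invoice(customer: str) -> str:
    invoices = {"Acme": "Invoice 1042, 12 days overdue"}
    return invoices.get(customer, "no invoice found")

def lookup_contract(customer: str) -> str:
    contracts = {"Acme": "Contract renewed March 2026"}
    return contracts.get(customer, "no contract found")

TOOLS = {"invoice": lookup_invoice, "contract": lookup_contract}

def plan(question: str, observations: list[str]) -> tuple[str, str]:
    # Stand-in for the LLM reasoning step: pick the next tool or finish.
    if "invoice" in question and not observations:
        return ("invoice", "Acme")
    return ("finish", "; ".join(observations))

def react_agent(question: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        action, arg = plan(question, observations)
        if action == "finish":
            return arg
        observations.append(TOOLS[action](arg))  # observation from the tool
    return "; ".join(observations)

answer = react_agent("What is the status of the Acme invoice?")
print(answer)  # → Invoice 1042, 12 days overdue
```

In a production system the same loop would run against real LLM calls and secured API connectors, but the shape - reason, act, observe, repeat - is what lets agents coordinate across systems instead of answering from one dataset.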

This architecture also enables Data Flywheels - where continuous feedback loops improve model development over time. As more data-driven decisions are made, models refine themselves, workflows become more intelligent, and the overall AI system compounds in value rather than stagnating. Over time, AI shifts from being a feature embedded in tools to becoming a core operating capability across the organization.
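A Data Flywheel is ultimately a logging-and-curation loop. The hypothetical sketch below records every interaction with user feedback and exports highly rated examples as a JSONL fine-tuning set, so the model learns from its own usage. The class, thresholds, and data are illustrative; the JSONL `messages` shape mirrors the format commonly used for chat fine-tuning.

```python
import json

class Flywheel:
    """Collect interaction feedback and curate a fine-tuning dataset."""

    def __init__(self):
        self.log: list[dict] = []

    def record(self, prompt: str, response: str, rating: int) -> None:
        self.log.append({"prompt": prompt, "response": response, "rating": rating})

    def export_finetune_set(self, min_rating: int = 4) -> str:
        # Keep only well-rated examples; emit one JSON object per line.
        rows = [
            {"messages": [
                {"role": "user", "content": e["prompt"]},
                {"role": "assistant", "content": e["response"]},
            ]}
            for e in self.log if e["rating"] >= min_rating
        ]
        return "\n".join(json.dumps(r) for r in rows)

fw = Flywheel()
fw.record("Explain my invoice", "Your March invoice covers ...", rating=5)
fw.record("Explain my invoice", "I do not know", rating=1)
dataset = fw.export_finetune_set()
print(len(dataset.splitlines()))  # → 1 (only the well-rated example survives)
```

Each fine-tuning round then produces a better model, which produces better-rated interactions, which produces better training data - the compounding loop the article describes.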

For example, we have been working with a large energy provider that had several AI use cases to tackle: an internal GPT, an external-facing chatbot for the website, and voice agents to handle invoice and payment discussions with customers. After building the initial infrastructure to support an agentic framework, it is easy to spin up different agents customised to their business needs, benefiting from economies of scale.

Cross-Functional AI Impact

When the underlying AI infrastructure is in place, expansion becomes significantly easier. Instead of launching disconnected tools department by department, organizations can extend the same intelligence layer into new functions with far less friction.

The value is not in deploying another agent. It is in reusing the same foundation.

Once your observability, governance controls, and model orchestration frameworks are centralized, new use cases become modular extensions rather than standalone projects. The same intelligence layer that powers a customer-facing chatbot can be adapted for inventory optimization in retail, personalized marketing analytics, HR automation, or workflow support in application development. Each department benefits from shared context, consistent governance, and continuous model improvement.
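To make "modular extensions rather than standalone projects" concrete, here is a hypothetical sketch of a shared operating layer: each new department agent is a small registration against the same object, which carries the centralised audit log that governance and observability hang off. All names are illustrative.

```python
from typing import Callable

class OperatingLayer:
    """Shared foundation: agents register here; every call is audited."""

    def __init__(self):
        self.agents: dict[str, Callable[[str], str]] = {}
        self.audit: list[str] = []  # centralised observability/governance

    def register(self, name: str, handler: Callable[[str], str]) -> None:
        self.agents[name] = handler

    def ask(self, agent: str, query: str) -> str:
        self.audit.append(f"{agent}: {query}")  # one log for all agents
        return self.agents[agent](query)

layer = OperatingLayer()
# Adding a department is one registration, not a new project.
layer.register("support", lambda q: f"support answer to: {q}")
layer.register("hr", lambda q: f"hr answer to: {q}")

print(layer.ask("hr", "leave policy"))
print(len(layer.audit))  # → 1
```

The handlers here are trivial lambdas; in practice each would wrap a fine-tuned model or agent pipeline, but monitoring, approvals, and compliance attach once, at the layer, not per tool.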

This is where ownership matters. When intelligence is built on your infrastructure and connected to your proprietary data, you are not dependent on the constraints of a single product roadmap. You can adapt capabilities to each department’s environment while maintaining a unified architectural core.

This is the transition point where AI moves from isolated automation to enterprise-wide intelligence - not because more tools are added, but because the underlying system was designed to extend.

Figure: AI Operating Layer with an Agentic Workforce - enterprise architecture with agentic workforce, generative AI, and machine learning connecting business units to internal data and knowledge systems.

But What About a Hybrid Approach?

If SaaS is the starting point and custom infrastructure unlocks scale, then where does that leave most enterprises? In reality, it leaves them somewhere in between.

Platform-native AI capabilities remain valuable for workflows that are deeply embedded within a single system. If a use case lives entirely inside CRM, ERP, or ITSM, leveraging the native agent may be the most efficient choice. There is no need to rebuild what is already optimized.

However, the moment intelligence needs to span systems, coordinate across departments, or solve a problem specific to your business, relying solely on SaaS becomes limiting. That is where centralized AI infrastructure becomes critical.

Hybrid, therefore, is not about mixing tools randomly. It is about defining a clear boundary: use SaaS for contained efficiency, and use custom infrastructure for cross-functional intelligence. The architecture - not the vendor - determines where each belongs.
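That boundary can even be written down. The sketch below encodes the rule just described - keep a use case on the platform-native agent when it lives inside one system, route it to the custom layer when it spans systems or needs bespoke logic. The fields and examples are illustrative assumptions, not a formal model.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    systems: list[str]    # business systems the workflow touches
    bespoke_logic: bool   # behaviour the SaaS product cannot express

def placement(uc: UseCase) -> str:
    # Contained in one system with standard behaviour → native SaaS agent.
    if len(uc.systems) == 1 and not uc.bespoke_logic:
        return "saas"
    # Cross-system or business-specific → custom infrastructure.
    return "custom"

print(placement(UseCase("ticket triage", ["itsm"], False)))          # → saas
print(placement(UseCase("sales-to-cash", ["crm", "erp", "billing"], True)))  # → custom
```

Real decisions weigh more factors (cost, data residency, vendor roadmap), but making the rule explicit keeps the hybrid estate from drifting into random tool accumulation.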

Buy vs Build: A Practical Decision Framework

The decision to buy AI SaaS or build a custom AI solution for your use case is not purely technical. It is strategic. It shapes cost structure, governance, scalability, and competitive differentiation.

Below is a simplified framework to guide that decision.

Technologies
  • AI SaaS: Agentforce, Cognigy, Kore.AI, etc.
  • Custom AI solution leveraging foundation models: OpenAI, Google Cloud, AWS, Azure, etc.

When it makes sense
  • AI SaaS: your problems are well defined, common, and isolated.
  • Custom: you need scalability or cost efficiency at scale, you want a competitive differentiator, or you want to be able to adapt quickly.

Pros
  • AI SaaS: usually configurable without any code; often already part of your current products; plug and play.
  • Custom: centralised intelligence - easy for your agents and data to talk to each other; easy to adapt as AI or your needs change; tailored to your enterprise; a competitive edge, built to your operations.

Cons
  • AI SaaS: more expensive long term due to high ongoing subscription costs; less customizability and flexibility.
  • Custom: more expensive short term due to high implementation costs.

 

Conclusion: Building the AI-Enabled Operating Layer

For the modern enterprise, the answer is Hybrid. While SaaS provides the speed to start, custom infrastructure provides the power to scale. To maintain a competitive edge, organizations should consider:

  • Adopting a Hybrid Strategy: Balance platform-native agents (e.g. Agentforce) with custom agents for cross-platform workflows. Ensure interoperability from day one to prevent the next generation of data silos.
  • Prioritizing ROI with a Phased Approach: Focus on pilots with tangible outcomes. Use early wins to build momentum, while anchoring efforts to a long-term roadmap for broader adoption.
  • Investing in Orchestration and Governance: Facilitate seamless cross-agent collaboration through a unified orchestration layer. By centralizing security, monitoring, and extensibility, organizations can standardize agent approvals and compliance, creating a scalable framework that evolves with the business.

About the author

Molly Batrouney

Molly is a Squad Lead at ML6, focused on making AI work beyond the proof of concept. For years, she has been designing and leading process transformations in complex technical environments. Today, she focuses on automating and elevating those very processes using AI - helping organizations move from ambition to execution in practical, measurable ways. With a passion for responsible innovation, Molly brings structure to complexity and momentum to bold ideas.
