Artificial intelligence has become omnipresent in recent years. In reality, though, only a small percentage of models makes it to production and stays there. In a series of blog posts on MLOps, we explain why and how companies can adopt MLOps practices to unlock the business value of AI.
You present an exciting new proof of concept, backed by a brand-new, state-of-the-art, in-house developed ML model, to your management. They are beyond excited! Instead of that expensive team that talks in weird mathematics, you are the rockstars of the company. They start talking about putting the POC in production as soon as possible. Management talk for: “It should have been integrated yesterday”. On top of that, you need to expand your team and develop even more proofs of concept for other business domains.
Be careful… if you say “Yes” and don’t yet have an MLOps strategy backed by the right team and budget, you might quickly have a tough row to hoe.
Let’s look at a few pitfalls we’ve encountered in our MLOps advisory services and ML-in-production projects when AI is scaled without the right team, budget and MLOps.
First of all, make sure you work with a multidisciplinary team. Involve all the stakeholders who will be affected by AI, including technical stakeholders such as security, IT architecture, data engineering and integration/application teams. Their input will help you avoid data and integration issues, and it will earn your work enough priority in the change request planning.
Additionally, you need to involve the actual users, customers and the teams responsible for the business process and compliance. Users trust AI, and often become its best ambassadors, if they are genuinely involved and understand that it’s not a robot or black box that will replace all their work.
Some of our customers work with a flexible multidisciplinary team for every proof of concept by default. Others start involving more stakeholders as soon as the proof of concept seems feasible from an ML perspective.
From a legal perspective, AI must be implemented in a safe, consistent and ethical way.
If you are having trouble securing budget, point out to your manager that MLOps will help ensure legal compliance. MLOps frameworks and existing platform monitoring solutions include the tools to do this at scale.
Legal requirements are also easier to get approved than technical requirements, which are harder for the business to understand. A good example is GDPR: you need to respond to any data request from a user within a number of days, including explaining why an ML model returned a specific result.
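Answering such a request is only possible if every prediction is traceable. The sketch below is a hypothetical illustration (the record structure, field names and `predict_and_log` helper are our own, not from any specific framework) of keeping an audit trail so you can later explain which model version made which decision for a given user:

```python
# Hypothetical audit trail for predictions, so a GDPR data request can be
# answered with what the model saw and decided. Illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PredictionRecord:
    user_id: str
    model_version: str
    features: dict
    prediction: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

audit_log: list[PredictionRecord] = []

def predict_and_log(user_id, features, model_version="v1.2"):
    # Stand-in for a real model call.
    prediction = "approved" if features.get("score", 0) > 0.5 else "rejected"
    audit_log.append(
        PredictionRecord(user_id, model_version, features, prediction)
    )
    return prediction

predict_and_log("user-42", {"score": 0.8})

# Answering a data request: find every decision made about this user.
records = [r for r in audit_log if r.user_id == "user-42"]
print(records[0].prediction)  # approved
```

In a real setup this log would live in a database or model-serving layer rather than an in-memory list, but the principle is the same: no logged prediction, no explanation.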
On top of reducing risk, you can also explain that introducing MLOps principles will increase your team’s efficiency. MLOps tools offer plenty of functionality to split an ML pipeline into reusable components that can be shared across the data science team.
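As a minimal sketch of what “reusable components” means in practice, here is the idea expressed with scikit-learn’s `Pipeline` (one common way to do it; dedicated MLOps frameworks offer richer equivalents). Each named step can be versioned and shared independently:

```python
# Splitting an ML workflow into named, swappable components.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Each step is a reusable component; teams can swap or share them.
preprocessing = ("scale", StandardScaler())
model = ("clf", LogisticRegression())

pipeline = Pipeline([preprocessing, model])

# Toy data, purely for illustration.
X = [[0.0, 1.0], [1.0, 0.0], [0.0, 0.0], [1.0, 1.0]]
y = [0, 1, 0, 1]
pipeline.fit(X, y)
print(pipeline.predict([[1.0, 0.5]]))
```

Because every project composes the same building blocks, a fix to the shared preprocessing step benefits every pipeline that uses it.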
Working with known components and adding the right amount of validations into the pipeline before it’s released to production also contributes to psychological safety. People won’t be afraid to experiment and release code into the wild (aka production). Check “Resilience Engineering and DevOps – A Deeper Dive” for more information about this critical benefit.
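The “right amount of validations” can be as simple as an automated gate that compares a candidate model’s metrics against agreed thresholds before promotion. The function below is an assumed sketch (names and thresholds are illustrative, not from any particular tool):

```python
# A minimal validation gate that runs before a model is promoted
# to production. Purely illustrative.
def validate_model(metrics: dict, thresholds: dict) -> list:
    """Return a list of failed checks; an empty list means the model may ship."""
    failures = []
    for name, minimum in thresholds.items():
        value = metrics.get(name)
        if value is None or value < minimum:
            failures.append(f"{name}: {value} < required {minimum}")
    return failures

failures = validate_model(
    metrics={"accuracy": 0.93, "recall": 0.71},
    thresholds={"accuracy": 0.90, "recall": 0.80},
)
print(failures)  # the recall check fails
```

Knowing a gate like this will catch regressions is exactly what lets people experiment freely.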
We hope we’ve provided you with some conversation starters to “sell” MLOps in your organisation. Do remember that the goal of AI, as of any other IT project, is generating value for your organisation, so be careful not to over-engineer MLOps and lose track of this goal.
This post is part of a series of blog posts on the topic of MLOps. In this series, we explain why and how companies can adopt MLOps practices to unlock the business value of AI. Find the other content here.