Why Google just released Vertex AI and what it means for you
May 21, 2021

ML in Production

Over the last couple of years, companies have been embracing artificial intelligence more and more in their day-to-day workflows. Over time, the AI landscape has changed dramatically. First we saw scientists trying to create complex networks that perform tasks at a superhuman level (but nobody understood what they were doing). Then came the early adopters, who hired machine learning engineers to build tailor-made models that allowed them to create new and better products. In today's landscape, AI has become an established part of the software industry: it is no longer used only for hard-to-control proofs of concept, but runs in fully fledged production environments that can scale and be maintained using MLOps best practices. If you want to read more about MLOps, have a look at one of our earlier blog posts, where we give an introduction and overview.

Vertex AI Logo

This week at Google I/O, Google announced the general availability of Vertex AI, a new platform that applies MLOps principles and unifies many of the tools GCP already offers. In this blog post we'll dive a bit deeper into why Google has just announced one of its biggest releases in the last couple of months (or even years), what it offers, and what the consequences will be. In the coming weeks we will release a series of blog posts that go more in depth on the tools featured in Vertex AI.

A bit of history

As one of the driving forces in the world of machine learning, Google offers a plethora of tools such as AutoML, AI Platform, BigQuery ML, etc. Some of these solutions can feel a bit scattered, and it takes experience to know how everything should be connected in a coherent fashion. Back in November 2020, Google announced “AI Platform (unified)” with a small announcement in their release notes (see Figure 1).

Figure 1: Release note AI Platform (unified) from Google.

AI Platform (unified), well… unified AI Platform and AutoML. This was a first step in differentiating between calling a pre-trained model offered by Google and training a model customized to your own data. Currently, these two are still mixed together. For example, within the Natural Language AI product we have the Cloud Natural Language API (which offers general content classification, sentiment analysis and entity recognition on a fixed set of labels) and, separately, AutoML for text and document classification, sentiment analysis and entity extraction. With AI Platform (unified), the AutoML products (vision, NLP and tables) became part of the AI Platform product, bringing several tools for preparing your data and training models together.

Google was always a bit vague about the difference between AI Platform and AI Platform (unified), and over the last couple of months little birds whispered in our ears that Google was working on something big. As of this week, AI Platform (unified) is nowhere to be found: it has been replaced by Vertex AI.

If you try to find documentation on AI Platform tools, you’ll often be redirected to Vertex AI documentation. Try clicking the following link: https://cloud.google.com/ai-platform/

Other AI Platform documentation pages (such as this one: https://cloud.google.com/ai-platform/docs) clearly illustrate that AI Platform will be fully replaced with Vertex AI in the future (see Figure 2).

Figure 2: AI Platform documentation referring to Vertex AI.

This announcement is a big step for GCP and now we know why there seemed to be a veil of mystery around AI Platform (unified).

What does Vertex AI have to offer?

In short: ease of use. Vertex AI looks to be Google's platform for bundling the tools that get machine learning into production, following MLOps principles. According to Google:

“Vertex AI requires nearly 80% fewer lines of code to train a model versus competitive platforms, enabling data scientists and ML engineers across all levels of expertise the ability to implement Machine Learning Operations (MLOps) to efficiently build and manage ML projects throughout the entire development life cycle.”

Vertex AI will become the one-stop shop for everything AI-related on GCP. Below we give short explanations of the tools Vertex AI will feature and what they do. In-depth blog posts will follow in the coming weeks.

Figure 3: overview Vertex AI (source: TechCrunch).

As of this week, Vertex AI opens up a lot of tools on its platform:

  • Preparing and managing datasets for usage with AutoML.
  • AutoML to train machine learning models on your dataset.
  • Training custom models, such as TensorFlow, scikit-learn, PyTorch and XGBoost models.
  • Predictions (both online and batch) from AutoML models and custom-trained models.
  • Explainable AI to help non-technical users understand a model's outputs for classification and regression tasks, including the limitations of a machine learning model and where there may be potential (unfair) biases.
  • Forecasting with time series data for AutoML tabular models.*
  • Feature Store will provide a centralized repository for organizing, storing and serving ML features. This fixes the problems of underutilization of high-quality features and redundancy in feature creation: a centralized repository gives clear ownership of where features are created, where they are stored and where they should be called from. Feature stores offer low-latency online requests and high-throughput batch requests.*
  • TensorBoard to easily compare training runs across ML experiments, built on the open source TensorBoard library by Google's TensorFlow team.*
  • ML Metadata, built on the open source ML Metadata library developed by Google's TensorFlow Extended team. Gathering metadata from model pipelines is essential for the documentation and transparency of your machine learning models.*
  • Matching Engine to enable efficient vector similarity searches based on Anisotropic Vector Quantization.*
  • Vizier, a black-box optimization service to tune hyperparameters of complex machine learning models for rapid experimentation.*
  • Vertex Pipelines will replace the current AI Platform Pipelines to automate, monitor and govern your ML systems with an orchestrator. You will be able to run pipelines built with both the Kubeflow Pipelines SDK and TensorFlow Extended.*
  • Edge Manager to seamlessly deploy and monitor edge inferences (more information coming soon).
  • Neural Architecture Search to find the optimal architecture for your machine learning model application (more information coming soon).

*in preview
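To make the Feature Store idea above more concrete, here is a toy, purely illustrative sketch in plain Python. This is not the Vertex AI Feature Store API; all class and method names are our own invention, meant only to show the pattern of one central place that ingests features and serves them both online (latest values, low latency) and in batch (full history, for training):

```python
from collections import defaultdict

class FeatureStore:
    """Toy in-memory feature store: one central place to register,
    ingest and serve features, so teams don't rebuild them ad hoc."""

    def __init__(self):
        # entity_id -> {feature_name: latest value}, for online serving
        self._online = defaultdict(dict)
        # full history of (timestamp, entity_id, features), for batch retrieval
        self._history = []

    def ingest(self, entity_id, features, timestamp):
        """Write a batch of feature values for one entity."""
        self._online[entity_id].update(features)
        self._history.append((timestamp, entity_id, dict(features)))

    def get_online(self, entity_id, feature_names):
        """Low-latency lookup of the latest feature values (serving path)."""
        row = self._online[entity_id]
        return {name: row.get(name) for name in feature_names}

    def get_batch(self):
        """High-throughput dump of all historical rows (training path)."""
        return list(self._history)

store = FeatureStore()
store.ingest("user_42", {"avg_basket_value": 31.5, "n_orders": 7}, timestamp=1)
store.ingest("user_42", {"avg_basket_value": 33.0}, timestamp=2)
# Online serving returns the latest avg_basket_value, while n_orders is kept
print(store.get_online("user_42", ["avg_basket_value", "n_orders"]))
```

The point of the pattern is the single write path: because every team ingests through the same store, the serving and training code read the exact same feature definitions, which is what removes the redundancy and ownership problems mentioned above.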
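Similarly, to illustrate what "black-box optimization" means in the Vizier bullet: the optimizer only sees parameters going in and a score coming out, never the model internals. The sketch below is not the Vizier API (Vizier uses much smarter search strategies); it is a minimal random-search stand-in over a hypothetical two-parameter space, with a toy objective playing the role of a model's validation loss:

```python
import random

def objective(params):
    """Black-box function to minimize. The optimizer never looks inside;
    here it is a toy stand-in for a model's validation loss, with its
    minimum at lr=0.1, depth=5."""
    return (params["lr"] - 0.1) ** 2 + (params["depth"] - 5) ** 2

def random_search(space, n_trials, seed=0):
    """Minimal black-box optimizer: sample parameters, evaluate, keep the best."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        params = {
            "lr": rng.uniform(*space["lr"]),
            "depth": rng.randint(*space["depth"]),
        }
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

space = {"lr": (0.001, 1.0), "depth": (1, 10)}
best, score = random_search(space, n_trials=200)
print(best, score)  # best params found and their (low) objective value
```

A service like Vizier wraps this loop behind an API and replaces the random sampling with informed strategies, but the contract is the same: you expose a parameter space and a score, and it suggests trials.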

With these tools, Google wants to double down on accelerating time to value by managing models end-to-end with MLOps principles and having a central location for everything AI-related. Aside from that, they want to cater to all levels of expertise, not just the engineers who know the ins and outs of GCP. Readily available ML APIs and AutoML are well organized and lower the barrier for companies to experiment with machine learning. Another large focus is Explainable AI, which lowers the threshold to understanding how the models work and what their limitations are.

Over the next couple of months we expect to see more tools roll out from Google, becoming either generally available or available in preview.

What does this mean for you?

All of this means implementing MLOps is easier and more accessible for machine learning teams. It means you don't have to know GCP by heart to find all the MLOps tooling scattered around the GCP Console. It's a quality-of-life improvement for data science teams, and it adds some powerful new tools to the newly unified arsenal.

Many of these services are fully managed but built on open source frameworks (Vertex Pipelines is built on top of Kubeflow Pipelines and TensorFlow Extended, Vertex ML Metadata is built on ML Metadata, etc.), making the leap feel more like a small hurdle than a six-meter pole vault. There is a high degree of cross-communication between the parts of Vertex, and while that is exactly what makes it so useful for data science teams, it is also not easily ported to another platform. However, it is too early for conclusions. At ML6 we'll be taking a closer look at these tools, and at the newly unified Vertex platform as a whole, in future blog posts. That way we can see for ourselves just where the portability/lock-in trade-off lies with this new GCP product.

On a more practical note, users of the current GCP ML stack will have a migration on their hands soon. From the Vertex docs:

Vertex AI supports all features and models available in AutoML and AI Platform. However, the client libraries do not support client integration backward compatibility. In other words, you must plan to migrate your resources to benefit from Vertex AI features.

As experienced ML engineers we can add that you also have to actually migrate, so maybe leave the planning to your PM and do the migration yourself.

All jokes aside, you can find the migration instructions here. While we haven't tested them out yet, they seem pretty straightforward, and Google has made a migration tool to help you. The main differences to review are new pricing, new IAM roles, and so on. The migration process is a copy operation, so all existing resources should keep working the old way, and you can shut them down once you know for sure that everything works with the new Vertex API.
