Deep Learning in Production with TFX
March 30, 2021


Artificial intelligence has become omnipresent in recent years: state-of-the-art models are open-sourced on a daily basis and companies are fighting for the best data scientists and machine learning engineers, all with one goal in mind: creating tremendous value by leveraging the power of AI. Sounds great, but reality is harsh. Most models never make it to production, and even those that do often suffer from inconsistencies along the way that prevent them from doing the one thing they were supposed to do: generate value. Let’s solve that, once and for all, by unleashing the power of TensorFlow Extended to build a production-ready sentiment analysis model!

One phrase that is often quoted in the field of AI is ‘Garbage in, garbage out’. It embodies a key task in every deep learning project: feature engineering. This process comes with many challenges, which we will briefly discuss in this article, and, more importantly, we will show how TensorFlow Transform can help overcome them. In ‘Deep Learning in Production with TFX (Part 1)’ we saw how to ingest data, generate statistics and validate our data. Now that our data is validated, it is time to move on to the next step. Get ready for one of the most exciting components of the TFX ecosystem: TensorFlow Transform. Can’t wait to try this tutorial out yourself? → Colab.

The added value of TensorFlow Transform

Before going into detail about how to use TF Transform, it is important to understand what TensorFlow Transform is and which challenges it tries to address. In particular, we’ll be focusing on challenges related to:

  • Generating features which are based on the entire dataset
  • Training/Serving skew

A Visual Exploration of TensorFlow Transform

Figure 1 provides a high-level overview of how TensorFlow Transform is positioned with respect to the training and serving stages. Under the hood, it relies heavily on Apache Beam to process massive amounts of data. Let’s have a closer look. Imagine we have a dataset of raw inputs (images, text, structured data, you name it). These inputs are analysed using a full pass over the entire dataset and then transformed based on the results. For example, if we want to normalise the dataset, we first obtain the minimum and maximum values over the entire dataset (analyse) and subsequently transform the raw inputs using the values obtained in the analysis step (transform). Finally, we serialise the transformation graph so that we can include it as part of the serving model.

Figure 1: A visual representation of the inner workings of TensorFlow Transform
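The normalisation example above can be sketched in a few lines of plain Python. In TF Transform itself this is a one-liner (`tft.scale_by_min_max`) executed as a Beam job; the two helper functions below are purely illustrative, showing the analyse/transform split without any TFX dependency.

```python
# Minimal sketch of the analyse/transform pattern behind min-max
# normalisation (pure Python; TF Transform does this at scale via Beam).

def analyse(dataset):
    """Full pass over the dataset: collect the min and max values."""
    return {"min": min(dataset), "max": max(dataset)}

def transform(value, stats):
    """Instance-level op: scale a single raw input to [0, 1]."""
    return (value - stats["min"]) / (stats["max"] - stats["min"])

raw = [10.0, 20.0, 50.0, 40.0]
stats = analyse(raw)                            # analyse: one full pass
features = [transform(v, stats) for v in raw]   # transform: per element
```

Note that `stats` is computed once over the whole dataset and then reused for every individual example, which is exactly why the analyse step cannot be expressed as a per-instance operation.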

Problem: Converting raw inputs into features requires dataset-level knowledge.

Consider the scenario where you’re working with unstructured data and want to predict the sentiment of a particular review or tweet; in fact, that is exactly what we will be working with. As machine learning models can’t work directly with text, it is important to tokenize the raw inputs (i.e. words). However, there is one slight problem: tokenization requires a mapping between words and indices, and hence you need to construct a vocabulary over the entire (training) dataset. TensorFlow Transform simplifies this process by splitting it into two distinct parts: analyse and transform. During the analyse step, a full pass over the dataset collects the required information (in our case, the entire vocabulary), while the transform step converts the raw inputs into features. The same procedure can be used for normalisation, standardisation and much more.
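The vocabulary case follows the same two-step pattern. In TF Transform this is handled by `tft.compute_and_apply_vocabulary`; the helper functions below are a hypothetical plain-Python sketch of what that analyse/transform pair does conceptually.

```python
# Sketch of vocabulary-based tokenisation as an analyse/transform pair.
# Illustrative only; TF Transform's tft.compute_and_apply_vocabulary
# performs this over the full dataset inside a Beam pipeline.

def analyse_vocabulary(texts):
    """Full pass: build a word -> index mapping over the whole dataset."""
    vocab = {}
    for text in texts:
        for word in text.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

def transform_text(text, vocab, oov_index=-1):
    """Per-instance op: map each word to its index; unseen words get OOV."""
    return [vocab.get(word, oov_index) for word in text.lower().split()]

reviews = ["great movie", "terrible movie"]
vocab = analyse_vocabulary(reviews)                 # dataset-level pass
tokens = [transform_text(r, vocab) for r in reviews]  # per-instance mapping
```

The out-of-vocabulary index matters at serving time: a word never seen during the analyse step must still map to a valid feature value.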

Problem: Performance drop when productionizing an ML model due to training/serving skew

Your data science team has been working extremely hard for several months and trained a new state-of-the-art model to classify sentiment from movie reviews. They went through the entire cycle of exploratory data analysis, feature engineering, training, hyperparameter tuning and even did some testing to ensure that their model would be no threat to humanity. Finally, they are ready to deploy it. But after deployment, they suddenly see that the predictions don’t make any sense. What could be wrong? One potential cause is that the raw inputs were not processed in exactly the same way as during training. Consequently, the results turn out to be absolute garbage. One could say: just make sure that your serving preprocessing matches the one used during training, but this can be notoriously difficult. TensorFlow Transform removes the need to preprocess inputs before sending them to the model by enabling you to export your preprocessing steps as part of the model (isn’t that just wonderful?!).
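The idea of exporting preprocessing as part of the model can be illustrated with a small, hypothetical sketch: the statistics frozen during the analyse step travel with the model, so serving can never drift from training. TF Transform achieves this by embedding the transform graph in the exported SavedModel; the plain-Python class below only mimics that idea and is not the actual TFX mechanism.

```python
# Conceptual sketch: bundle the fitted preprocessing with the model so
# serving applies exactly the transform used during training.

class ServableModel:
    def __init__(self, stats, predict_fn):
        self.stats = stats            # frozen at training time
        self.predict_fn = predict_fn  # the trained model

    def __call__(self, raw_value):
        # The same min-max scaling as used during training, applied
        # inside the model: callers only ever send raw inputs.
        scaled = (raw_value - self.stats["min"]) / (
            self.stats["max"] - self.stats["min"])
        return self.predict_fn(scaled)

model = ServableModel(
    stats={"min": 0.0, "max": 100.0},
    predict_fn=lambda x: 1 if x > 0.5 else 0,  # stand-in classifier
)
model(75.0)  # raw input in; preprocessing happens inside the model
```

Because the caller never reimplements the preprocessing, there is simply no second code path that could fall out of sync, which is precisely what eliminates training/serving skew.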

Tensorflow Transform Hands-on

Great. We have seen some of the challenges of generating features (during training and inference) and how TensorFlow Transform can act as an enabler to overcome them. Time to get our hands dirty in the following two sections: TF Transform using the TFX Transform component and TF Transform with Apache Beam.

  • TF Transform using TFX Transform Component: Check out our Medium blogpost for the full hands-on explanation.
  • TF Transform with Apache Beam: Check out our Medium blogpost for the full hands-on explanation.


We discussed some of the challenges related to feature engineering and model serving, saw how to use the TFX Transform component as part of our pipeline, and finally implemented our own Apache Beam pipeline in which we analysed and transformed our data. We now know that the real added value of TensorFlow Extended lies in building production-ready models, such as our sentiment analysis model.

Are you interested in TensorFlow Extended? Kubeflow? Beam? Contact the machine learning engineers of our ML in Production chapter!
