In parts 1 and 2 of this blog series, we saw how to implement an item-similarity model in TensorFlow and the intuition behind various recommender models. It's now time for a high-level view of a recommendation project on the Google Cloud Platform, covering all of the plumbing needed to get the web service up and available on the web. I will outline two possible architectures: one where we deploy and manage TensorFlow ourselves using Google Kubernetes Engine (GKE), and another using the fully managed Cloud Machine Learning Engine (MLE). You'll also see how to communicate with the ML Engine modules and how to configure your computational clusters.
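As a taste of the cluster configuration we'll cover later, Cloud ML Engine lets you describe your training cluster declaratively. The sketch below is a hypothetical `config.yaml` (the machine types and counts are illustrative assumptions, not values from this series) that requests a custom cluster of one master, two workers, and a parameter server, submitted with the `gcloud ml-engine jobs submit training` command:

```yaml
# config.yaml -- hypothetical cluster shape for a training job;
# machine types and counts here are illustrative, adjust to your workload.
trainingInput:
  scaleTier: CUSTOM          # use an explicit cluster spec instead of a preset tier
  masterType: standard_gpu   # machine type for the master replica
  workerType: standard_gpu   # machine type for each worker replica
  workerCount: 2             # number of worker replicas
  parameterServerType: standard   # machine type for parameter servers
  parameterServerCount: 1         # number of parameter server replicas
```

You would then pass this file to the submit command, e.g. `gcloud ml-engine jobs submit training my_job --config config.yaml ...` (job name and remaining flags depend on your project). We'll return to these options in detail when we configure the clusters.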