Forecasting weather with BigQuery ML

Weather forecasting is a complicated process. If you live somewhere with weather as changeable as ours in Melbourne, you learn to always allow some chance of the weather turning out different from what the websites say.

The weather is typically forecast by first gathering a lot of information about the atmosphere (temperature, humidity, wind and so on), and then relying on our knowledge of atmospheric physics to model how conditions will change in the near future. But because our physical models are imperfect and the atmosphere itself is chaotic, those forecasts can be unreliable.

Instead of taking that common approach, here we explore the idea of entrusting the job to a machine learning model. We expect the model to look at the historical data and learn how the temperature is likely to change in the near future, let’s say tomorrow.
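
To make that concrete, here’s a minimal sketch of what training and querying such a model could look like with BigQuery ML via the Python client. The `weather.history` table, its columns, and the choice of features are all hypothetical, purely for illustration.

```python
# A hypothetical `weather.history` table with `obs_date` and `max_temp`
# columns; we train a linear regression to predict a day's maximum
# temperature from simple features derived from the history.
from google.cloud import bigquery

client = bigquery.Client()

client.query("""
    CREATE OR REPLACE MODEL `weather.temp_model`
    OPTIONS (model_type = 'linear_reg', input_label_cols = ['max_temp']) AS
    SELECT day_of_year, prev_max_temp, max_temp
    FROM (
      SELECT
        EXTRACT(DAYOFYEAR FROM obs_date) AS day_of_year,
        LAG(max_temp) OVER (ORDER BY obs_date) AS prev_max_temp,
        max_temp
      FROM `weather.history`
    )
    WHERE prev_max_temp IS NOT NULL
""").result()

# Predict tomorrow's maximum temperature, feeding today's observation in
# as the "previous day" feature.
rows = client.query("""
    SELECT predicted_max_temp
    FROM ML.PREDICT(
      MODEL `weather.temp_model`,
      (SELECT 32 AS day_of_year, 24.1 AS prev_max_temp))
""").result()
for row in rows:
    print(row.predicted_max_temp)
```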

Migrating Blobstore between Projects

What is Blobstore? What is a Blob?

Like horse-drawn carriages, video rental stores, and scurvy, Blobstore is a leftover from an earlier time. It is a storage option on Google Cloud Platform (GCP) that stores objects called blobs and associates each blob with a key. It is used with Google App Engine services and allows applications to serve and fetch files over HTTP.

Blobstore has since been superseded by Google Cloud Storage (GCS), but you can still use the Blobstore API with the actual bytes stored in GCS, keeping the same upload behaviour and requiring only minimal changes to the app.
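
To show just how minimal those changes are, here’s a sketch of the classic Blobstore upload flow for a first-generation App Engine app (Python 2, webapp2), with a hypothetical bucket name. The `gs_bucket_name` argument is what points the uploads at GCS instead of legacy Blobstore storage.

```python
from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers
import webapp2


class UploadFormHandler(webapp2.RequestHandler):
    def get(self):
        # Files posted to this one-time URL are written to the GCS bucket.
        upload_url = blobstore.create_upload_url(
            '/upload', gs_bucket_name='my-app-blobs')
        self.response.write(
            '<form action="%s" method="POST" enctype="multipart/form-data">'
            '<input type="file" name="file"><input type="submit">'
            '</form>' % upload_url)


class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        # The blob key works with the Blobstore API exactly as before,
        # even though the bytes now live in GCS.
        blob_info = self.get_uploads('file')[0]
        self.redirect('/serve/%s' % blob_info.key())


app = webapp2.WSGIApplication([
    ('/', UploadFormHandler),
    ('/upload', UploadHandler),
])
```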

Unlike most other GCP services, migrating Blobstore from one project to another is not straightforward. In this blog, we will investigate how to do it.

Using gcloud Formats and Projections in the Google Cloud Platform

Recently, I was hunting around the internet, looking for an easy way to extract an attribute of a GCP resource so I could cross-reference it while creating another resource in gcloud. I had reserved a static IP address and wanted to use it as the external address of a VM instance. I learnt that such a simple operation was indeed tricky, at least up until some time ago. Here’s my journey, and welcome aboard!
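
To spoil the ending slightly: the trick is gcloud’s `--format` flag with a `value(...)` projection, which extracts a single attribute of a resource. Here’s a minimal sketch of the idea, driven from Python so the two steps read as one script; the address, instance, region and zone names are all hypothetical.

```python
# Hypothetical resource names; the point is the --format projection,
# which pulls out just the `address` field of the reserved static IP.
import subprocess

ip = subprocess.check_output([
    'gcloud', 'compute', 'addresses', 'describe', 'my-static-ip',
    '--region=us-central1', '--format=value(address)',
]).decode().strip()

# Feed the extracted IP to the instance-creation command as the VM's
# external address.
subprocess.check_call([
    'gcloud', 'compute', 'instances', 'create', 'my-vm',
    '--zone=us-central1-a', '--address=' + ip,
])
```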

Introducing column-based partitioning in BigQuery

Some background

When we started using Google BigQuery – almost five years ago now – it didn’t have any partitioning functionality built into it.  Heck, queries cost $20 p/TB back then too for goodness’ sake!  To compensate for this lack of functionality and to save costs, we had to manually shard our tables using the well-known _YYYYMMDD suffix pattern, just like everyone else.  This works fine, but it’s quite cumbersome, has some hard limits, and your SQL can quickly become unruly.
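
For anyone who hasn’t suffered through it, here’s a minimal sketch of addressing a week’s worth of sharded tables from the Python client, assuming a hypothetical `mydataset.events_YYYYMMDD` table family and standard SQL’s wildcard syntax.

```python
from google.cloud import bigquery

client = bigquery.Client()

# The table wildcard matches every shard; _TABLE_SUFFIX filters it down
# to the YYYYMMDD range we actually want to scan.
rows = client.query("""
    SELECT COUNT(*) AS n
    FROM `mydataset.events_*`
    WHERE _TABLE_SUFFIX BETWEEN '20180101' AND '20180107'
""").result()
for row in rows:
    print(row.n)
```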

Then, about a year ago, the BigQuery team released ingestion-time partitioning.  This allowed users to partition tables based on the load/arrival time of the data, or by explicitly stating the partition to load the data into (using the $ syntax).  By using the _PARTITIONTIME pseudo-column, users could more easily craft their SQL, and save costs by only addressing the necessary partition(s).  It was a major milestone for the BigQuery engineering team, and we were quick to adopt it into our data pipelines.  We rejoiced and gave each other a lot of high-fives.
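
Here’s a minimal sketch of both halves of that workflow, again with hypothetical table and bucket names: loading into an explicit partition with the $ syntax, then querying just that partition via _PARTITIONTIME.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Load a day's file into the 2018-01-01 partition via the "$" decorator.
job_config = bigquery.LoadJobConfig()
job_config.source_format = bigquery.SourceFormat.CSV
client.load_table_from_uri(
    'gs://my-bucket/events-20180101.csv',
    'mydataset.events$20180101',
    job_config=job_config,
).result()

# Query only that partition, so BigQuery scans (and bills) a single day.
rows = client.query("""
    SELECT user_id, action
    FROM `mydataset.events`
    WHERE _PARTITIONTIME = TIMESTAMP('2018-01-01')
""").result()
```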

Google Cloud Community Conference 2018

As a co-organizer for GDG Cloud Melbourne, I was recently invited to the Google Cloud Developer Community conference in Sunnyvale, California. It covered meetup organization strategies and product roadmaps, and was also a great opportunity to network with fellow organizers and Google Developer Experts (GDEs) from around the world.  Attending were 68 community organizers, 50 GDEs and 9 open source contributors from a total of 37 countries.

I would have to say it was the most social conference I have ever attended. There were a lot of opportunities to meet people from a wide range of backgrounds. I also got many valuable insights into how I could run our meetup better and make better use of Google products. In this post I’ll talk about what we got up to over the two days.

Using Google Cloud AutoML to classify poisonous Australian spiders

Warning: This post contains pictures of spiders (and Spiderman)!

Google’s Cloud AutoML Vision is a new machine learning service from Google Cloud that aims to make state-of-the-art machine learning techniques accessible to non-experts. In this post I will show you how I was able, in just a few hours, to create a custom image classifier that can distinguish between different types of poisonous Australian spiders. I didn’t have any data when I started, and it required only a very basic understanding of machine-learning concepts. I could probably show my Mum how to do it!
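
As a taste of the end result, here’s a minimal sketch of asking a trained AutoML Vision model to classify an image with the Python client; the project, model ID and image file are hypothetical.

```python
from google.cloud import automl_v1beta1 as automl

client = automl.PredictionServiceClient()
model = client.model_path('my-project', 'us-central1', 'MODEL_ID')

# Send an image and print each predicted spider label with its score.
with open('mystery-spider.jpg', 'rb') as image:
    payload = {'image': {'image_bytes': image.read()}}
response = client.predict(model, payload)
for annotation in response.payload:
    print(annotation.display_name, annotation.classification.score)
```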

Getting ya music recommendation groove on with Google Cloud Platform! Part 3

In parts 1 and 2 of this blog series, we saw the intuition behind various recommender models, and how to implement an item-similarity model in TensorFlow. It’s now time to take a high-level view of a recommendation project on the Google Cloud Platform. This will encompass all of our plumbing for the web service, so that it can be up and available on the web. I will outline two possible architectures – one where we deploy and manage TensorFlow ourselves using Google Kubernetes Engine (GKE), and the other using the fully-managed Cloud Machine Learning Engine (MLE).  You’ll also see how to communicate with ML Engine, and how to configure your computational clusters.
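
As a preview of the fully-managed option, here’s a minimal sketch of requesting online predictions from a model served on Cloud ML Engine; the project, model name and instance fields are hypothetical.

```python
from googleapiclient import discovery

# Build a client for the ML Engine REST API; this picks up
# application-default credentials.
service = discovery.build('ml', 'v1')
name = 'projects/my-project/models/recommender'

# Ask the deployed model for recommendations for a single user.
response = service.projects().predict(
    name=name,
    body={'instances': [{'user_id': 42}]},
).execute()
print(response['predictions'])
```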

Trams, Shiners and Googlers!

Shine’s good friend Felipe Hoffa from Google was in Melbourne recently, and he took the time to catch up with our resident Google Developer Expert, Graham Polley. But, instead of just sitting down over a boring old coffee, they decided to take an iconic tram ride around the city. To make it even more interesting, they tested out some awesome Google Cloud technologies by using their phones to spin up a Cloud Dataflow cluster of 50 VMs, and process over 10 billion records of data in under 10 minutes! Check out the video they recorded:

Getting ya music recommendation groove on with Google Cloud Platform! Part 2


In part 1, we learnt about recommendation engines in general, and looked at ways to implement a service using the Google Cloud Platform (GCP). In part 2 of the blog series, we get our hands dirty with the item-similarity model and its TensorFlow implementation.

This is the first technical blog of the series. Here, I dive deep into the data-processing step and the recommendation service, with some hints on how to optimise the code for real-time responses. By the end of this blog, you should know how to build a simple item-similarity recommender engine.

So let’s get the party started!
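
To set the scene, here’s a minimal sketch of the idea at the heart of the model: item-item cosine similarity, written in TensorFlow 1.x style. The ratings matrix and its shape are illustrative only.

```python
import numpy as np
import tensorflow as tf

# Rows are items, columns are users; entries are (implicit) ratings.
item_vectors = tf.placeholder(tf.float32, shape=[None, None])

# L2-normalise each item vector so that a dot product between two rows
# equals their cosine similarity.
normalized = tf.nn.l2_normalize(item_vectors, axis=1)

# similarities[i, j] is the cosine similarity between items i and j.
similarities = tf.matmul(normalized, normalized, transpose_b=True)

ratings = np.array([[5., 0., 3.],
                    [4., 0., 4.],
                    [0., 5., 1.]], dtype=np.float32)
with tf.Session() as sess:
    print(sess.run(similarities, feed_dict={item_vectors: ratings}))
```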