The best code is no code! Using Google Cloud’s new automated services.

Here in Australia, we do a lot of work on Google Cloud Platform for one of the country’s largest ISPs, Telstra. Most of that work involves building data pipelines and running analytics off the back of them for their Media business unit. As you can well imagine, they generate a huge amount of data on a daily basis. We use tools like BigQuery, Cloud Dataflow and Data Studio to wrangle, manage, and understand that data.

On one such project for Telstra, we saw an opportunity to delete three code repositories and finally rid ourselves of some of the headaches associated with maintaining those applications, all while saving on operational costs.

We were able to replace the system comprising these repos with two new Google Cloud Platform services.

In this blog post, I’ll introduce you to those new services that Google have spun up, and how we were able to use them to replace our legacy applications. Who doesn’t like a good spring clean, huh?

Scheduling BigQuery jobs: this time using Cloud Storage & Cloud Functions

Intro

Post update: My good friend Lak over at Google has come up with a fifth option! He suggests using Cloud Dataprep to achieve the same. You can read his blog post about that over here. I had thought about using Dataprep, but because it actually spins up a Dataflow job under the hood, I decided to omit it from my list. It takes a lot longer to run (the cluster needs to spin up, and it issues export and import commands to BigQuery) than issuing a query job directly to the BigQuery API. Also, there are extra costs involved with this approach (the query itself, the Dataflow job, and a Dataprep surcharge – ouch!). But, as Lak pointed out, this would be a good solution if you want to transform your data instead of issuing a pure SQL request. However, I’d argue that can be done directly in SQL too 😉

Not so long ago, I wrote a blog post about how you can use Google Apps Script to schedule BigQuery jobs. You can find that post right here. Go have a read of it now. I promise you’ll enjoy it. The post got quite a bit of attention, and I was genuinely surprised that people actually take the time to read my drivel.

It’s clear that BigQuery’s popularity is growing fast. I’m seeing more content popping up in my feeds than ever before (mostly from me, because that’s all I really blog about). However, as awesome as BigQuery is, one glaring gap in its arsenal is the lack of a built-in job scheduler, or any easy way to schedule jobs outside of BigQuery.

That said, I’m pretty sure the boffins over in Googley-woogley-world are currently working on remedying that, either by adding schedulers to Cloud Functions or by baking something directly into the BigQuery API itself. Or maybe both? Who knows!
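The full post digs into how Cloud Storage and Cloud Functions tie it all together; at its heart, though, the trick is simply submitting a query job to the BigQuery API on a schedule. As a rough illustration only (sketched here with the Java client library, with a made-up query and destination table), that core looks something like this:

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableId;

// The guts of a scheduled rebuild: submit a query job whose results
// overwrite a destination table. Query, dataset and table names below
// are hypothetical, for illustration only.
public class ScheduledQuery {
  public static void main(String[] args) throws InterruptedException {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    QueryJobConfiguration config =
        QueryJobConfiguration.newBuilder(
                "SELECT user_id, COUNT(*) AS events "
                    + "FROM `my_dataset.raw_events` GROUP BY user_id")
            .setDestinationTable(TableId.of("my_dataset", "daily_rollup"))
            .setWriteDisposition(JobInfo.WriteDisposition.WRITE_TRUNCATE)
            .setUseLegacySql(false)
            .build();

    // Submit the query job and block until it finishes.
    bigquery.create(JobInfo.of(config)).waitFor();
  }
}
```

Wrap that in whatever triggers your scheduler of choice gives you (in our case, a file landing in Cloud Storage kicking off a Cloud Function) and the daily rebuild takes care of itself.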

Fun with Serializable Functions and Dynamic Destinations in Cloud Dataflow

Waterslide analogy. One input, multiple outputs. Each slide represents a date partition in one table.

Do you have some data that needs to be fed into BigQuery but the output must be split between multiple destination tables? Using a Cloud Dataflow pipeline, you could define some side outputs for each destination table you need, but what happens when you want to write to date partitions in a table and you’re not sure what partitions you need to write to in advance? It gets a little messy. That was the problem I encountered, but we have a solution.
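The full post walks through the details. In essence, instead of hard-coding a table, BigQueryIO in the Beam 2.x Java SDK can take a SerializableFunction that computes the destination for each element, which lets you target date partitions with the `table$YYYYMMDD` decorator without knowing the partitions up front. A rough sketch of the idea (the project, dataset, table and field names are made up):

```java
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;
import org.apache.beam.sdk.transforms.SerializableFunction;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.ValueInSingleWindow;

public class DynamicPartitionWrite {
  // Attach this to a PCollection<TableRow> somewhere in your pipeline.
  static void writeToDatePartitions(PCollection<TableRow> rows) {
    rows.apply(
        "WriteToDatePartitions",
        BigQueryIO.writeTableRows()
            .to(
                (SerializableFunction<ValueInSingleWindow<TableRow>, TableDestination>)
                    input -> {
                      TableRow row = input.getValue();
                      // Hypothetical field holding the row's date as "yyyyMMdd".
                      String day = (String) row.get("event_date");
                      // The "$" decorator routes the row to that date partition.
                      return new TableDestination(
                          "my-project:my_dataset.events$" + day, null);
                    })
            // Assumes the partitioned table already exists with the right schema.
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
  }
}
```

One input, any number of partitions out the other end, and no side outputs to wrangle.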

My favourite talks from YOW! 2017 Melbourne

No food reviews here I’m afraid

This year I was incredibly lucky to score a coveted ticket to YOW! in beautiful Melbourne. I was also asked to be a track host for a couple of sessions, so that was quite an honour too. This post is a whirlwind wrap-up of the conference, and only includes my favourite talks from the two-day event. If you’re hoping for detailed reviews of the coffee/food/WiFi/venue, then you’ll be greatly disappointed (it was all great BTW).

re:Invent 2017: Day 2

The last time I was fortunate enough to attend AWS’s global conference, re:Invent, was three years ago in 2014. Back then there were 14,000 delegates and the conference spanned just two Las Vegas hotels. Lambda was announced during Werner Vogels’ keynote, and it seemed that the most in-demand sessions had “Docker” in the title.

In just three years the conference has tripled in size, with 43,000 delegates this year spread across a campus of six Las Vegas hotels. Although it’s not one of the biggest conferences held in Vegas, it’s obviously a significant logistical challenge. After some hiccups on the first day with the inter-venue shuttles and a venue running out of food, everything seemed to settle down and run smoothly from the start of the second day. Whether the improvement was down to the human hivemind learning or to some machine learning algorithms being retrained is up for debate, but it was almost certainly a combination of the two. No, actually, the transport still isn’t good, and Uber is key to success.

re:Invent 2017: Day 1

What happens in Vegas…

The old adage tells us that what happens in Vegas, stays in Vegas. But for one week a year the reverse becomes true. Thousands of cloud enthusiasts descend on the city of sin and come away filled with renewed vigour to play with, and ultimately implement, the latest toys from Amazon Web Services.

This year I’ve been lucky enough to represent Shine by travelling to Las Vegas and participating in this prestigious event. In this post I’ll be recapping some of the things I’ve seen. I’ll add more as the week goes on, with my thoughts and reflections as well as the latest announcements from AWS.

Scheduling BigQuery jobs using Google Apps Script

Do you recoil in horror at the thought of running yet another mundane SQL script just so a table is automatically rebuilt for you each day in BigQuery? Can you barely remember your name first thing in the morning, let alone remember to click “Run Query” so that your boss gets the latest data refreshed in his fancy Data Studio charts, and then takes all the credit for your hard work?

Well, fear not my fellow BigQuery’ians. There’s a solution to this madness.

It’s simple.

It’s quick.

Yes, it’s Google Apps Script to the rescue.

Disclaimer: all credit for this goes to the one and only Felipe Hoffa. He ‘da man!

BigQuery & new users – the top “WTF!?” moments

“What the Fudge?”

I use Google BigQuery a lot. On a daily basis I run dozens of queries, use it to build massively scalable data pipelines for our clients, and regularly help new users navigate it for the first time. Suffice it to say I’m somewhat accustomed to its little quirks. Unfortunately, the same can’t be said for the new users, who are commonly left scratching their heads and shouting “What the fudge!?” at their monitors.

Here are the top three WTFs that I regularly hear from new BigQuery users:

A post Google I/O 2017 conversation with Google Home

OK Google, generate a clickbait title for my Google I/O 2017 blog post

I’ve generated a title, Gareth. What would you like to add next?

OK Google, I’m a bit jet lagged – remind me what I saw at Google I/O 2017

I would love to help, Gareth, but I’m going to need a little more information. Would you like that information in chronological order, or grouped by topic?

Beam me up Google – porting your Dataflow applications to 2.x

Will this post interest me?

If you use (or intend to use) Google Cloud Dataflow, if you’ve heard about Apache Beam, or if you’re simply bored at work today and looking to waste some time, then yes, please do read on. This short post will cover why our team finally took the plunge and started porting some of our Dataflow applications (using the 1.x Java SDK) to the new Apache Beam model (2.x Java SDK). Spoiler – it has something to do with this. It will also highlight the biggest changes we needed to make when making the switch (pretty much just fixing some compile errors).
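For a flavour of what those compile errors look like: the bulk of the mechanical work is package renames (com.google.cloud.dataflow.sdk.* becomes org.apache.beam.sdk.*) and the new DoFn annotation model. A rough before-and-after sketch (not from our actual codebase):

```java
// Dataflow 1.x Java SDK: override the abstract processElement method.
import com.google.cloud.dataflow.sdk.transforms.DoFn;

class ExtractWords extends DoFn<String, String> {
  @Override
  public void processElement(ProcessContext c) {
    for (String word : c.element().split("\\s+")) {
      c.output(word);
    }
  }
}
```

```java
// Apache Beam 2.x Java SDK: org.apache.beam packages, and the element
// processing method is now discovered via the @ProcessElement annotation.
import org.apache.beam.sdk.transforms.DoFn;

class ExtractWords extends DoFn<String, String> {
  @ProcessElement
  public void processElement(ProcessContext c) {
    for (String word : c.element().split("\\s+")) {
      c.output(word);
    }
  }
}
```

Other typical fixes were in the same spirit, like PTransform#apply being renamed to expand and pointing the Maven dependencies at the 2.x artifacts.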