BigQuery Tag

Some background

When we started using Google BigQuery - almost five years ago now - it didn't have any partitioning functionality built into it. Heck, queries cost $20 p/TB back then too, for goodness' sake! To compensate for this lack of functionality and to save costs, we had to manually shard our tables using the well-known _YYYYMMDD suffix pattern, just like everyone else. This works fine, but it's quite cumbersome, has some hard limits, and your SQL can quickly become unruly. Then, about a year ago, the BigQuery team released ingestion-time partitioning. This allowed users to partition tables based on the load/arrival time of the data, or by explicitly stating the partition to load the data into (using the $ syntax). By using the _PARTITIONTIME pseudo-column, users were more easily able to craft their SQL, and save costs by only addressing the necessary partition(s). It was a major milestone for the BigQuery engineering team, and we were quick to adopt it into our data pipelines. We rejoiced and gave each other a lot of high-fives.
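To make the difference concrete, here's a small sketch (the project, dataset and table names are made up, not the tables from our pipelines) using the google-cloud-bigquery Python client. The first query prunes legacy _YYYYMMDD shards with a wildcard table, the second addresses only the needed partitions of an ingestion-time partitioned table via _PARTITIONTIME:

```python
# Illustrative sketch only -- names are assumptions. Requires the
# google-cloud-bigquery client library.
from google.cloud import bigquery

client = bigquery.Client()

# Old approach: manually sharded tables (events_YYYYMMDD), pruned with a
# wildcard table and the _TABLE_SUFFIX pseudo-column.
sharded_sql = """
SELECT COUNT(*) AS row_count
FROM `my-project.my_dataset.events_*`
WHERE _TABLE_SUFFIX BETWEEN '20180101' AND '20180107'
"""

# Newer approach: a single ingestion-time partitioned table, scanning only
# the partitions selected by the _PARTITIONTIME pseudo-column.
partitioned_sql = """
SELECT COUNT(*) AS row_count
FROM `my-project.my_dataset.events`
WHERE _PARTITIONTIME BETWEEN TIMESTAMP('2018-01-01') AND TIMESTAMP('2018-01-07')
"""

# (Loads can also target one partition directly, e.g. a destination of
# my_dataset.events$20180101 -- the $ syntax mentioned above.)
for sql in (sharded_sql, partitioned_sql):
    result = client.query(sql).result()  # both bill only for the data scanned
    print(next(iter(result)).row_count)
```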
Shine’s TEL group was established in 2011 with the aim of publicising the great technical work that Shine does and raising the company’s profile as a technical thought-leader in the community through blogs, local meetup talks, and conference presentations. Every now and then (it started off as being monthly, but that was too much work), we curate all the noteworthy things that Shiners have been up to and publish a newsletter. Read on for this month's edition.

Shine's good friend Felipe Hoffa from Google was in Melbourne recently, and he took the time to catch up with our resident Google Developer Expert, Graham Polley. But, instead of just sitting down over a boring old coffee, they decided to take an iconic tram ride...

Here in Australia, we do a lot of work on Google Cloud Platform for one of the country’s largest ISPs, Telstra. Most of that work involves building data pipelines and running analytics off the back of them for their Media business unit. As you can well imagine, they generate a huge amount of data on a daily basis. We use tools like BigQuery, Cloud Dataflow and Data Studio to wrangle, manage, and understand that data. On one such project for Telstra, we saw an opportunity to delete three code repositories and finally rid ourselves of some of the headaches associated with maintaining those applications, all the while saving money on operational costs. We were able to replace the system comprising these repos with two new Google Cloud Platform services. In this blog post, I’ll introduce you to those new services that Google have spun up, and how we were able to use them to replace our legacy applications. Who doesn’t like a good spring clean, huh?

Intro

Post update: My good friend Lak over at Google has come up with a fifth option! He suggests using Cloud Dataprep to achieve the same. You can read his blog post about that over here. I had thought about using Dataprep, but because it actually spins up a Dataflow job under the hood, I decided to omit it from my list. That's because it will take a lot longer to run (the cluster needs to spin up, and it issues export and import commands to BigQuery) than issuing a query job directly to the BigQuery API. Also, there are extra costs involved with this approach (the query itself, the Dataflow job, and a Dataprep surcharge - ouch!). But, as Lak pointed out, this would be a good solution if you want to transform your data instead of issuing a pure SQL request. However, I'd argue that can be done directly in SQL too ;-)

Not so long ago, I wrote a blog post about how you can use Google Apps Script to schedule BigQuery jobs. You can find that post right here. Go have a read of it now. I promise you'll enjoy it. The post got quite a bit of attention, and I was surprised that people actually take the time out to read my drivel. It's clear that BigQuery's popularity is growing fast. I'm seeing more content popping up in my feeds than ever before (mostly from me, because that's all I really blog about). However, as awesome as BigQuery is, one glaring gap in its arsenal of weapons is the lack of a built-in job scheduler, or an easy way to do it outside of BigQuery. That said, I'm pretty sure that the boffins over in Googley-woogley-world are currently working on remedying that - by either adding schedulers to Cloud Functions, or by baking something directly into the BigQuery API itself. Or maybe both? Who knows!
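The post itself walks through the Apps Script approach, so purely as a hedged illustration of what the scheduled job boils down to, here's a Python sketch of issuing that query job directly to the BigQuery API and landing the result in a destination table. This is not the code from the post, and the project, dataset and table names are placeholders:

```python
# Hedged sketch only: the kind of query job a scheduler (Apps Script trigger,
# cron, etc.) would kick off. Names are placeholders; requires a reasonably
# recent google-cloud-bigquery client library.
from google.cloud import bigquery


def run_scheduled_query():
    client = bigquery.Client()
    job_config = bigquery.QueryJobConfig(
        destination="my-project.reporting.daily_summary",
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )
    sql = """
    SELECT DATE(_PARTITIONTIME) AS day, COUNT(*) AS events
    FROM `my-project.my_dataset.events`
    WHERE _PARTITIONTIME = TIMESTAMP(CURRENT_DATE())
    GROUP BY day
    """
    # A plain query job -- no cluster to spin up, no export/import round trip.
    job = client.query(sql, job_config=job_config)
    job.result()  # wait for completion


if __name__ == "__main__":
    run_scheduled_query()
```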
[Image: Taumata_Racer.jpg - Waterslide analogy. One input, multiple outputs. Each slide represents a date partition in one table.]

Do you have some data that needs to be fed into BigQuery, but the output must be split between multiple destination tables? Using a Cloud Dataflow pipeline, you could define some side outputs for each destination table you need, but what happens when you want to write to date partitions in a table and you're not sure what partitions you need to write to in advance? It gets a little messy. That was the problem I encountered, but we have a solution.
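For what it's worth, here's a minimal sketch of how this can look with today's Apache Beam Python SDK (it is not the pipeline from the post, and all project, bucket, dataset and field names are invented): WriteToBigQuery accepts a callable for the table, so each element can route itself to its own $YYYYMMDD date partition instead of needing a separate side output per partition.

```python
# Hedged sketch, not the original pipeline. Assumes the destination table
# already exists as an ingestion-time partitioned table.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_line(line):
    # Assumed CSV layout: YYYY-MM-DD,payload
    event_date, payload = line.split(",", 1)
    return {"event_date": event_date, "payload": payload}


def partitioned_table(row):
    # e.g. 'my-project:my_dataset.events$20180704' -- one destination
    # (date partition) per element, decided at pipeline run time.
    return "my-project:my_dataset.events$" + row["event_date"].replace("-", "")


with beam.Pipeline(options=PipelineOptions()) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/events.csv")
        | "Parse" >> beam.Map(parse_line)
        | "WriteToPartitions" >> beam.io.WriteToBigQuery(
            table=partitioned_table,
            schema="event_date:DATE,payload:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
        )
    )
```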