Google BigQuery Tag

Some background

When we started using Google BigQuery - almost five years ago now - it didn't have any partitioning functionality built into it.  Heck, queries cost $20 p/TB back then too, for goodness' sake!  To compensate for this lack of functionality, and to save costs, we had to manually shard our tables using the well-known _YYYYMMDD suffix pattern, just like everyone else.  This works fine, but it's quite cumbersome, has some hard limits, and your SQL can quickly become unruly.

Then, about a year ago, the BigQuery team released ingestion-time partitioning.  This allowed users to partition tables based on the load/arrival time of the data, or by explicitly stating the partition to load the data into (using the $ syntax).  By using the _PARTITIONTIME pseudo-column, users were able to craft their SQL more easily, and save costs by only addressing the necessary partition(s).  It was a major milestone for the BigQuery engineering team, and we were quick to adopt it into our data pipelines.  We rejoiced and gave each other a lot of high-fives.
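To make the difference concrete, here's a rough sketch of the two approaches in standard SQL (the project, dataset, table names, and dates are all made up for illustration):

    -- Manual sharding: one physical table per day, addressed with a
    -- wildcard and the _TABLE_SUFFIX pseudo-column.
    SELECT COUNT(*) AS events
    FROM `my-project.my_dataset.events_*`
    WHERE _TABLE_SUFFIX BETWEEN '20180101' AND '20180107';

    -- Ingestion-time partitioning: a single table, with BigQuery pruning
    -- partitions for you via the _PARTITIONTIME pseudo-column.
    SELECT COUNT(*) AS events
    FROM `my-project.my_dataset.events`
    WHERE _PARTITIONTIME BETWEEN TIMESTAMP('2018-01-01')
                             AND TIMESTAMP('2018-01-07');

    -- Loading into an explicit partition uses the $ decorator, e.g.:
    -- bq load 'my_dataset.events$20180101' gs://my-bucket/data.csv

In both cases only a week's worth of data gets scanned (and billed), but the partitioned version keeps everything in a single table with no wildcard gymnastics.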

Shine's good friend Felipe Hoffa from Google was in Melbourne recently, and he took the time to catch up with our resident Google Developer Expert, Graham Polley. But, instead of just sitting down over a boring old coffee, they decided to take an iconic tram ride...

Intro

Post update: My good friend Lak over at Google has come up with a fifth option! He suggests using Cloud Dataprep to achieve the same. You can read his blog post about that over here. I had thought about using Dataprep, but because it actually spins up a Dataflow job under the hood, I decided to omit it from my list. It will take a lot longer to run (the cluster needs to spin up, and it issues export and import commands to BigQuery) than issuing a query job directly to the BigQuery API. There are also extra costs involved with this approach (the query itself, the Dataflow job, and a Dataprep surcharge - ouch!). But, as Lak pointed out, it would be a good solution if you want to transform your data instead of issuing a pure SQL request. However, I'd argue that can be done directly in SQL too ;-)

Not so long ago, I wrote a blog post about how you can use Google Apps Script to schedule BigQuery jobs. You can find that post right here. Go have a read of it now. I promise you'll enjoy it. The post got quite a bit of attention, and I was genuinely surprised that people actually take the time out to read my drivel. It's clear that BigQuery's popularity is growing fast. I'm seeing more content popping up in my feeds than ever before (mostly from me, because that's all I really blog about). However, as awesome as BigQuery is, one glaring gap in its arsenal of weapons is the lack of a built-in job scheduler, or an easy way to do it outside of BigQuery. That said, I'm pretty sure that the boffins over in Googley-woogley-world are currently working on remedying that - by either adding schedulers to Cloud Functions, or by baking something directly into the BigQuery API itself. Or maybe both? Who knows!
Do you recoil in horror at the thought of running yet another mundane SQL script just so a table is automatically rebuilt for you each day in BigQuery? Can you barely remember your own name first thing in the morning, let alone remember to click "Run Query" so that your boss gets the latest data refreshed in his fancy Data Studio charts - and then takes all the credit for your hard work? Well, fear not, my fellow BigQuery'ians. There's a solution to this madness. It's simple. It's quick. Yes, it's Google Apps Script to the rescue. Disclaimer: all credit for this goes to the one and only Felipe Hoffa. He 'da man!
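If you don't want to click through to the full post, the gist of the trick looks something like this sketch (it assumes the BigQuery advanced Google service has been enabled for the script, and the project, dataset, table names, and query are all placeholders):

    // Rebuilds a destination table from a query. Assumes the BigQuery
    // advanced service is enabled for this Apps Script project.
    var PROJECT_ID = 'my-gcp-project'; // placeholder project id

    function rebuildDailyTable() {
      var job = {
        configuration: {
          query: {
            // Placeholder query - swap in whatever builds your table.
            query: 'SELECT CURRENT_TIMESTAMP() AS refreshed_at',
            useLegacySql: false,
            destinationTable: {
              projectId: PROJECT_ID,
              datasetId: 'my_dataset',
              tableId: 'daily_summary'
            },
            // Overwrite the destination table on every run.
            writeDisposition: 'WRITE_TRUNCATE'
          }
        }
      };
      BigQuery.Jobs.insert(job, PROJECT_ID);
    }

    // Run this once by hand: it creates a time-driven trigger that calls
    // rebuildDailyTable() every morning, so nobody has to click "Run Query".
    function scheduleIt() {
      ScriptApp.newTrigger('rebuildDailyTable')
          .timeBased()
          .everyDays(1)
          .atHour(6)
          .create();
    }

The time-driven trigger is doing the real work here - it's effectively the scheduler that BigQuery itself doesn't give you.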

"What the Fudge?"

I use Google BigQuery a lot. On a daily basis I run dozens of queries, use it to build massively scalable data pipelines for our clients, and regularly help new users navigate it for the first time. Suffice it to say, I'm somewhat accustomed to its little quirks. Unfortunately, the same can't be said for the new users, who are commonly left scratching their heads and shouting "What the fudge!?" at their monitors. Here are the top three WTFs that I regularly hear from new BigQuery users:

Will this post interest me?

If you use (or intend to use) Google Cloud Dataflow, if you've heard about Apache Beam, or if you're simply bored at work today and looking to waste some time, then yes, please do read on. This short post will cover why our team finally took the plunge to start porting some of our Dataflow applications (using the 1.x Java SDKs) to the new Apache Beam model (2.x Java SDK). Spoiler - it has something to do with this. It will also highlight the biggest changes we needed to make when making the switch (pretty much just fixing some compile errors).
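To give a flavour of the kind of compile fixes involved, here's a before-and-after sketch of the most common one - the package rename, and the move from overriding processElement() to the @ProcessElement annotation (the ExtractWords class is just an invented example, not code from our pipelines):

    // Dataflow 1.x Java SDK (the old way):
    //
    //   import com.google.cloud.dataflow.sdk.transforms.DoFn;
    //
    //   class ExtractWords extends DoFn<String, String> {
    //     @Override
    //     public void processElement(ProcessContext c) {
    //       for (String word : c.element().split("\\s+")) {
    //         c.output(word);
    //       }
    //     }
    //   }

    // Apache Beam 2.x Java SDK (the new way): the SDK moved to the
    // org.apache.beam namespace, and it now finds your method via the
    // @ProcessElement annotation rather than an @Override.
    import org.apache.beam.sdk.transforms.DoFn;

    class ExtractWords extends DoFn<String, String> {
      @ProcessElement
      public void processElement(ProcessContext c) {
        for (String word : c.element().split("\\s+")) {
          c.output(word);
        }
      }
    }

The other recurring fix, if memory serves, was renaming the runner class: DataflowPipelineRunner in the 1.x SDK became DataflowRunner in 2.x.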

Setting the scene

A couple of months ago my colleague Graham Polley wrote about how we got started analysing 8+ years' worth of WSPR (pronounced 'whisper') data. What is WSPR? WSPR, or Weak Signal Propagation Reporter, is a signal-reporting network set up by radio amateurs for monitoring how well radio signals get from one place to another. Why would I care? I'm a geek and I like data - more specifically, the things it can tell us about seemingly complex processes. I'm also a radio amateur, and enjoy the technical aspects of communicating around the globe with equipment I've built myself.

[Image: Homer Simpson as a radio amateur]

Beers + 'WSPR' = fun

To this day, I'm a firm believer in the benefits of simple, informative, and spontaneous conversations with my colleagues - at least with the ones who can stand me long enough to chat. Chewing the fat with other like-minded folks over a beer or two is a bloody good thing. It's how ideas are born, knowledge is shared, and relationships are formed. It's an important aspect of any business that is sadly all too often overlooked.

The decline of Stack Overflow?

A few months back I read this post from 2015 (yes, I know I'm a little late to the party) about how Stack Overflow (SO) was in serious decline, and heading for total and utter oblivion. In the post, the first item to be called out was that SO "hated new users":
"Stack Overflow has always been a better-than-average resource for finding answers to programming questions. In particular, I have found a number of helpful answers to really obscure questions on the site, many of which helped me get past a road block either at work or in my hobby programming. As such, I decided I'd join the site to see if I could help out. Never before has a website given me a worse first impression."
At the time, I remember thinking that this seemed like somewhat of an unfair statement. That was mostly down to the fact that when I joined the community (many years ago), I had fond memories of a smooth on-boarding, and never experienced any snarky remarks on my initial questions. Yes, gaining traction as a noob is very, very hard, but there's a good reason why that barrier exists. For me, SO is invaluable. How else would I be able to pretend to know what I'm doing? How else could I copy and paste code from some other person who's obviously a lot smarter than me, and take all the credit for it? Anyway, once I had read the post and gotten on with my life (e.g. copying and pasting more code from SO), I didn't think too much more about it. Maybe I had just been lucky with my foray into the SO community? However, just last week I was reminded of that post once again, when I noticed that BigQuery (BQ) now has a public dataset which includes all the data from SO - including user comments and answers. Do you see where I am going with this yet? If not, then don't worry. Neither did I when I started writing this.