Google Cloud

Intro

Recommendation systems are found under the hood of many popular services and websites. The e-commerce and retail industries use them to increase sales, music services use them to surface interesting songs for their listeners, and news sites use them to rank daily articles based on their readers' interests. If you really think about it, recommendation systems could be used in pretty much every area of daily life. For example, why not automatically recommend better choices to house investors, guide your friends around your hometown without you being there, or suggest which company to apply to if you are looking for a job?

All pretty cool stuff, right?

But recommendation systems need to be a lot smarter than plain old vanilla software. In fact, the engine is made up of multiple machine learning modules that aim to rank items of interest for each user based on the user's preferences and the items' properties.
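To make that a little more concrete, here is a deliberately naive sketch - purely illustrative, not our engine, and with made-up feature vectors - of the core ranking idea: score each item against a user's preferences and sort by that score.

import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Purely illustrative: a real engine combines several ML modules, but at its core
// it scores each item against a user's preferences and ranks by that score.
public class NaiveRanker {

    // Hypothetical encoding: user tastes and item properties are feature vectors
    // of the same length (e.g. per-genre affinities).
    static double score(double[] userPreferences, double[] itemProperties) {
        double dot = 0.0;
        for (int i = 0; i < userPreferences.length; i++) {
            dot += userPreferences[i] * itemProperties[i];
        }
        return dot;
    }

    // Return the items ordered from most to least relevant for this user.
    static List<double[]> rank(double[] userPreferences, List<double[]> items) {
        return items.stream()
                .sorted(Comparator.comparingDouble(
                        (double[] item) -> score(userPreferences, item)).reversed())
                .collect(Collectors.toList());
    }
}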

In this blog series, you will gain some insight into how recommendation systems work, how you can harness Google Cloud Platform to build scalable systems, and the architecture we used when implementing our music recommendation engine on the cloud. This first post will be a light introduction to the overall system, and my follow-up articles will then dive deeper into each of the machine learning modules and the tech that powers them.

[Image: Taumata_Racer.jpg - Waterslide analogy. One input, multiple outputs. Each slide represents a date partition in one table.]

Do you have some data that needs to be fed into BigQuery, but the output must be split between multiple destination tables? Using a Cloud Dataflow pipeline, you could define some side outputs for each destination table you need, but what happens when you want to write to date partitions in a table and you're not sure what partitions you need to write to in advance? It gets a little messy. That was the problem I encountered, but we have a solution.
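As a taste of where this is heading, one way to tackle it in the Apache Beam 2.x Java SDK is to compute the destination table per element, using BigQuery's "table$YYYYMMDD" partition decorator. This is only a sketch - the project, dataset, table and the event_date field are placeholders:

import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;
import org.apache.beam.sdk.transforms.SerializableFunction;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.ValueInSingleWindow;

public class PartitionedWrite {

    // Route each row to the date partition it belongs to. Assumes the rows carry
    // an "event_date" field formatted as YYYYMMDD (a placeholder for this sketch).
    static void writeToDatePartitions(PCollection<TableRow> rows) {
        rows.apply("WriteToPartitions", BigQueryIO.writeTableRows()
            .to((SerializableFunction<ValueInSingleWindow<TableRow>, TableDestination>) input -> {
                String day = (String) input.getValue().get("event_date");
                // "table$YYYYMMDD" is BigQuery's partition decorator syntax.
                return new TableDestination(
                    "my-project:my_dataset.events$" + day,
                    "events for " + day);
            })
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER));
    }
}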
Do you recoil in horror at the thought of running yet another mundane SQL script just so a table is automatically rebuilt for you each day in BigQuery? Can you barely remember your name first thing in the morning, let alone remember to click "Run Query" so that your boss gets the latest data refreshed in his fancy Data Studio charts, and then takes all the credit for your hard work? Well, fear not, my fellow BigQuery'ians. There's a solution to this madness. It's simple. It's quick. Yes, it's Google Apps Script to the rescue. Disclaimer: all credit for this goes to the one and only Felipe Hoffa. He 'da man!
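The scheduling trick itself is Apps Script, but the job it fires off is nothing exotic: a query with a destination table and WRITE_TRUNCATE so the table is rebuilt in place on every run. For reference, the equivalent job expressed with the BigQuery Java client looks roughly like this (the SQL and table names are placeholders):

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableId;

public class DailyRebuild {
    public static void main(String[] args) throws InterruptedException {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        // Placeholder SQL and table names - substitute whatever your report needs.
        QueryJobConfiguration config = QueryJobConfiguration
            .newBuilder("SELECT page, COUNT(*) AS hits FROM `my_dataset.raw_events` GROUP BY page")
            .setDestinationTable(TableId.of("my_dataset", "daily_report"))
            // Rebuild the destination table from scratch on every run.
            .setWriteDisposition(JobInfo.WriteDisposition.WRITE_TRUNCATE)
            .build();

        Job job = bigquery.create(JobInfo.of(config));
        job.waitFor();
    }
}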
I have been using BigQuery for over two years now at Shine. I've found it to be a great tool that is both incredibly fast and able to handle some of our largest workloads. We are processing terabytes of data per day, and each day an extra billion records are added to the store. But unfortunately, this growth has also increased the cost of running our queries. While BigQuery is extremely fast and parallel, it comes at the cost of needing to scan (and pay for) every record of the columns you are querying. Without the indexes offered by conventional databases, a full table scan is needed for each query. Not only that, but as the amount of data you query grows, your queries slow down. In this post I'll talk about how we used table partitions to increase the performance of our queries and avoid those slowdowns.
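To give a flavour of what that looks like in practice, here is a minimal sketch (the dataset, table and dates are placeholders) of querying a day-partitioned table through the BigQuery Java client. Filtering on the _PARTITIONTIME pseudo-column means only the matching daily partitions are scanned - and billed for - rather than the whole table:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

public class PartitionedQuery {
    public static void main(String[] args) throws InterruptedException {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        // Restricting on _PARTITIONTIME prunes the scan to two daily partitions
        // instead of the full table. Table and dates are placeholders.
        String sql =
            "SELECT page, COUNT(*) AS hits "
          + "FROM `my_dataset.events` "
          + "WHERE _PARTITIONTIME BETWEEN TIMESTAMP('2017-06-01') AND TIMESTAMP('2017-06-02') "
          + "GROUP BY page";

        TableResult result = bigquery.query(QueryJobConfiguration.newBuilder(sql).build());
        result.iterateAll().forEach(row ->
            System.out.println(row.get("page").getStringValue() + ": " + row.get("hits").getLongValue()));
    }
}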

"What the Fudge?"

I use Google BigQuery a lot. On a daily basis I run dozens of queries, use it to build massively scalable data pipelines for our clients, and regularly help new users navigate it for the first time. Suffice it to say, I'm somewhat accustomed to its little quirks. Unfortunately, the same can't be said for new users, who are commonly left scratching their heads and shouting "What the fudge!?" at their monitors. Here are the top three WTFs that I regularly hear from new BigQuery users:
"OK Google, generate a clickbait title for my Google I/O 2017 blog post."
"I've generated a title, Gareth. What would you like to add next?"
"OK Google, I'm a bit jet lagged - remind me what I saw at Google I/O 2017."
"I would love to help, Gareth, but I'm going to need a little more information. Would you like that information in chronological order, or grouped by topic?"

Will this post interest me?

If you use (or intend to use) Google Cloud Dataflow, have heard about Apache Beam, or are simply bored at work today and looking to waste some time, then yes, please do read on. This short post will cover why our team finally took the plunge and started porting some of our Dataflow applications (using the 1.x Java SDKs) to the new Apache Beam model (2.x Java SDK). Spoiler - it has something to do with this. It will also highlight the biggest changes we needed to make when making the switch (pretty much just fixing some compile errors).
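To give an idea of the scale of those changes, here is a hedged before/after sketch - the exact renames depend on which SDK versions you're moving between, but the bulk of it is package names and the runner class:

// Dataflow 1.x Java SDK (before):
//   import com.google.cloud.dataflow.sdk.Pipeline;
//   import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
//   import com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner;

// Apache Beam 2.x Java SDK (after) - same concepts, new packages and runner name:
import org.apache.beam.runners.dataflow.DataflowRunner;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class BeamPort {
    public static void main(String[] args) {
        DataflowPipelineOptions options =
            PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);
        options.setRunner(DataflowRunner.class); // was DataflowPipelineRunner.class in 1.x
        Pipeline p = Pipeline.create(options);
        // ... transforms go here, largely untouched by the migration ...
        p.run();
    }
}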

Setting the scene

A couple of months ago my colleague Graham Polley wrote about how we got started analysing 8+ years' worth of WSPR (pronounced 'whisper') data. What is WSPR? WSPR, or Weak Signal Propagation Reporter, is a signal reporting network set up by radio amateurs to monitor how well radio signals get from one place to another. Why would I care? I'm a geek and I like data - more specifically, the things it can tell us about seemingly complex processes. I'm also a radio amateur, and enjoy the technical aspects of communicating around the globe with equipment I've built myself.

[Image: Homer Simpson at a radio transceiver - Homer Simpson as a radio amateur.]
Do you have an unreasonable fear of cronjobs? Find spinning up VMs to be a colossal waste of your towering intellect? Does the thought of checking a folder regularly for updates fill you with an apoplectic rage? If so, you should probably get some help. Maybe find another line of work. In the meantime, here's one way to ease your regular file processing anxieties. With just one application of Google Cloud Functions, eased gently up your Dataflow Pipeline, you can find lasting relief from troublesome cronjobs.
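The pattern in a nutshell: a Cloud Function fires when a file lands in a GCS bucket and kicks off a Dataflow job to process it. The function itself (Node.js at the time of writing) is omitted here, but as a rough sketch, the Dataflow side can be a templated pipeline that takes the newly arrived file as a runtime parameter - the option name and bucket paths below are purely illustrative:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.ValueProvider;

public class FileDrivenPipeline {

    // Runtime parameter the Cloud Function supplies when it launches the template,
    // e.g. the gs:// path of the file that just arrived. The name is illustrative.
    public interface Options extends PipelineOptions {
        @Description("GCS path of the newly arrived file")
        ValueProvider<String> getInputFile();
        void setInputFile(ValueProvider<String> value);
    }

    public static void main(String[] args) {
        Options options = PipelineOptionsFactory.fromArgs(args).as(Options.class);
        Pipeline p = Pipeline.create(options);

        p.apply("ReadNewFile", TextIO.read().from(options.getInputFile()))
         // ... parse, transform and enrich the records here ...
         .apply("WriteResults", TextIO.write().to("gs://my-bucket/processed/part"));

        p.run();
    }
}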