TEL monthly newsletter – Feb 2017

The TEL group was established in 2011 with the aim of publicising the great technical work that Shine does, and raising the company’s profile as a technical thought-leader through blogs, local meetup talks, and conference presentations. Each month, the TEL group gathers up all the awesome things that Shine folk have been getting up to in and around the community. Here’s the latest roundup:

Gobbling up big-ish data for lunch using BigQuery

Beers + ‘WSPR’ = fun

To this day, I’m a firm believer in the benefits of simple, informative, and spontaneous conversations with my colleagues – at least with the ones who can stand me long enough to chat. Chewing the fat with other like-minded folks over a beer or two is a bloody good thing. It’s how ideas are born, knowledge is shared, and relationships are formed. It’s an important aspect of any business that is sadly all too often overlooked.

TEL highlights for 2016

Shine’s Technical Excellence Leadership Group (TEL) has had a stellar year! In this post we’ve pulled together our top picks from 2016 that we think deserve a special shout out before the year comes to a close. But first, a quick recap on what the TEL group actually is.

TEL was established in 2011 with the aim of publicising the great technical work that Shine does, and raising the company’s profile as a technical thought-leader through blogs, local meetup talks, and conference presentations. TEL is allocated a yearly budget by the super-duper generous Shine directors, and the members of the TEL group are put in charge of overseeing how it is spent.

The budget comprises two parts: money and time. The monetary portion goes to prizes and bonuses for producing material. The time portion is for staff to draw upon to step away from their day-to-day work commitments and produce that material. So, now that you know what TEL is all about, let’s have a look at the highlight reel from 2016, shall we?

Analysing Stack Overflow comment sentiment using Google Cloud Platform

The decline of Stack Overflow?

A few months back I read this post from 2015 (yes, I know I’m a little late to the party) about how Stack Overflow (SO) was in serious decline, and heading for total and utter oblivion. In the post, the first item to be called out was that SO “hated new users”:

Stack Overflow has always been a better-than-average resource for finding answers to programming questions. In particular, I have found a number of helpful answers to really obscure questions on the site, many of which helped me get past a road block either at work or in my hobby programming. As such, I decided I’d join the site to see if I could help out. Never before has a website given me a worse first impression.

At the time, I remember thinking that this seemed like somewhat of an unfair statement. That was mostly because I had fond memories of a smooth on-boarding when I joined the community (many years ago), and I never experienced any snarky remarks on my initial questions. Yes, gaining traction as a noob is very, very hard, but there is a good reason why that barrier exists.

For me, SO is invaluable. How else would I be able to pretend to know what I’m doing? How else could I copy and paste code from some other person who’s obviously a lot smarter than me, and take all the credit for it? Anyway, once I had read the post, and gotten on with my life (e.g. copying and pasting more code from SO), I didn’t think too much more about it. Maybe I had just been lucky with my foray into the SO community?

However, just last week, I was reminded of that post once again, when I noticed that BigQuery (BQ) now has a public dataset which includes all the data from SO – including user comments and answers. Do you see where I am going with this yet? If not, then don’t worry. Neither did I when I started writing this.
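
Just to make that concrete, here’s a rough sketch of the kind of thing the public dataset makes possible: pull a handful of comments out of BigQuery with the Python client and score them with the Cloud Natural Language API. The table and field names are what I’d expect to find in the public dataset, and the snippet is purely illustrative – it isn’t the full analysis from this post.

    # Sketch only: query the Stack Overflow public dataset and score a few
    # comments with the Natural Language API. Assumes default credentials and
    # that the dataset/table names below are correct - check them in BigQuery.
    from google.cloud import bigquery, language_v1

    bq = bigquery.Client()
    nl = language_v1.LanguageServiceClient()

    query = """
        SELECT text
        FROM `bigquery-public-data.stackoverflow.comments`
        WHERE text IS NOT NULL
        LIMIT 10
    """

    for row in bq.query(query).result():
        document = language_v1.Document(
            content=row.text, type_=language_v1.Document.Type.PLAIN_TEXT
        )
        sentiment = nl.analyze_sentiment(request={"document": document}).document_sentiment
        print(f"{sentiment.score:+.2f}  {row.text[:60]}")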

Shiner to present at very first YOW!Data conference

Shine’s very own Pablo Caif will be rocking the stage at the very first YOW! Data conference in Sydney. The conference will run over two days (22-23 Sep) and is focused on big data, analytics, and machine learning. Pablo will give his presentation on Google BigQuery, along with a killer demo of it in action. You can find more details of his talk here.

Google BigQuery hits the gym and beefs up!

At Shine we’re big fans of Google BigQuery, which is their flagship big data processing SaaS. Load in your data of any size, write some SQL, and smash through datasets in mere seconds. We love it. It’s the one true zero-ops model that we’re aware of for grinding through big data without the headache of worrying about any infrastructure. It also scales to petabytes. We’ve only got terabytes, but you’ve got to start somewhere, right?

If you haven’t yet been introduced to the wonderful world of BigQuery, then I suggest you take some time right after reading this post to go and check it out. Your first 1TB is free anyway. Bargain!
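
To give you a feel for just how little ceremony is involved, here’s a minimal sketch using the Python client against one of Google’s public datasets. The query and dataset are only examples, and the client bills whatever project your default credentials point at.

    # Sketch only: run a standard SQL query against a public dataset and print
    # the results. No clusters, no servers, nothing to provision.
    from google.cloud import bigquery

    client = bigquery.Client()

    query = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        GROUP BY name
        ORDER BY total DESC
        LIMIT 5
    """

    for row in client.query(query).result():
        print(row.name, row.total)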

Anyway, back to the point of this post. There have been a lot of updates to BigQuery in recent months, both internally and in the form of new features, and I wanted to capture them all in a concise blog post. I won’t go into great detail on each of them, but rather give a quick summary of each, which will hopefully give readers a good overview of what’s been happening with the big Q lately. I’ve pulled together a lot of this stuff from various Google blog posts, videos, announcements at GCP Next 2016, and so on.

Creating a serverless ETL nirvana using Google BigQuery

Quite a while back, Google released two new features in BigQuery. One was federated sources, which allow you to query external data, such as files in Google Cloud Storage (GCS), directly using SQL. The other was user-defined functions (UDFs). Essentially, a UDF allows you to ram JavaScript right into your SQL to help you perform the map phase of your query. Sweet!

In this blog post, I’ll go step-by-step through how I combined BigQuery’s federated sources and UDFs to create a scalable, totally serverless, and cost-effective ETL pipeline in BigQuery.
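
To give a flavour of what that combination looks like, here’s a hedged sketch using the Python client. The bucket, file layout, schema, and the little JavaScript UDF are all hypothetical stand-ins, not the actual pipeline from the post.

    # Sketch only: define a federated (external) source over CSV files in GCS,
    # then query it with a temporary JavaScript UDF doing the "map" step.
    from google.cloud import bigquery

    client = bigquery.Client()

    # Federated source: CSV files sitting in GCS, queryable without loading them.
    external_config = bigquery.ExternalConfig("CSV")
    external_config.source_uris = ["gs://my-etl-bucket/raw/events-*.csv"]  # hypothetical path
    external_config.schema = [
        bigquery.SchemaField("user_id", "STRING"),
        bigquery.SchemaField("payload", "STRING"),
    ]
    external_config.options.skip_leading_rows = 1

    job_config = bigquery.QueryJobConfig(
        table_definitions={"raw_events": external_config},
    )

    # UDF: JavaScript embedded in the SQL to transform each row on the way through.
    sql = '''
    CREATE TEMP FUNCTION extractAction(payload STRING)
    RETURNS STRING
    LANGUAGE js AS """
      return JSON.parse(payload).action;
    """;

    SELECT user_id, extractAction(payload) AS action
    FROM raw_events
    '''

    for row in client.query(sql, job_config=job_config).result():
        print(row.user_id, row.action)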

Pablo rocking the stage at Google’s annual cloud event!

Last week, Shine’s very own Pablo Caif gave a presentation at GCP Next 2016 in San Francisco, which is Google’s largest annual cloud platform event. Pablo delivered an outstanding talk on the work Shine have done for Telstra, which involves building solutions on the GCP stack to manage and analyse their massive datasets. More specifically, the talk focused on two of Google’s core big data products – BigQuery and Cloud Dataflow.

Messages in the sky

One of the projects that I’m currently working on is developing a solution whereby millions of rows per hour are streamed in real time into Google BigQuery. This data is then available for immediate analysis by the business. The business likes this. It’s an extremely interesting, yet challenging, project. And we are always looking for ways of improving our streaming infrastructure.

As I explained in a previous blog post, the data/rows that we stream to BigQuery are ad-impressions, which are generated by an ad-server (Google DFP). This was a great accomplishment in its own right, especially after optimising our architecture and adding Redis into the mix. Using Redis added robustness and stability to our infrastructure. But – there is always a but – we still need to denormalise the data before analysing it.

In this blog post I’ll talk about how you can use Google Cloud Pub/Sub to denormalise your data in real time before performing analysis on it.
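
As a taster, here’s a stripped-down sketch of the pattern: messages arrive on a Pub/Sub subscription, get denormalised against a small lookup table, and are streamed straight into BigQuery. The topic, subscription, table, and field names are all hypothetical, and the real pipeline is considerably bigger than this.

    # Sketch only: consume impressions from Pub/Sub, denormalise them, and
    # stream the result into BigQuery. All names below are made up.
    import json

    from google.cloud import bigquery, pubsub_v1

    bq = bigquery.Client()
    table_id = "my-project.ads.impressions_denormalised"  # hypothetical table

    # In the real system this lookup data lives somewhere like Redis; a dict is
    # enough to show the denormalisation step.
    campaign_names = {"123": "summer-sale", "456": "brand-awareness"}

    def handle_message(message):
        impression = json.loads(message.data)
        impression["campaign_name"] = campaign_names.get(impression["campaign_id"], "unknown")
        errors = bq.insert_rows_json(table_id, [impression])  # streaming insert
        if not errors:
            message.ack()

    subscriber = pubsub_v1.SubscriberClient()
    subscription = subscriber.subscription_path("my-project", "impressions-sub")
    streaming_pull = subscriber.subscribe(subscription, callback=handle_message)
    streaming_pull.result()  # block here and keep consuming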

Google Cloud Dataproc and the 17 minute train challenge

My work commute

My commute to and from work on the train is on average 17 minutes. It’s the usual uneventful affair, where the majority of people pass the time by surfing their mobile devices, catching a few Zs, or by reading a book. I’m one of those people who like to check in with family & friends on my phone, and see what they have been up to back home in Europe, while I’ve been snug as a bug in my bed.

Stay with me here folks.

But aside from getting up to speed with the latest events from back home, I also like to catch up on the latest tech news, and in particular what’s been happening in the rapidly evolving cloud area. And this week, one news item in my AppyGeek feed immediately jumped off the screen at me. Google have launched yet another game-changing product into their cloud platform big data suite.

It’s called Cloud Dataproc.
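
For the uninitiated, Dataproc is Google’s managed Spark and Hadoop service, and its headline trick is that clusters spin up in around 90 seconds (or so Google claims). As a rough illustration, here’s a hedged sketch of creating a small cluster with the Python client; the project, region, and cluster name are made up, and the client library shown here post-dates the launch I’m describing.

    # Sketch only: create a small Dataproc cluster programmatically. Names are
    # hypothetical; uses the current google-cloud-dataproc client library.
    from google.cloud import dataproc_v1

    project_id = "my-project"
    region = "australia-southeast1"

    cluster_client = dataproc_v1.ClusterControllerClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
    )

    cluster = {
        "project_id": project_id,
        "cluster_name": "train-challenge",
        "config": {
            "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
            "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
        },
    }

    operation = cluster_client.create_cluster(
        request={"project_id": project_id, "region": region, "cluster": cluster}
    )
    result = operation.result()  # blocks until the cluster is ready
    print(f"Cluster created: {result.cluster_name}")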