Shine’s TEL group was established in 2011, initially as a three-piece old-school-bebop Jazz combo but expanded to include a horn section during our worldwide tour of Iceland. We publicise the great technical work that Shine does, and raise the company’s profile as a technical thought-leader in the community through blogs, local meetup talks, conference presentations, and driving around shouting out of car windows. We curate all the noteworthy things that Shiners have been up to and publish a newsletter that nobody reads.
When we started using Google BigQuery – almost five years ago now – it didn’t have any partitioning functionality built into it. Heck, queries cost $20 per TB back then too, for goodness’ sake! To compensate for this missing functionality and to save costs, we had to manually shard our tables using the well-known _YYYYMMDD suffix pattern, just like everyone else. This works fine, but it’s quite cumbersome, has some hard limits, and your SQL can quickly become unruly.
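For illustration, here’s a rough sketch of what that sharding pattern looks like in practice – generating the _YYYYMMDD table names, and querying a range of daily shards with a table wildcard and the _TABLE_SUFFIX pseudo-column. The dataset and table names below are made up for the example:

```python
from datetime import date, timedelta


def shard_name(base: str, day: date) -> str:
    """Return a date-sharded table name, e.g. events_20180125."""
    return f"{base}_{day.strftime('%Y%m%d')}"


def query_last_n_days(dataset: str, base: str, end: date, n: int) -> str:
    """Build a wildcard query covering the last n daily shards,
    filtered via the _TABLE_SUFFIX pseudo-column."""
    start = end - timedelta(days=n - 1)
    return (
        f"SELECT COUNT(*) AS n_rows\n"
        f"FROM `{dataset}.{base}_*`\n"
        f"WHERE _TABLE_SUFFIX BETWEEN "
        f"'{start.strftime('%Y%m%d')}' AND '{end.strftime('%Y%m%d')}'"
    )


print(shard_name("events", date(2018, 1, 25)))  # events_20180125
print(query_last_n_days("mydataset", "events", date(2018, 1, 25), 7))
```

You can see the cumbersomeness straight away: every query over a date range has to reason about table *names* rather than just a WHERE clause.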
Then, about a year ago, the BigQuery team released ingestion-time partitioning. This allowed users to partition tables based on the load/arrival time of the data, or to explicitly state the partition to load the data into (using the $ decorator syntax). Using the _PARTITIONTIME pseudo-column, users could more easily craft their SQL, and save costs by only addressing the necessary partition(s). It was a major milestone for the BigQuery engineering team, and we were quick to adopt it into our data pipelines. We rejoiced and gave each other a lot of high-fives.
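Here’s a minimal sketch of both halves of that pattern – the $ decorator for targeting a specific ingestion-time partition when loading, and a _PARTITIONTIME filter so a query only scans one partition. Again, the dataset and table names are placeholders:

```python
from datetime import date


def partition_decorator(table: str, day: date) -> str:
    """Target a specific ingestion-time partition for a load job,
    e.g. events$20180125."""
    return f"{table}${day.strftime('%Y%m%d')}"


def query_single_partition(dataset: str, table: str, day: date) -> str:
    """Build a query that scans (and bills for) only one partition,
    via the _PARTITIONTIME pseudo-column."""
    return (
        f"SELECT *\n"
        f"FROM `{dataset}.{table}`\n"
        f"WHERE _PARTITIONTIME = TIMESTAMP('{day.isoformat()}')"
    )


print(partition_decorator("events", date(2018, 1, 25)))  # events$20180125
print(query_single_partition("mydataset", "events", date(2018, 1, 25)))
```

Compared to the sharded approach, it’s one logical table and one WHERE clause – much easier on both the eyes and the wallet.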
As a co-organizer for GDG Cloud Melbourne, I was recently invited to the Google Cloud Developer Community conference in Sunnyvale, California. It covered meetup organization strategies and product roadmaps, and was also a great opportunity to network with fellow organizers and Google Developer Experts (GDEs) from around the world. Attending were 68 community organizers, 50 GDEs and 9 open source contributors from a total of 37 countries.
I would have to say it was the most social conference I have ever attended, with plenty of opportunities to meet people from a wide range of backgrounds. I also picked up many valuable insights into how to better run our meetup and make better use of Google products. In this post I’ll talk about what we got up to over the two days.
January 25, 2018
Shine Solutions has built one of the first Amazon Alexa “skills” in the Australian market for EnergyAustralia
EnergyAustralia is among the first Australian-based organisations to feature on the highly-anticipated smart speaker due to arrive in Australia next month.
The skill has been designed to give customers easy access to their EnergyAustralia accounts and help them better manage their energy usage.
The cloud-powered service can perform a range of tasks in response to voice commands. Users will be able to ask the ever-efficient digital assistant such questions as:
“Alexa, ask EnergyAustralia how much is my latest bill?”
“Alexa, ask EnergyAustralia, when is my account due?”
“Alexa now provides EnergyAustralia’s customers with another way to easily engage with us” says Tony Robertshaw, EnergyAustralia’s Head of Digital and Incubation. “Our skill on Alexa will provide a more closely integrated customer experience and we are thrilled with the result. Shine has been instrumental in launching our first skill.
The Shine team delivered the outcomes we were seeking in an incredibly short timeframe and worked seamlessly with all the stakeholders involved. EnergyAustralia has worked closely with Shine for over 15 years now – they are a valued digital partner.”
Chatbot technology is certainly in its growth phase in Australia, and the expected growth trajectory is significant, says Shine Director Luke Alexander. “Voice-driven systems are becoming integral to our everyday lives. The potential for companies to forge closer relationships with customers through this technology is exciting – we are proud to partner with EnergyAustralia to help launch the organisation in this space.”
About Shine Solutions:
Shine has been at the forefront of developing enterprise software for 20 years. We are committed to working in partnership with our clients to devise and deliver digital solutions for their business needs. Since launching in 1998, Shine Solutions has forged long-term partnerships with some of Australia’s leading organisations including EnergyAustralia, Telstra, National Australia Bank and Coles.
Shine has offices in Melbourne and Sydney.
Luke Alexander, Director
Shine’s good friend Felipe Hoffa from Google was in Melbourne recently, and he took the time to catch up with our resident Google Developer Expert, Graham Polley. But, instead of just sitting down over a boring old coffee, they decided to take an iconic tram ride around the city. To make it even more interesting, they tested out some awesome Google Cloud technologies by using their phones to spin up a Cloud Dataflow cluster of 50 VMs, and process over 10 billion records of data in under 10 minutes! Check out the video they recorded:
Post update: My good friend Lak over at Google has come up with a fifth option! He suggests using Cloud Dataprep to achieve the same result. You can read his blog post about that over here. I had thought about using Dataprep, but because it actually spins up a Dataflow job under the hood, I decided to omit it from my list. That’s because it will take a lot longer to run (the cluster needs to spin up, and it issues export and import commands to BigQuery) than issuing a query job directly to the BigQuery API. Also, there are extra costs involved with this approach (the query itself, the Dataflow job, and a Dataprep surcharge – ouch!). But, as Lak pointed out, this would be a good solution if you want to transform your data instead of issuing a pure SQL request. However, I’d argue that can be done directly in SQL too 😉
Not so long ago, I wrote a blog post about how you can use Google Apps Script to schedule BigQuery jobs. You can find that post right here. Go have a read of it now. I promise you’ll enjoy it. The post got quite a bit of attention, and I was genuinely surprised that people take the time to read my drivel.
It’s clear that BigQuery’s popularity is growing fast. I’m seeing more content popping up in my feeds than ever before (mostly from me, because that’s all I really blog about). However, as awesome as BigQuery is, one glaring gap in its arsenal of weapons is the lack of a built-in job scheduler, or an easy way to schedule jobs outside of BigQuery.
That said, I’m pretty sure the boffins over in Googley-woogley-world are currently working on remedying that – by either adding schedulers to Cloud Functions, or by baking something directly into the BigQuery API itself. Or maybe both? Who knows!