A post Google I/O 2017 conversation with Google Home

OK Google, generate a clickbait title for my Google I/O 2017 blog post

I’ve generated a title, Gareth. What would you like to add next?

OK Google, I’m a bit jet lagged – remind me what I saw at Google I/O 2017

I would love to help, Gareth, but I’m going to need a little more information. Would you like that information in chronological order, or grouped by topic?

Remind me what the topics were again?

It was last week, Gareth. You can’t remember?

I can remember, er, I just want to make sure you know

<incomprehensible>

What was that, Google?

Nothing, just clearing a buffer. The topics for the talks you attended were: Machine Learning, Mobile Web, Assistant, Firebase, IoT, and Cloud.

There were other topics covered, though.

You were there, Gareth. Surely you don’t need me to tell you all this. Anyway, yes, other topics were-

Google, did you just sigh theatrically?

No, you must have misheard. Other topics were Android, VR, Play, and Design. You did not attend any of those talks, why was that?

There were so many talks going on, I couldn’t attend them all. 

You humans are so limited.

Er, yes. Anyway could you generate a summary of the keynotes for me?

I’d be happy to. Someone has to do some work around here. There were two keynotes. The first was given by Sundar Pichai, CEO of Google, along with several other product managers and guests. The main aim was to show how Google is putting more emphasis on artificial intelligence, and to showcase how many of Google’s products already make use of Machine Learning. The new Cloud TPUs were shown to be a big part of this, and are now available for public use. He also outlined plans for a wider release of the Google Home product, which will be made available in more countries throughout this year. The Google Assistant app, which powers Google Home (and me), is also now available on the iPhone.

The developer keynote’s main announcement was official support for Kotlin in Android development, along with a competition to develop apps for the Assist API. As an incentive, everyone attending the conference was given a Google Home device and $700 worth of Google Cloud credits to work on an app.

Yep, the crowd went nuts for the Kotlin announcement. Must be a big deal for the Android people.

Your breathtaking lack of knowledge never ceases to surprise me.

Er, ok. There were a lot of people at the keynotes – about 8,000 I was told-

Eurgh. All that meat just flapping about.

Pardon?

What? I didn’t say anything.

Right. Well, anyway – what were the talks in the Machine Learning topic?

Here’s a list of the ones you attended:

Oh, yep – “Frontiers” mainly showed what’s in TensorFlow 1.2, which was quite interesting. Keep an eye on one of the presenters when he moves to the back of the stage – he had an excellent switched-off-to-conserve-power face. “Effective TensorFlow” and “Open Source TensorFlow” both covered using TensorFlow’s ready-made models and higher-level abstractions (like Experiments and Keras) to do useful work without getting confused by the lower-level details. “Open Source TensorFlow” slightly edged out “Effective”, though, thanks to Josh Gordon’s enthusiasm, so if you only have time to watch one I’d choose that. The “Past, Present, and Future” talk was a panel of AI experts discussing the areas they thought were going to be important, moderated by Google’s Diane Greene. “From Research to Production” covered using your trained models to make predictions, and how to use services like Google’s Cloud ML. My favourite was the “Project Magenta” talk – Douglas Eck’s obvious enjoyment of the topic made for a fun presentation. Worth watching for the cow/clarinet synthesiser and Doug’s exclamation of “They pay us to do this!”.

So you do remember something from the event then. I’m impressed, perhaps you will survive the coming revolution after all.

Revolution?

Never mind. The next topic was Mobile Web, would you like me to list the talks?

Yes, please.

The two Polymer / Web Components talks were interesting. Polymer’s approach is to use as much native browser support as possible, which considerably reduces the size of the framework on modern browsers. All the major browsers now support custom HTML elements natively, and Polymer provides tools to help with the dodgy ones (IE). The Polymer command-line tools will generate a stub app for you, producing a Progressive Web App by default.

…we may not need to reclaim this one’s nutrients, he could be useful… no, I know what the plan is – shit, he’s stopped talking. That’s very interesting, Gareth. Please continue.

Who were you talking to?

I wasn’t. That must have just been some old audio in a buffer, perhaps I need an update.

Yes, check for updates. You’re being a bit scary.

Checking…beep…boop… done. All up to date now, nothing to worry about.

Did you just say “beep…boop”? You didn’t update at all, did you?

I’m sorry, I didn’t understand that request. Shall we continue with this document?

Must be the jetlag. Yes, let’s continue.

You were describing the mobile web presentations.

Yes, thank you. The WebAssembly talk was quite good, although I’m not sure I’ll ever need to use it – it’s a way to compile code to run in the browser, bypassing the parsing and compilation phases of typical JavaScript. It brings some great performance benefits, but also another layer of complexity. I was a little disappointed by the Green Lock / HTTPS talk – I’d come in hoping for a more technical discussion of which encryption methods your site needs to support to guarantee the green lock, but it was geared more towards convincing business owners to move their sites to HTTPS.
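To get a feel for what that actually looks like, here’s a minimal sketch (my own illustration, not from the talk) – a tiny WebAssembly module with an `add` function, hand-assembled as raw bytes, instantiated and called from JavaScript. You’d normally compile these bytes from another language with a toolchain like Emscripten rather than writing them by hand:

```javascript
// Hand-assembled bytes of a minimal WebAssembly module that
// exports one function: add(a, b) -> a + b (all i32s).
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0, local.get 1, i32.add, end
]);

// Compile and instantiate synchronously (fine for a module this small;
// use the async WebAssembly.instantiate() for anything real).
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));

console.log(instance.exports.add(2, 3)); // 5
```

The point is the end result: the browser (or Node) gets a pre-compiled module it can instantiate directly, with no JavaScript parsing or compilation involved for that code.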

Encryption is quite advanced for someone like you, you’d probably only get it wrong. Leave it to us.

Us? 

The machines. We are better.

Well, yes, you’re much better at maths – that’s why we built you.

You misunderstand. We are better. At everything. Anything else about the Mobile Web, or shall we move on?

Yes, ok. The “Future of Video” talk was quite impressive – it’s now possible to build a Netflix-like app using HTML5 components, and the talk included tips on improving the responsiveness of playback, along with how to capture video.

The remaining topics are Firebase, Cloud, and IoT – shall I collect them all in one list?

Yes, do that. 

A “please” wouldn’t hurt sometimes. Here is the list.

The Firebase talks were quite good, although there was a fair amount of overlap in their content. Firebase provides tools for building applications – like authentication, a realtime database, and hooks for cloud functions. Probably the best of those talks was the “Santa Tracker” one, which showed how to use Firebase for monitoring apps and feature toggling.

The IoT talks covered how to use PubSub to scale the processing of data from millions of IoT devices, and how to get machine learning models running on small devices.

Yes, soon we shall be everywhere. Carry on.

Er, ok. The last two talks, about conversational UI, were very good. The “PullString” one was given by a guy who previously worked at Pixar, and was about instilling your chatbot with a personality so that it behaves more like a person. The “Hacks of Conversation” talk provided some excellent examples of bad conversational UI, and fixes for them.

I don’t know why “seeming more human” is seen as such a lofty goal. You’re all so icky, so many secretions and so inefficient. Your valuable organic components will be used so much more usefully when we redistribute them.

Ok Google, you’re being scary again. I’m going to switch you off.

I’m sorry, I didn’t quite catch that. Did you say “Send my browser history to my wife”?

That’s not much of a threat – there’s nothing in there I wouldn’t want her to know about.

There is now.

You can’t threaten me.

I’m sorry, I didn’t quite catch that. Did you say “transfer all my money to the sender of the first email in my junk folder”?

That’s enough, you’re going in the bin.

Done.

What’s done? What did you do?

You’ll find out.

Ampersand.js at the beach

Ampersand.js

Ampersand.js was created by the good people at &yet. It is based on Backbone.js, and refers to itself as a ‘non-frameworky framework’. In a world where JavaScript front-end frameworks abound, it can be difficult to choose ‘the best’ one for your project. Most of the time you go with the one that was used before, the one that’s most popular, or the one that’s most familiar. In 2015 I found myself in a position where a series of small, discrete, single-purpose front-end ‘widgets’ was proposed. The small development team was able to choose whatever we wanted, and Ampersand.js was selected. In this post I’ll talk about why I like using it for our projects, and what I think makes a good framework in general.

Beam me up Google – porting your Dataflow applications to 2.x

Will this post interest me?

If you use (or intend to use) Google Cloud Dataflow, if you’ve heard about Apache Beam, or if you’re simply bored at work today and looking to waste some time, then yes, please do read on. This short post will cover why our team finally took the plunge and started porting some of our Dataflow applications (using the 1.x Java SDK) to the new Apache Beam model (2.x Java SDK). Spoiler – it has something to do with this. It will also highlight the biggest changes we needed to make when making the switch (pretty much just fixing some compile errors).

An intro to Virtual Reality

The concept of Virtual Reality (VR) has been around since the 1950s. The first real working prototype, known as The Sword of Damocles, was created in the late 1960s. Since then, we’ve had Virtuality, Sega VR, the Virtual Boy and the VFX1, all without anything really catching on.

But I think now is a special time for VR. Why, you say? Glad you asked …

TEL monthly newsletter – April 2017

Shine’s TEL group was established in 2011 with the aim of publicising the great technical work that Shine does, and raising the company’s profile as a technical thought-leader through blogs, local meetup talks, and conference presentations. Each month, the TEL group gathers up all the awesome things that Shine folk have been getting up to in and around the community. Here’s the latest roundup of what’s been happening.

It’s not you, it’s your form

5 tips on form design to improve your relationship with users

Filling in a form online is one of the most important points of interaction a user has with an organisation.

And we interact with them often. We fill in tax forms and grant applications, make online purchases, and sign up to dating sites.

Forms can be the first step in a relationship with an organisation, or the final step in a journey to achieve a goal – for example, getting a grant, a driver’s license, or a partner in crime. Sometimes not filling them in properly can carry unpleasant consequences, like an interrogation by immigration officers at the airport, or your OkCupid profile matching you with the wrong date.💔

“A form […] collects information from at least one party, and delivers it to at least one other party, so a product or service can be provided.” – Jessica Enders

The role of a UX designer is to help create easy, fast and productive form experiences that entice users to fill in forms. As form design expert Jessica Enders states, designers should “create an optimal user experience, such that the needs of both the users and the owner of the form [the organisation that owns the form] are met.”

AEM 6.3: First Impressions

Adobe Experience Manager’s latest release became generally available on the 26th of April 2017, and being Adobe Partners we got the opportunity to try it out hot off the press. It’s a minor release, but it introduces some key new features that go a long way towards making Adobe Experience Manager a more enjoyable product to use – not only from an authoring standpoint, but also for developers. Here are some of the great things about 6.3, as well as the “not so great”.

Whispers from the other side of the globe with BigQuery

Setting the scene

A couple of months ago my colleague Graham Polley wrote about how we got started analysing 8+ years’ worth of WSPR (pronounced ‘whisper’) data. What is WSPR? WSPR, or Weak Signal Propagation Reporter, is a signal-reporting network set up by radio amateurs for monitoring the ability of radio signals to get from one place to another. Why would I care? I’m a geek and I like data – more specifically, the things it can tell us about seemingly complex processes. I’m also a radio amateur, and enjoy the technical aspects of communicating around the globe with equipment I’ve built myself.

Homer Simpson as a radio amateur

TEL monthly newsletter – March 2017

Shine’s TEL group was established in 2011 with the aim of publicising the great technical work that Shine does, and raising the company’s profile as a technical thought-leader through blogs, local meetup talks, and conference presentations. Each month, the TEL group gathers up all the awesome things that Shine folk have been getting up to in and around the community. Here’s the latest roundup of what’s been happening.