machine learning

Wondering what DDD stands for? Well, DDD stands for Developers Developers Developers! (presumably taken from Steve Ballmer's famous on-stage chant). It is an inclusive, non-profit conference for the software community. This year, DDD Melbourne was held on 15th September 2018 at Town Hall in Melbourne CBD. It was a one-day conference which started at 9:00am and concluded at 5:15pm. Personally, I thought the conference was very well organised and, at $79, it was affordable; being held on a Saturday meant I didn't have to take a day off work either. There were several talks to choose from, depending on what you fancy. The agenda, which was finalised after attendees voted on the talks, can be found here.
Weather forecasting is a complicated process. If you live in an area where the weather oscillates a lot, like us in Melbourne, you should always allow for some chance that the weather will differ from what you see on the websites. The weather is typically forecast by first gathering a lot of information about the atmosphere, humidity, wind, etc., and then relying on our atmospheric knowledge and a physical model to predict changes in the near future. But due to our limited understanding of the physical model and the chaotic nature of the atmosphere, that approach can be unreliable. Instead of this common approach, here we scrutinise the idea of entrusting the job to a machine learning model. We expect the model to look at the historical data and get a feel for how the temperature will change in the near future, let's say tomorrow.
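To make that idea concrete, here is a minimal sketch (not from the post itself) that predicts tomorrow's temperature from the previous week of observations using a plain linear regression. The file name melbourne_temps.csv and its temp column are hypothetical stand-ins for whatever historical data you have.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("melbourne_temps.csv")  # hypothetical file: one row per day
temps = df["temp"].values

window = 7  # use the previous 7 days as features
X = [temps[i:i + window] for i in range(len(temps) - window)]
y = temps[window:]  # the temperature on the day that follows each window

model = LinearRegression().fit(X, y)
tomorrow = model.predict([temps[-window:]])[0]
print(f"Predicted temperature for tomorrow: {tomorrow:.1f}")
```

A real forecasting model would obviously use far richer features (humidity, wind, pressure), but the shape of the problem is the same: learn a mapping from recent history to the next value.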
Due to me being kind of a big deal around here, I was sent to Google Next 18 last week. It's a two-and-a-half-day conference in San Francisco, all about Google Cloud. I made some exciting discoveries, which I will share with you, and also went to some talks or something.
In this blog series so far, I have presented the concepts behind a music recommendation engine, a music recommendation model for TensorFlow, and a GCP architecture to make it accessible via the web. The end result has been an ML model wrapped in a stand-alone service that gives you predictions on demand. Before diving further into implementing more complicated ML models, I thought it would be worth looking into how we could deploy our TensorFlow model on AWS. After some investigation, I've concluded that the best way is to use Lambda functions. In this post, I'll explain why that's the case, how you can do it, and an interesting pain point you have to keep in mind. Let's break new ground!
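As a rough sketch of what that looks like (only a sketch: the model path, the input name and the output key below are all assumptions, not details taken from this series), a Lambda handler serving a TF 1.x SavedModel might be shaped like this:

```python
import json
import tensorflow as tf

# Load the SavedModel once, outside the handler, so warm invocations reuse it.
predictor = tf.contrib.predictor.from_saved_model("./model")  # hypothetical path

def handler(event, context):
    body = json.loads(event["body"])
    # "user_history" and "scores" are placeholder names; use whatever signature
    # your exported SavedModel actually defines.
    outputs = predictor({"user_history": [body["user_history"]]})
    return {
        "statusCode": 200,
        "body": json.dumps({"recommendations": outputs["scores"].tolist()}),
    }
```

Loading the model at module level rather than inside the handler matters in Lambda: cold starts pay the loading cost once, and subsequent warm invocations respond much faster.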

Warning: This post contains pictures of spiders (and Spiderman)!

Google’s new Cloud AutoML Vision is a machine learning service from Google Cloud that aims to make state-of-the-art machine learning techniques accessible to non-experts. In this post I will show you how, in just a few hours, I was able to create a custom image classifier that can distinguish between different types of poisonous Australian spiders. I didn’t have any data when I started, and it required only a very basic understanding of machine learning concepts. I could probably show my Mum how to do it!
In parts 1 and 2 of this blog series, we've seen the intuition behind various recommender models and how to implement an item-similarity model in TensorFlow. It's now time to take a high-level view of a recommendation project on the Google Cloud Platform. This will encompass all of the plumbing for the web service, so that it can be up and available on the web. I will outline two possible architectures: one where we deploy and manage TensorFlow ourselves using the Google Kubernetes Engine (GKE), and another using the fully managed Cloud Machine Learning Engine (MLE). You'll also learn how to communicate with the ML Engine modules, and how to configure your computational clusters.
In part 1, we learnt about recommendation engines in general, and looked at ways to implement a service using the Google Cloud Platform (GCP). In part 2 of the blog series, we get our hands dirty with the item-similarity model and its TensorFlow implementation. This is the first technical blog of the series. Here, I dive deep into the data processing step and the recommendation service, and give some hints on how to optimise the code for real-time responses. By the end of this blog, you should know how to build a simple item-similarity recommender engine. So let's get the party started!
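To give a flavour of what's coming, here is the core of item similarity sketched in NumPy rather than the TensorFlow code the post itself walks through; the interaction matrix below is made-up toy data.

```python
import numpy as np

# Rows = items (e.g. songs), columns = users; entries could be play counts.
item_user = np.array([
    [5, 0, 3, 1],
    [4, 0, 4, 1],
    [0, 5, 0, 4],
], dtype=float)

# Cosine similarity between every pair of items.
norms = np.linalg.norm(item_user, axis=1, keepdims=True)
normalised = item_user / np.clip(norms, 1e-9, None)
similarity = normalised @ normalised.T

# Items most similar to item 0, excluding item 0 itself.
scores = similarity[0].copy()
scores[0] = -np.inf
print("Most similar to item 0:", np.argsort(scores)[::-1])
```

The TensorFlow version in the post follows the same logic, with the extra work going into data processing and making the lookups fast enough for real-time responses.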

Intro

Recommendation systems are found under the hood of many popular services and websites. The e-commerce and retail industries use them to increase their sales, music services use them to suggest interesting songs to their listeners, and news sites rank their daily articles based on their readers' interests. If you really think about it, recommendation systems can be used in pretty much every area of daily life. For example, why not automatically recommend better choices to house investors, guide your friends around your hometown without you being there, or suggest which company to apply to if you are looking for a job?

All pretty cool stuff, right?

But recommendation systems need to be a lot smarter than plain old vanilla software. In fact, the engine is made up of multiple machine learning modules that aim to rank items of interest for each user based on the user's preferences and the items' properties.
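As a toy illustration of that ranking idea (entirely made-up numbers, not our actual model), you can think of it as scoring each item against a user's preference vector and sorting:

```python
import numpy as np

user_preferences = np.array([0.9, 0.1, 0.4])   # e.g. affinity to rock, jazz, pop
item_properties = np.array([
    [0.8, 0.0, 0.2],   # item 0
    [0.1, 0.9, 0.0],   # item 1
    [0.5, 0.2, 0.9],   # item 2
])

scores = item_properties @ user_preferences    # one affinity score per item
ranking = np.argsort(scores)[::-1]             # highest score first
print("Recommended item order:", ranking)
```

Real engines learn both the preference and property vectors from data, and combine several such modules, but the end goal is the same: a ranked list tailored to each user.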

In this blog series, you will gain some insight into how recommendation systems work, how you can harness the Google Cloud Platform for scalable systems, and the architecture we used when implementing our music recommendation engine on the cloud. This first post is a light introduction to the overall system, and my follow-up articles will subsequently dive deeper into each of the machine learning modules and the tech that powers them.