databases

  Shine's very own Pablo Caif will be rocking the stage at the very first YOW! Data conference in Sydney. The conference will be running over two days (22-23 Sep) and is focused on big data, analytics, and machine learning. Pablo will give his presentation on Google BigQuery,...

At Shine we're big fans of Google BigQuery, Google's flagship big-data processing SaaS. Load in data of any size, write some SQL, and smash through datasets in mere seconds. We love it. It's the one true zero-ops model that we're aware of for grinding through big data without the headache of worrying about any infrastructure. It also scales to petabytes. Admittedly we've only got terabytes, but you've got to start somewhere, right? If you haven't yet been introduced to the wonderful world of BigQuery, then I suggest you take some time right after reading this post to go and check it out. Your first 1TB is free anyway. Bargain!

Anyway, back to the point of this post. There have been a lot of updates to BigQuery in recent months, both internally and in the form of new features, and I wanted to capture them all in a concise blog post. I won't go into great detail on each of them, but rather give a quick summary of each, which will hopefully give readers a good overview of what's been happening with the big Q lately. I've pulled together a lot of this material from various Google blog posts, videos, and announcements at GCP Next 2016.
Databases are the backbone of most modern web applications, and their performance plays a major role in user experience. Faster response times - even by a fraction of a second - can be the deciding factor in users choosing one option over another. It is therefore important to take response time into consideration when designing your databases, in order to provide the best possible performance. In this article, I'm going to discuss how to optimise DynamoDB performance by using partitions.
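As a taste of what that article covers, here is a minimal, hypothetical Python sketch of the core idea (it does not use DynamoDB itself): a partition key is hashed to select a partition, so a well-distributed key spreads load across partitions, while a low-cardinality key funnels everything onto one "hot" partition. The MD5 hash and four-partition table here are illustrative assumptions, not DynamoDB's actual internals.

```python
import hashlib
from collections import Counter

def partition_for(partition_key: str, num_partitions: int) -> int:
    """Map a partition key to a partition by hashing it.
    (DynamoDB hashes the partition key internally; MD5 is just
    an illustration of the concept.)"""
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

# A high-cardinality key (e.g. a user ID) spreads items across partitions...
uniform = Counter(partition_for(f"user-{i}", 4) for i in range(1000))

# ...whereas a low-cardinality key (e.g. a status flag) concentrates them:
# every item with the same key lands on the same partition.
skewed = Counter(partition_for(status, 4) for status in ["ACTIVE"] * 1000)

print(uniform)  # roughly 250 items in each of the 4 partitions
print(skewed)   # all 1000 items on a single "hot" partition
```

The design takeaway is the same in real DynamoDB: throughput is divided across partitions, so a hot partition caps your effective throughput no matter how much you provision.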
Quite a while back, Google released two new features in BigQuery. One was federated sources, which allow you to query external data, such as files in Google Cloud Storage (GCS), directly with SQL. The other was user-defined functions (UDFs). Essentially, a UDF allows you to ram JavaScript right into your SQL to perform the map phase of your query. Sweet! In this blog post, I'll go step-by-step through how I combined federated sources and UDFs to create a scalable, totally serverless, and cost-effective ETL pipeline in BigQuery.
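The shape of that pipeline can be sketched in plain Python, purely as an analogy: a federated source is raw CSV read at query time rather than pre-loaded, and a UDF is a per-row transform (JavaScript inside the SQL, in BigQuery's case). The `transform_row` function and the sample columns below are hypothetical stand-ins, not anything from the actual pipeline.

```python
import csv
import io

# Stand-in for a federated source: raw CSV as it might sit in GCS,
# read at query time instead of being loaded into a BigQuery table.
RAW_CSV = """user_id,page,ms
1,/home,130
2,/pricing,245
1,/docs,90
"""

def transform_row(row: dict) -> dict:
    """Stand-in for the UDF's map phase: a per-row transformation
    (in BigQuery this logic would be JavaScript embedded in the SQL)."""
    return {
        "user_id": int(row["user_id"]),
        "page": row["page"],
        "slow": int(row["ms"]) > 200,  # hypothetical derived column
    }

def run_pipeline(raw_csv: str) -> list:
    """Extract (read the federated source), Transform (apply the UDF),
    Load (here we just return the rows you'd write to a table)."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    return [transform_row(r) for r in reader]

rows = run_pipeline(RAW_CSV)
print(rows)
```

The appeal of doing this inside BigQuery is that all three stages run in one SQL statement with no servers to manage: the "extract" and "load" are just table references, and the "transform" is the UDF.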

Shine is extremely proud to announce that Pablo Caif has been invited to present at GCP Next 2016, which is Google's largest annual cloud platform event held in San Francisco. Pablo will be presenting on the work Shine have done for Telstra, which involves building solutions on GCP to...

With the current move to cloud computing, the need to scale applications presents a challenge for storing data. If you are using a traditional relational database, you may find yourself working on a complex policy for distributing your database load across multiple instances. Such a solution often presents a lot of problems and probably won't scale elastically. As an alternative, you could consider a cloud-based NoSQL database. Over the past few weeks I have been analysing a few such offerings, each of which promises to scale as your application grows, without requiring you to think about how you might distribute the data and load.

Shine Senior Consultant Ben Teese has had a piece published in the latest DZone Guide to Database and Persistence Management. In the article, Ben gives an overview and comparison of the Firebase, Meteor, and Amazon Cognito platforms. These platforms all aim to solve the use-case of...