Why work for Shine
Shine (established in 1998) exists to create a culture and environment where people who are passionate about technology can deliver excellence in software and business outcomes for our clients. We provide our people with a flexible work environment, a great culture, well-defined career paths and plenty of learning and training opportunities.
Our culture is deeply technical. We love learning and sharing our experiences with each other, our clients and the wider software community. We provide time and incentives for our people to share: via our blog, through Guilds around specific technologies, at regular in-office forums (usually accompanied by food and drinks), and by presenting at meet-ups and conferences.
The Role – Senior Software Engineer | Data Engineer – Melbourne
Technologies such as Snowflake, BigQuery, Dataflow, GCP and AWS are widely used across Shine’s client base. We would like to hear from Software Engineers with expertise in these and related technologies who are passionate about coding and want to work with a like-minded team.
You will demonstrate:
- Experience with Snowflake: data ingestion, query optimisation, data segregation and ETL (AWS and Talend experience an advantage)
- Proficiency in understanding data and entity relationships, working with structured and unstructured data, and using SQL and NoSQL databases
- Strong data wrangling skills, using a variety of data processing tools (e.g. Apache Beam/Dataflow, or the Hadoop ecosystem/Elastic MapReduce)
- The ability to create and maintain efficient data pipelines (experience with Dataflow on GCP or AWS Data Pipeline is a plus)
- The ability to manage and process large, complex datasets for analysis (e.g. hierarchies of datasets, inter-relationships, partitions, views, etc.)
- Experience building infrastructure for the optimal extraction, transformation and loading of data from a wide variety of sources using ‘Big Data’ technologies
- The ability to code from program specifications in languages including, but not limited to, Python, Java, Node.js and SQL
- Experience working closely with analytics and data science teams to build and optimise ‘big data’ pipelines, data warehouses/lakes and reports/visualisations
- Familiarity with cloud-based messaging, scheduling and triggering products and services (e.g. Pub/Sub, Cloud Scheduler, SQS)
If you’re interested in this role, please apply here.