Senior Analytics Engineer

Business Intelligence San Francisco, California


Dino Hunter


We’re looking for an innovative Senior Analytics Engineer to help us build out a world-class analytics platform. The work includes extending and hardening our existing platform, built on technologies such as Amazon Kinesis, Hadoop, Storm, Cassandra, and Redshift. You will also lead our efforts to redesign the platform to meet Glu's new and evolving data challenges.

The ideal candidate has a strong engineering background, has built robust data platforms, and takes complete ownership of their area of expertise. We are passionate about maximizing the value that data and analytics provide to our business. This is a fantastic opportunity to use your engineering skills to make a material impact on a highly valued analytics platform.

Responsibilities:

  • Devise and engineer applications for the next generation of our analytics platform, supporting Glu's worldwide studios and central functions such as marketing.
  • Maintain, streamline, and harden existing data pipelines, from ingestion through ETL and batch processing.
  • Build and own tightly-engineered back-ends for various data applications.
  • Work with Analytics and Product Management to ensure optimal data design and efficiency.

Qualifications:

  • Bachelor's degree in computer science, mathematics, engineering, or another field with proven engineering experience
  • 5-7 years of software engineering experience, especially working on back-end data infrastructure
  • Proficiency with at least one of the following languages: Java, Python, Scala
  • Experience with SQL and SQL-like languages, especially Hive
  • Extensive experience and knowledge of the Hadoop Ecosystem
  • Basic statistics background (knows correlation from causation, and enjoys arguing about sample sizes)

Bonus Points:

  • Experience building data-rich web applications, especially with technologies like Angular.js and Node.js
  • Knowledge of distributed stream processing technologies such as Storm, Heron, Spark Streaming, or Google Dataflow
  • Knowledge of NoSQL application data stores such as HBase, Cassandra, DynamoDB, and BigTable
#LI-DB1