Data Engineer III (Big Data)

Data Analytics & Engineering Seattle, Washington



  • Apply proven expertise to build high-performance, scalable data warehouse applications
  • Securely source external data from numerous global partners
  • Intelligently design data models for optimal storage and retrieval 
  • Deploy comprehensive data quality checks to ensure high data quality
  • Optimize existing pipelines, implement new ones, and maintain all domain-related data pipelines
  • Own the end-to-end data engineering component of the solution
  • Collaborate with the program’s SMEs and data scientists

  • Proficiency in Big Data stack environments (Hadoop, MapReduce, Hive)
  • Competence with relational databases (Oracle, MySQL, Vertica)
  • Experience working with enterprise DE tools (Airflow) and ability to learn in-house DE tools
  • Coding and scripting experience with Python, Java, PHP, SQL, and CLI


Looking for:

- A Big Data background

- A good balance across all requirements rather than deep expertise in only one

- Not overqualified for the role