Principal Data Engineer

Data and Analytics | Santa Monica, California; New York City, New York; San Francisco, California



Our Data and Analytics team for Disney Streaming Services (DSS), a segment under Disney Media & Entertainment Distribution (DMED), is looking for a Principal Data Engineer. Data is essential for all our decision-making needs, whether it’s related to product design, measuring advertising effectiveness, helping users discover new content, or building new businesses in emerging markets. This data is deeply valuable and gives us insights into how we can continue improving our service for our users, advertisers, and content partners. Our Content Engineering team is seeking a hardworking Principal Data Engineer with a strong technical background and a passion for diving deep into Big Data to develop state-of-the-art data solutions.


Responsibilities:

  • Contribute to the design and growth of our Data Products and Data Warehouses around Content Performance and Content Engagement data
  • Develop and optimize performant databases, data models, integrations, and ETL pipelines in RDBMS and Big Data environments
  • Collaborate with Data Product Managers, Data Architects and Data Engineers to design, implement, and deliver successful data solutions
  • Help define technical requirements and implementation details for the underlying data warehouse and data marts
  • Maintain detailed documentation of your work and changes to support data quality and data governance
  • Ensure high operational efficiency and quality of your solutions to meet SLAs and support our commitments to customers
  • Be an active participant in and advocate of agile/scrum practices to ensure team health and process improvement


Basic Qualifications:

  • 9+ years of data engineering experience developing large data systems
  • You are a problem solver with strong attention to detail and excellent analytical and communication skills
  • Proven experience with at least one major RDBMS (SQL Server, MySQL or Oracle)
  • Experience with distributed systems such as Spark and the Hadoop ecosystem (MapReduce, YARN, HDFS, Hive, Presto, Pig, HBase) or similar
  • Solid experience with data integration toolsets and writing and maintaining ETL jobs
  • Familiarity with data modeling techniques and data warehousing best practices
  • Strong SQL skills and ability to create queries to extract and build tables
  • Good scripting skills in Bash and Python
  • You have experience with Scrum and Agile methodologies
  • Hands-on experience with Hadoop implementations, including a deep understanding of Hive or Spark for querying and processing data
  • Bachelor’s or Master’s Degree in Computer Science, Information Systems or related field