Staff Data Engineer
Position Description
The Staff Data Engineer will support the modernization of Snap One’s Enterprise Data Warehouse (EDW) ecosystem on the Azure Databricks platform. This role involves leveraging services such as Azure Data Factory, Microsoft Fabric, and Azure Data Lake Storage to ensure a smooth, efficient transition to modern cloud technologies. The position requires hands-on experience across cloud data technologies and engineering practices, spanning application development, hybrid environments, and cloud data services, along with an understanding of governance, security, monitoring, and cost management within the data platform.
The successful candidate will contribute to the engineering of Snap One’s EDW platform, sustaining SQL Server-based solutions while assisting in the platform’s modernization efforts. They will work closely with data teams to support data modeling, engineering, and operational readiness for Snap One’s data-driven applications and business intelligence processes.
Specific Responsibilities
- Assist in the migration of Snap One’s data infrastructure from SQL Server to Azure Databricks, focusing on implementing optimized data pipelines and analytics workflows.
- Monitor the health and reliability of data pipelines, leveraging automated alerting, logging, and key metrics to maintain data quality and system performance.
- Support the creation and maintenance of data governance frameworks, ensuring compliance with regulatory standards using tools like Unity Catalog for data lineage and access control.
- Collaborate on architecture diagrams, data flows, and technical designs to communicate data solutions clearly and support smooth implementations.
- Work with product owners, business stakeholders, and leadership to understand analytics and BI requirements, translating them into actionable data solutions.
Required Qualifications
- 5+ years of experience in data engineering, with a strong emphasis on building robust data pipelines using Python for automation and scripting, combined with advanced SQL skills for querying, optimizing, and managing relational databases on cloud platforms such as Azure Databricks. Demonstrated ability to integrate Python and SQL into scalable, efficient data workflows that support data-driven decision-making.
- Strong understanding of CI/CD pipelines and DevOps best practices for database management, including experience with automated deployment, version control, and testing strategies for data infrastructure. Proficient with tools such as Azure DevOps, Jenkins, and Git for version control and workflow automation, ensuring continuous integration and delivery in a cloud-based environment.
- Bachelor's degree and 5+ years of experience in designing and implementing modern data architectures, with a strong focus on scalability, performance, and availability. This includes expertise in key architectural components such as data lakes, cloud storage, distributed computing frameworks, and streaming or batch data processing pipelines to enable robust, flexible, and efficient data solutions.
- Hands-on experience with Databricks, including Delta Live Tables, Unity Catalog, and data integration workflows (see the first sketch following this list).
- Experience defining system architectures, evaluating technical feasibility, and prototyping applications to deliver scalable and effective solutions.
- Strong written and verbal communication skills with the ability to present technical information clearly to a variety of stakeholders.
- Strong understanding of Kimball’s data warehousing design principles, including when to choose a star versus a snowflake schema based on business needs. Proficient in designing dimension and fact tables, including slowly changing dimensions and the major fact table types (transactional, periodic snapshot, accumulating snapshot), to create scalable data models (see the second sketch following this list).
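
For illustration only, and not part of the formal requirements: a minimal sketch of a Delta Live Tables pipeline of the kind this role would build. The table names, landing-zone path, and expectation rule are hypothetical.

```python
# Minimal Delta Live Tables sketch (hypothetical tables and paths).
# On Databricks, `spark` and the `dlt` module are provided by the runtime.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested from a hypothetical landing zone")
def orders_raw():
    return spark.read.format("json").load("/landing/orders")

@dlt.table(comment="Cleansed orders with a basic data-quality expectation")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def orders_clean():
    # Rows failing the expectation above are dropped and counted in
    # the pipeline's data-quality metrics.
    return dlt.read("orders_raw").withColumn("ingested_at", F.current_timestamp())
```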
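
Likewise, a minimal sketch of maintaining a Type 2 slowly changing dimension with a Delta Lake merge. The table `dim_customer`, its columns, and the staged `updates_df` are hypothetical, and a production pipeline would handle more cases.

```python
# Minimal SCD Type 2 sketch on Databricks / Delta Lake.
# Assumes a Delta dimension table `dim_customer` with `is_current`,
# `effective_date`, and `end_date` columns, and a staged DataFrame
# `updates_df` containing only changed or brand-new source rows.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

dim = DeltaTable.forName(spark, "dim_customer")

# Step 1: close out the current version of any customer whose
# tracked attribute changed.
(dim.alias("t")
    .merge(updates_df.alias("s"),
           "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(
        condition="t.address <> s.address",  # defensive check on the tracked attribute
        set={"is_current": "false",
             "end_date": "current_date()"})
    .execute())

# Step 2: append the new version of each changed or new customer.
(updates_df
    .withColumn("is_current", F.lit(True))
    .withColumn("effective_date", F.current_date())
    .withColumn("end_date", F.lit(None).cast("date"))
    .write.format("delta").mode("append").saveAsTable("dim_customer"))
```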
Preferred Qualifications
- Experience with in-memory analytics solutions such as Power BI or Azure Analysis Services.
- Familiarity with core Azure technologies such as Databricks, Azure Data Factory (ADF), Azure SQL, Logic Apps, Data Lake Storage, and Azure Storage.
- Hands-on experience with APIs and data ingestion tools like Fivetran or HVR.
- Experience working with other cloud data platforms such as Google BigQuery or Snowflake.