Analytics Engineer
What You’ll Do
- Design and build robust data pipelines to ingest, transform, and model data efficiently.
- Develop and maintain a semantic layer to define consistent, reusable business metrics.
- Implement SQLMesh (or dbt) for transformation workflows and data versioning.
- Work with Cube (or similar) to create performant, queryable metrics for dashboards and applications.
- Collaborate with stakeholders to understand reporting needs and define clear KPIs.
- Optimize data models for performance, scalability, and cost efficiency.
- Ensure data governance and quality by implementing testing, monitoring, and documentation.
- Integrate data from multiple sources into our Redshift data warehouse.
- Improve query performance and ensure real-time analytics capabilities as needed.
What We’re Looking For
- 3+ years of experience in Analytics Engineering, Data Engineering, or BI Engineering.
- Strong SQL skills and experience with data modelling (star and snowflake schemas).
- Hands-on experience with dbt, SQLMesh, or similar transformation frameworks.
- Knowledge of Cube, LookML, MetricFlow, or other semantic layer tools.
- Experience working with data warehouses (BigQuery, Snowflake, Redshift, etc.).
- Familiarity with ETL/ELT processes and workflow orchestration (Airflow, Dagster).
- Strong understanding of data governance, testing, and version control (e.g., CI/CD for data).
- Ability to work closely with business teams to define meaningful and actionable metrics.
- Excellent communication skills and a passion for scalable, high-quality data systems.
Nice to Have
- Experience with streaming data pipelines (Kafka, Flink, etc.).
- Familiarity with Python for data engineering.
- Experience with data visualization tools (Metabase, Tableau, Power BI, Looker).