Senior Data Engineer

Technical · Kings Langley, Hertfordshire / Glasgow, Scotland


Description

Do you want to work to make Power for Good?

We're the world's largest independent renewable energy company. We're driven by a simple yet powerful vision: to create a future where everyone has access to affordable, zero carbon energy.

We know that achieving our ambitions would be impossible without our people. Because we're tackling some of the world's toughest problems, we need the very best people to help us. They're our most important asset, which is why we continually invest in them.

RES is a family with a diverse workforce, and we are dedicated to the personal and professional growth of our people, no matter what stage of their career they're at. We can promise you rewarding work which makes a real impact, the chance to learn from inspiring colleagues from across a growing, global network and opportunities to grow personally and professionally.

Our competitive package offers rewards and benefits including pension schemes, flexible working, and top-down emphasis on better work-life balance. We also offer private healthcare, discounted green travel, 25 days holiday with options to buy/sell days, enhanced family leave and four volunteering days per year so you can make a difference somewhere else.

The position

We are looking for a Senior Data Engineer with advanced expertise in Databricks to lead the development of scalable data solutions across our asset performance management software, within our Digital Solutions business.

This role involves architecting complex data pipelines, mentoring junior engineers, and driving best practices in data engineering and cloud analytics. You will play a key role in shaping our data strategy, which is the backbone of our software, and in enabling high-impact analytics and machine learning initiatives.

Accountabilities

  • Design and implement scalable, high-performance data pipelines.
  • Work with the lead cloud architect on the design of data lakehouse solutions leveraging Delta Lake and Unity Catalog.
  • Collaborate with cross-functional teams to define data requirements, governance standards, and integration strategies.
  • Champion data quality, lineage, and observability through automated testing, monitoring, and documentation.
  • Mentor and guide junior data engineers, using your passion for data engineering to foster a culture of technical excellence and continuous learning.
  • Drive the adoption of CI/CD and DevOps practices for data engineering workflows.
  • Stay ahead of emerging technologies and Databricks platform updates, evaluating their relevance and impact.

Knowledge

  • Deep understanding of distributed data processing, data lakehouse architecture, and cloud-native data platforms.
  • Optimization of data workflows for performance, reliability, and cost-efficiency on cloud platforms (particularly Azure but experience with AWS and/or GCP would be beneficial).
  • Strong knowledge of data modelling, warehousing, and governance principles.
  • Knowledge of data privacy and compliance standards (e.g., GDPR, HIPAA).
  • Understanding of OLTP and OLAP and what scenarios to deploy them in.
  • Understanding of incremental processing patterns.
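To give a flavour of the incremental processing patterns mentioned above, here is a minimal high-water-mark sketch in plain Python. The record shape and field names are illustrative assumptions only, not part of the role:

```python
from dataclasses import dataclass

# Hypothetical record shape for illustration; field names are assumptions.
@dataclass
class Reading:
    turbine_id: str
    timestamp: int  # epoch seconds
    power_kw: float

def incremental_load(source: list[Reading], watermark: int) -> tuple[list[Reading], int]:
    """Select only records newer than the stored watermark (high-water-mark
    pattern), returning the new batch and the advanced watermark."""
    new_rows = [r for r in source if r.timestamp > watermark]
    new_watermark = max((r.timestamp for r in new_rows), default=watermark)
    return new_rows, new_watermark

# First run processes everything; later runs only pick up new arrivals.
batch1, wm = incremental_load(
    [Reading("T1", 100, 1.5), Reading("T1", 200, 1.7)], watermark=0
)
batch2, wm = incremental_load(
    [Reading("T1", 100, 1.5), Reading("T1", 200, 1.7), Reading("T1", 300, 1.9)],
    watermark=wm,
)
```

The same idea underpins change-data-capture and merge-based loads at lakehouse scale: persist the watermark, read only what is newer, and advance it atomically with the write.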

Skills

  • Strong proficiency in Python and SQL. Experience of working with Scala would be beneficial.
  • Proven ability to design and optimize large-scale ETL/ELT pipelines.
  • Building and managing workflow orchestrations.
  • Excellent oral and written communication, both within the team and with our stakeholders.
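By way of illustration of the ETL/ELT skills above, here is a minimal ELT sketch using Python's built-in sqlite3 as a stand-in warehouse. Table and column names are hypothetical:

```python
import sqlite3

# In-memory database stands in for a warehouse; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_readings (turbine_id TEXT, power_kw REAL)")

# Extract + Load: land the source data unmodified in a staging table.
rows = [("T1", 1.5), ("T1", 2.5), ("T2", 3.0)]
conn.executemany("INSERT INTO raw_readings VALUES (?, ?)", rows)

# Transform: derive an aggregate table with SQL inside the warehouse
# (the "T" last, which is what distinguishes ELT from ETL).
conn.execute("""
    CREATE TABLE avg_power AS
    SELECT turbine_id, AVG(power_kw) AS avg_kw
    FROM raw_readings
    GROUP BY turbine_id
""")
result = dict(conn.execute(
    "SELECT turbine_id, avg_kw FROM avg_power ORDER BY turbine_id"
))
```

In production the same pattern would typically run as Spark SQL or dbt models over Delta tables rather than sqlite.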

Experience

  • 5+ years of experience in data engineering, with at least 2 years working extensively with Databricks and orchestrated pipelines such as dbt, Delta Live Tables (DLT), or Databricks Workflows jobs.
  • Experience with Delta Lake and Unity Catalog in production environments.
  • Experience with CI/CD tools and version control systems (e.g., Git, GitHub Actions, Azure DevOps, Databricks Asset Bundles).
  • Experience with both batch and real-time (streaming) data processing.
  • Experience of working on machine learning workflows and integration with data pipelines.
  • Experience leading data engineering projects with distributed teams, ideally in a cross functional environment.
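As an example of the CI/CD experience described above, a deployment for a Databricks Asset Bundle might look like the following hypothetical GitHub Actions workflow. The workspace host, secret names and the "prod" target are placeholder assumptions:

```yaml
# Hypothetical workflow; secret names and target are placeholders.
name: deploy-bundle
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main
      - name: Validate and deploy the bundle
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
        run: |
          databricks bundle validate
          databricks bundle deploy -t prod
```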

Qualifications

  • Databricks Certified Data Engineer Professional or equivalent certification.

At RES we celebrate difference as we know it makes our company a great place to work. Encouraging applicants with different backgrounds, ideas and points of view, we create teams who work together to solve complex problems and design practical solutions for our clients. Our multiple perspectives come from many sources including the diverse ethnicity, culture, gender, nationality, age, sex, sexual orientation, gender identity and expression, disability, marital status, parental status, education, social background and life experience of our people.

#LI-GF1

