Data Engineer III

Professional Services: San Antonio, Texas; Austin, Texas


Rackspace is looking for a Data Engineer to join our Data Engineering Organization. As an Engineer, you will build and operate a company-wide data platform, providing data warehouse, analytics, and application services to teams across the organization. You should have a passion for large-scale distributed systems, from design to deployment, and well-formed opinions about Big Data and data platforms. The platform you build will influence analytics and business decisions at Rackspace. You make conscious choices and can explain the reasoning behind them, or teach a teammate what they need to know to review your code. You are responsible for the decisions you make and the code that you ship.



What You'll Do

  • Write excellent, fully-tested code to build ETL/ELT data pipelines in the cloud
  • Provide in-depth and always-improving code reviews to your teammates
  • Build cloud data solutions and provide domain perspective on storage, big data platform services, serverless architectures, the Hadoop ecosystem, RDBMS, DW/DM, NoSQL databases, and security
  • Participate in deep architectural discussions to build confidence and ensure customer success when building new solutions and migrating existing data applications to the AWS/GCP platforms
  • Fix things before they break
  • Troubleshoot and support production, addressing technical debt to improve sustainability
  • Implement product features and refine specifications with our data scientists, data analysts, and data architects
  • Resolve operational issues by collaborating with upstream support groups and other engineering teams
  • Work with other teams to reduce friction in the data ingestion and extraction pipelines



Desired Qualities

  • 7 years of development experience building data pipelines
  • Bachelor’s degree or equivalent experience required, preferably in Computer Science or a related field
  • Minimum of 3 years of experience architecting modern data warehousing platforms using Big Data and cloud technologies
  • Strong SQL skills, including relevant experience with HiveQL
  • Experience using AWS/GCP to move data from on-premises servers to the cloud
  • Strong Python development skills for data transfers and extractions (ETL or ELT)
  • Experience developing and deploying ETL solutions with Informatica or similar tools
  • Experience working within an agile development process (Scrum, Kanban, etc.)
  • Familiarity with CI/CD concepts
  • Demonstrated proficiency in creating technical documentation
  • Expertise with software build and deployment tools like Jenkins, Maven, rpm, pip