Senior Cloud Data Engineer

Information Technology · San Jose, California · San Francisco, California


Job Description: That’s a cool job - I want it!

Ready to shake things up? Join us as we pursue our disruptive new vision to make machine data accessible, usable, and valuable to everyone. We are a company filled with people who are passionate about our product and strive to deliver the best experience for our customers. At Splunk, we’re committed to our work, our customers, having fun, and, most importantly, to each other’s success. We continue to be on a tear while enjoying incredible growth year over year.

As a Cloud Data Engineer, you should be an expert in data warehousing components (e.g., ETL, ELT, cloud databases, and reporting), in infrastructure (hardware and software), and in their integration. You should have a deep understanding of enterprise-level data lake architectures spanning multiple platforms (RDBMS, columnar, cloud), along with expertise in the design, creation, management, and business use of extremely large datasets. Excellent business and communication skills are essential: you will work with business owners to develop and define key business questions, and build the datasets that answer them. You will be expected to build efficient, flexible, extensible, and scalable ETL and reporting solutions.

Responsibilities: I want to and can do that!

  • Design and develop framework to automate data ingestion and integration of structured data from a wide variety of enterprise data sources, at scale.
  • Design and develop data pipeline components and integrate them with the Splunk and other ETL Platforms.
  • Design data quality monitoring and automated data cleaning.
  • Assist the business liaison and ETL function with data related issues such as assessing data quality, data consolidation, evaluating existing data sources, etc.
  • Manage a large data infrastructure platform and drive stability through automated monitoring, alerting, and remediation.
  • Develop for, configure, and support Cloud computing solutions.

Requirements: I’ve already done that or have that!

  • Experience building scalable and reliable data pipelines using data engineering technologies such as APIs, Amazon Redshift, Snowflake, and Talend
  • 8+ years of experience with, and detailed knowledge of, AWS-based data warehouse technical architectures, data modeling, infrastructure components, ETL/ELT and reporting/analytics tools and environments, data structures, and hands-on SQL coding
  • 4+ years of experience designing and developing complex ETL/ELT programs with Python, visual ETL tools, etc.
  • 3+ years of experience developing complex SQL
  • Experience with cloud storage and computing technologies such as Amazon Redshift and Snowflake
  • 3+ years of experience programming in Python
  • 2+ years of experience with Bitbucket
  • 2+ years of experience in data quality testing; adept at writing test cases and scripts, and at presenting and resolving data issues
  • 2+ years of experience implementing and programming data ingestion and ETL programs with large datasets (terabyte-scale analytical environments)
  • Experience with API-based integration from multiple SaaS data sources
  • Experience developing and implementing streaming data ingestion solutions
  • 2+ years of experience with Agile methodology

Education: Got it!

  • Bachelor’s degree in Computer Science or a related field

We value diversity at our company. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or any other legally protected characteristic applicable in the location in which you are applying.

For job positions in San Francisco, CA, and other locations where required, we will consider for employment qualified applicants with arrest and conviction records.


Thank you for your interest in Splunk!