Sr. Data Engineer
We are looking for a Cloud Sr. Data Engineer to join our Enterprise Data Engineering team. This is a unique greenfield opportunity to influence a major transformational initiative and implement a modern big data reference architecture that will drive enterprise analytics. The Cloud Sr. Data Engineer will be part of a team supporting data pipeline development and management built primarily on Google Cloud Platform (BigQuery) and AWS (S3).
- Define and develop data pipeline solutions: architect end-to-end design, mapping, and hands-on development for GCP, following integration, security, and development best practices.
- Work with Enterprise Architecture, Data Management, DevOps, QA, Release Management, and Change Management teams throughout the project lifecycle and agile sprints to promote continuous data pipeline releases.
- Manage integration of data from various data sources, both internal and external.
- Serve as SME on GCP services, including BigQuery, Dataflow, Pub/Sub, Cloud Functions, and Dataproc.
- Follow Incident and Problem Management processes when providing production support, utilizing organizational ITIL processes.
- Work with internal business and technical partners to collect and document the functional and non-functional requirements in JIRA stories for business projects and technical enhancements.
- Collaborate with internal developers and stakeholders from e-Commerce, ERP, Salesforce, business, and product development teams in evolving the Enterprise Data Center of Excellence.
- 7+ years of experience implementing data warehousing solutions including public cloud big data platforms.
- A well-grounded data engineer, technically skilled in modern big data principles and technologies.
- Experience with the GCP big data platform is strongly preferred (Bigtable, BigQuery, Airflow, Data Fusion).
- Experience in data pipeline development with SQL, Python, Spark, Dataproc, Kafka, and event streaming is preferred.
- Working knowledge of and experience with Google Cloud Data Fusion, Cask (CDAP), and Airflow.
- Excellent Python coding and SQL skills
- Familiarity with pub/sub event-streaming platforms (e.g., Kafka) is beneficial.
- Prior experience developing ETL/ELT solutions for enterprise analytics
- Strong understanding of modern big data technologies
- Good understanding of various data file formats
- Familiar with Confluence and JIRA. Should be comfortable working on field mappings between source and target systems, creating design documents, and following Agile sprint methodology.
- Support new-launch hyper-care and be well versed in integration cutover and go-live activities.
- Strong analytical and problem-solving skills with excellent written and verbal communication are mandatory.
- Strong organizational skills with the ability to multi-task, prioritize, and execute on assigned deliverables.
- Able to work with ambiguity and to troubleshoot and problem-solve with minimal supervision and guidance.
- Able to handle a fast-paced environment, supporting Operations and Engineering activities simultaneously.
- Ability to communicate effectively at all levels: business, technology, peers, and management.