Senior Engineer 1

Technology


Description

JCPenney India

JCPenney is the shopping destination for diverse, working American families. With inclusivity at its core, the Company’s product assortment meets customers’ everyday needs and helps them commemorate every special occasion with style, quality, and value. JCPenney offers a broad portfolio of fashion, apparel, home, beauty, and jewelry from national and private brands and provides personal services including salon, portrait, and optical. The Company and its 50,000 associates worldwide serve customers where, when, and how they want to shop – from jcp.com to more than 650 stores in the U.S. and Puerto Rico.

The JCPenney Services India office opened in 2016 and is an extension of the JCPenney office in the US. With over 700 employees, the center provides critical functions including technology, e-commerce operations, retail operations, merchandising, and other capabilities. The JCPSI center is not only an alternative location but also critical to JCPenney’s long-term strategy. For additional information, please visit jcp.com and follow JCPenney India on LinkedIn and Instagram.

Required Skills:

  • Data engineer with 6+ years of hands-on experience working on big data platforms
  • Experience building and optimizing big data pipelines and datasets, spanning data ingestion, processing, and data visualization
  • Experience with big data tools: Hadoop, HDFS, Sqoop, PySpark, Hive, Kafka, YARN
  • Good understanding of Python, Scala, and Spark programming
  • Strong technical knowledge of Spark and experience building Spark pipelines in Python or Scala using RDDs, Spark SQL, and DataFrames (see the pipeline sketch after this list)
  • Experience with AWS cloud services such as EC2, EMR, S3, and Redshift, including data lake architectures
  • Experience working with Airflow (orchestration) and Jenkins (see the DAG sketch after this list)
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with structured and unstructured datasets.
  • Ability to build processes supporting data transformation, data structures, metadata, dependency management, and workload management
  • A successful history of manipulating, processing, and extracting value from large disconnected datasets.
  • Ability to handle different file formats (ORC, Avro, Parquet, JSON) and unstructured data
  • Advanced working knowledge of SQL, including query authoring, with experience in relational databases and working familiarity with a variety of database technologies
  • Experience supporting and working with cross-functional teams in a fast-paced, dynamic environment
  • Experience working in an Agile environment
  • Proficiency in shell scripting
  • Experience maintaining platform stability and 24/7 availability (production support)
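To give a concrete picture of the Spark pipeline work described above, here is a minimal PySpark sketch of an ingest-transform-load batch job. It is illustrative only, not JCPenney code: the S3 paths, column names, and aggregation logic are hypothetical placeholders.

```python
# Minimal PySpark batch pipeline sketch: ingest raw JSON, transform with the
# DataFrame API and Spark SQL, write partitioned Parquet to a data lake.
# All paths, columns, and names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-batch").getOrCreate()

# Ingest: raw JSON landed in a (hypothetical) S3 raw zone.
orders = spark.read.json("s3://example-lake/raw/orders/")

# Transform: clean and aggregate with the DataFrame API.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "store_id")
    .agg(F.sum("amount").alias("revenue"),
         F.count("*").alias("order_count"))
)

# The same transformation expressed through Spark SQL.
orders.createOrReplaceTempView("orders")
daily_revenue_sql = spark.sql("""
    SELECT to_date(order_ts) AS order_date, store_id,
           SUM(amount)       AS revenue,
           COUNT(*)          AS order_count
    FROM orders
    WHERE status = 'COMPLETED'
    GROUP BY to_date(order_ts), store_id
""")

# Load: write partitioned Parquet to the curated zone of the lake.
(daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-lake/curated/daily_revenue/"))

spark.stop()
```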
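Likewise, the Airflow orchestration requirement can be pictured with a minimal DAG that schedules a daily Spark job. Again this is only a sketch: the schedule, spark-submit invocation, and script path are assumptions, not the team's actual setup.

```python
# Minimal Airflow 2.x DAG sketch: run a Spark job once a day on YARN.
# The dag_id, schedule, and spark-submit command are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="orders_daily_batch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command=(
            "spark-submit --master yarn --deploy-mode cluster "
            "/opt/jobs/orders_daily_batch.py"
        ),
    )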

Good to Have Skills:

  • Data storage solutions: Snowflake
  • Real-time data processing and streaming technologies: Apache Kafka streaming with PySpark (see the streaming sketch after this list)
  • Databricks
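For the Kafka-with-PySpark item above, a minimal Structured Streaming sketch might look as follows. The broker address, topic, schema, and checkpoint path are hypothetical, and the job assumes the spark-sql-kafka connector package is available on the Spark classpath.

```python
# Minimal PySpark Structured Streaming sketch: consume a Kafka topic,
# parse JSON payloads, and append them to a data lake as Parquet.
# Broker, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("store_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read the Kafka topic as a streaming DataFrame and parse the JSON value.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
    .select(F.from_json(F.col("value").cast("string"),
                        event_schema).alias("e"))
    .select("e.*")
)

# Continuously append parsed events to the lake; the checkpoint location
# lets the stream recover exactly where it left off after a restart.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-lake/stream/orders/")
    .option("checkpointLocation", "s3://example-lake/checkpoints/orders/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```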