Big Data Engineer
Description
Responsibilities
- Participate in our product development from ideation to deployment and beyond
- Create groundbreaking new features for our customers and for internal use
- Work with our data teams to make our products faster, smarter, and more intuitive to use
- Maintain and help optimise existing systems and applications
- Lead a small team of individual contributors to deliver on business needs
Requirements
- You have 3 to 7 years of software engineering experience, with recent work on Big Data technologies, preferably Hadoop and Spark
- You are experienced at developing scalable and reliable cloud-based applications
- Strong programming skills in at least one popular language: Scala, Java, or Python
- Strong hands-on experience with cloud infrastructure (AWS: S3, Redshift, etc.) and cloud-native architecture
- Understanding of TDD, Continuous Integration, and Continuous Delivery
- Experience with relational and document databases, preferably MySQL, PostgreSQL, and/or MongoDB. Knowledge of other systems such as Kafka, Redis, etc. is a plus
- Working knowledge of common data formats such as XML, JSON, Avro, and Parquet
- Understanding of the product development lifecycle, version control, and build tools (preferably Git and Maven)
- You have excellent communication and presentation skills
- You have demonstrated the ability to work in an agile environment