Sr. Big Data Engineer
Location: Atlanta, GA
Zscaler enables the world’s leading organizations to securely transform their networks and applications for a mobile and cloud-first world. Applications have moved from the data center to the cloud and users are connecting to their workloads from everywhere, but security has remained anchored to the data center. Zscaler is redefining security by moving it out of the data center and into the cloud.
The Zscaler Cloud Security Platform uses software-defined business policies, not appliances, to securely connect the right user to the right application, regardless of device, location, or network. Zscaler offers two service suites. Zscaler Internet Access™ scans every byte of traffic to ensure that nothing bad comes in and nothing good leaks out. Zscaler Private Access™ offers authorized users secure and fast access to internal applications hosted in the data center or public clouds—without a VPN.
Zscaler services are 100% cloud delivered and offer the simplicity, enhanced security, and improved user experience that traditional appliances or hybrid solutions are unable to match. Used in more than 185 countries, the Zscaler multi-tenant, distributed security cloud protects thousands of customers from cyberattacks and data loss, enabling customers to embrace the agility, speed, and cost containment of the cloud—securely.
Come and join our team and be part of this exciting transformation to cloud-based security.
As a Big Data Engineer, you will work on building the next generation of Zscaler's security analytics platform. You will play a crucial role in building a platform to collect and ingest several billion (and growing) log events from Zscaler's globally distributed security infrastructure and provide actionable insights to customers and Zscaler's security researchers.
- Design and create multi-tenant systems capable of loading and transforming large volumes of structured and semi-structured, fast-moving data
- Build robust and scalable data infrastructure (both batch processing and real-time) to support the needs of internal and external users
- Build data pipelines
- Run ETL into Hadoop/Elasticsearch
- 5+ years of experience in Python or Java development a must (strong Scala skills would be acceptable as well)
- 5+ years of experience in big data application development (Spark, Kafka, Storm, Kinesis, and building data pipelines)
- Ability to troubleshoot and resolve complex performance issues with queries on the Spark platform (Spark SQL)
- Familiarity with implementing services following the REST model
- Excellent interpersonal, technical and communication skills
- Ability to learn, evaluate and adopt new technologies
- Bachelor's degree in computer science or equivalent experience
- Experience working with data processing infrastructure
- Experience with data serialization techniques and data stores for persisting events
Additional information about Zscaler (NASDAQ: ZS) is available at http://www.zscaler.com.
All qualified applicants will receive consideration for employment without regard to race, sex, color, religion, sexual orientation, gender identity, national origin, protected veteran status, or on the basis of disability.