Principal SWE, Big Data Engineering
Location: San Jose, CA / Atlanta, GA
For over 10 years, Zscaler has been disrupting and transforming the security industry. Our 100% purpose-built cloud platform delivers the entire gateway security stack as a service through 150 global data centers, securely connecting users in over 185 countries to their applications regardless of device, location, or network, protecting over 4,500 companies and detecting over 100 million threats a day.
We work in a fast-paced, dynamic, make-it-happen culture. Our people are some of the brightest and most passionate in the industry and thrive on being the first to solve problems. We are always looking to hire highly passionate, collaborative, and humble people who want to make a difference.
The Principal Software Engineer, Data Engineering will create the next generation of Zscaler's security analytics platform. The candidate will help build a platform to collect and ingest several billion (and growing) log events from Zscaler's globally distributed security infrastructure and provide actionable insights to customers and Zscaler's security researchers.
Responsibilities:
- Design and build multi-tenant systems capable of loading and transforming large volumes of structured and semi-structured, fast-moving data
- Build robust and scalable data infrastructure (both batch processing and real-time) to support needs from internal and external users
- Implement measures to address data privacy, security and compliance
- Work with product management, marketing, and security research teams to identify requirements and evolve the data architecture
Qualifications:
- 12+ years of hands-on experience with software development and enterprise data warehouse solutions
- Must be proficient coding in Java/Scala; Python is a plus
- Must have expertise with the Spark platform, including building Spark infrastructure
- Must have expertise with Hadoop architecture (ability to set up a Hadoop cluster from scratch and to maintain, troubleshoot, and tune it by reviewing logs from the various Hadoop services)
- 3+ years of experience building high-performance data processing infrastructure, accounting for concurrency, latency, and efficiency through profiling, log review, etc.
- 2+ years working with query engines such as Presto, Hive or Spark SQL preferred
- Ability to identify the right data serialization techniques and data stores for persisting events
- Strong understanding of the Hadoop stack: HDFS, MapReduce, YARN/Mesos, ZooKeeper
- 2+ years working with data processing frameworks such as Spark, Kafka, Storm, and Elasticsearch
- Experience architecting systems following the REST architectural style
- Excellent interpersonal, technical and communication skills
- Ability to learn, evaluate and adopt new technologies
- Ability to prioritize multiple tasks in a fast-paced environment
- Bachelor's degree in Computer Science or equivalent experience
- Experience working with AWS - EC2, S3, EMR, Redshift, etc.
- Basic understanding of statistical analysis
- Practical experience with the Scala programming language
- Application of machine learning to security log analytics
People who excel at Zscaler are smart, motivated and share our values. Ask yourself: Do you want to team with the best talent in the industry? Do you want to work on disruptive technology? Do you thrive in a fluid work environment? Do you appreciate a company culture that enables individual and group success and celebrates achievement? If you said yes, we’d love to talk to you about joining our award-winning team.
Learn more at zscaler.com or follow us on Twitter @zscaler. Additional information about Zscaler (NASDAQ: ZS) is available at http://www.zscaler.com. All qualified applicants will receive consideration for employment without regard to race, sex, color, religion, sexual orientation, gender identity, national origin, protected veteran status, or on the basis of disability.