Data Infrastructure Team Lead

R&D Prague, Czech Republic


Description

Sizmek is the largest independent buy-side advertising platform that creates impressions that inspire. Sizmek provides powerful, integrated solutions that enable data, creative, and media to work together for optimal campaign performance across the entire customer journey. Sizmek operates its platform in more than 70 countries, with local offices in many countries providing award-winning service throughout the Americas, EMEA, and APAC, and connecting more than 20,000 advertisers and 3,600 agencies to audiences around the world.

We are currently looking for an experienced manager to lead our Data Infrastructure team, based in Prague with members in remote locations.

In this role, you will be in charge of our data stores and pipelines, which include data from our DSP, DMP, and Ad Server. You will run projects operating on multi-petabyte data sets, such as events from real-time bidding traffic or from serving media content, ingesting and processing hundreds of terabytes of data every day, providing it to other teams within the company, and building analytics and insights for our customers.

This position is for a dynamic person who likes challenges. The ideal candidate is an experienced technology manager with a strong background in data processing and solid knowledge of Big Data platforms and frameworks.


Responsibilities:

  • Lead an agile engineering team, working with developers, program / product managers, and other team leads throughout the development cycle
  • Drive development of fault-tolerant, scalable, batch & real-time distributed data processing systems
  • Participate in architecture discussions, influence the roadmap, and take ownership of and responsibility for new projects
  • Maintain a constant focus on optimizing performance and resource utilization across large production clusters
  • Drive continual evolution toward newer tech stacks and architectures while supporting existing platforms and applications
  • Facilitate global teams across our engineering locations
  • Work in a fast-paced engineering environment
  • Be a high-energy, creative, and resourceful person oriented toward team results

Requirements:

  • 5+ years in a similar role, including at least 2 years using agile software development methodologies
  • Experience building high-performing, scalable, distributed Big Data systems
  • Experience with the Big Data ecosystem and technologies / frameworks such as Hadoop, MapReduce, Spark, Hive, Storm, Flink, Kafka, HBase, OpenTSDB, Couchbase, Vertica, etc.
  • Experience with and enthusiasm for distributed data processing at scale, and eagerness to learn new things
  • Exposure to the complete software development lifecycle, from inception through production and monitoring
  • Good understanding of Agile and CI/CD methodologies
  • Strong leadership skills, ability to mentor and develop people on the team
  • Excellent communication and collaboration skills

What We'll Consider An Added Bonus:

  • Experience with configuration management tools such as Puppet, Salt, or Ansible
  • Experience with debugging and tuning JVM garbage collection and memory problems

Education:

  • BS / MS / PhD in Computer Science