Big Data Developer (Python, PySpark, Snowflake), Meal-kit

Big Data Development: Kraków, Poland; Wrocław, Poland; Kyiv, Ukraine; Kharkiv, Ukraine; Lviv, Ukraine; Belgrade, Serbia; St. Petersburg, Russia; Saratov, Russia


1. Project description
This squad will work as a unit and follow a project-based approach. The goal of the Data Solutions Tribe is to accelerate the onboarding of domains into our new Data Platform/Data Science Platform.
As a Staff Data Engineer, you will be responsible for spreading best data engineering practices within the organization and in the team. You will support us in designing, building, and maintaining our data infrastructure, with a special focus on flexibility and scalability. You will have the opportunity to work on challenging data-related problems and to have a great impact on the organization.
2. Client description
HelloFresh is on a mission to change the way people eat, forever!
Since our 2011 founding in Europe’s vibrant tech hub, Berlin, we’ve become the world's leading meal kit provider, delivering to over 4.2 million households worldwide in 14 countries across 3 continents.
Our Engineering, Data, Product and Security teams are located in Berlin and New York, among other locations, and are critical to what we do. From procurement tools to conversion rate optimization, live pricing tools, payment services and add-on upselling features, we work on a wide variety of challenging problems. The result is a high output where we constantly build and release features and engines that make our business thrive, allowing us to deliver real financial impact.
Our more than 7,000 employees from over 70 nationalities are the heart and soul of our diverse, fast-paced and dynamic environment, where innovation and smart, fast action are encouraged. We employ individuals based on their ability to perform a job rather than on the basis of their race, national origin, color, caste, social origin or position, gender, gender expression, sexual orientation, religion, age, disability, political opinion, marital status or any other characteristic.
We will encourage you to make an immediate impact in your area of work as well as empower you to grow your career!
3. Details on tech stack
Python, AWS, PySpark, SQL, Airflow
4. Minimum requirements for the candidate
Python, PySpark, SQL
5. Nice-to-have requirements for the candidate
AWS, Airflow
6. Is there a client interview?
7. Current team size, locations, and whether the team is going to grow?
5 Engineers
  • Staff Data Engineer x1
  • Lead Data Architect x1
  • Senior Data Engineer x1
  • Data Engineer x2
8. What tasks will the candidate work on?
  • Supporting us with moving to a data mesh architecture
  • Defining standard working methods and processes to be respected by all chapter members, e.g., best practices in developing data pipelines and data architectures, testing and deploying changes, and documentation standards
  • Enabling quick and accurate decision-making within the teams by facilitating discussions
  • Creating and coordinating communities
  • Coaching and mentoring data engineers to support their growth
  • Setting and spreading best practices for Data Engineering across the organization
  • Providing solutions for complex business problems, enabling insights that can empower better decision-making
  • Collaborating with our Senior Director of Data on strategic projects
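One of the practices listed above, keeping data-pipeline logic testable, is often achieved by separating pure validation/transformation functions from Spark I/O so they can be unit-tested without a cluster. The sketch below illustrates that idea only; the field names and function names are hypothetical, not taken from the project.

```python
def missing_required_fields(record, required=("order_id", "quantity", "unit_price")):
    """Return the set of required fields that are absent or None in one record.

    Keeping validation as a pure function (no Spark dependency) is a common
    way to make pipeline logic unit-testable; inside a PySpark job the same
    function could be applied over raw records before they enter a DataFrame.
    The field names here are illustrative only.
    """
    return {f for f in required if f not in record or record[f] is None}


def split_valid_invalid(records, required=("order_id", "quantity", "unit_price")):
    """Partition records into (valid, invalid) lists using the check above,
    so bad rows can be routed to a quarantine table instead of failing the run."""
    valid, invalid = [], []
    for r in records:
        (invalid if missing_required_fields(r, required) else valid).append(r)
    return valid, invalid
```

Because the functions take and return plain Python data, they can be exercised in a standard unit test and then reused unchanged inside a PySpark or Airflow task.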
9. Selling points for this demand (what is good about the project / client / team / etc.)
German client (large company);
Very comfortable working hours (without late evening calls);
Working directly with the customer's internal team;
Possibility to switch to another project within an account;
Modern tech stack;
Atypical and interesting daily tasks.
10. Who from DMs to invite to NTIs?
Mark Zuev, Roman Ivanov
11. Who is a Tech Lead on the project?