Senior Data Scientist, Platform Data Products (Remote)
N3TWORK is a Full Stack Games company – one that creates its own games and also builds technology and services for other game makers. Our flagship title, Legendary, has generated over $250m in lifetime revenue and sets the standard for Live Events in mobile free-to-play games. We’ve also launched Tetris worldwide, along with Funko Pop! Blitz. These games are built on top of the N3TWORK Scale Platform (NSP), which enables us to invest hundreds of millions of mobile marketing dollars to achieve some of the highest ROI and LTV in the industry. NSP is now available to all developers to help grow their games and their businesses.
N3TWORK is headquartered in San Francisco but is a fully distributed company, so your location in the world matters far less to us than your drive and experience. We’re backed by some of the biggest names in venture & technology, and we’ve assembled a great team of experienced and energetic N3TWORKers. Are you the next incredible addition to our company?
As a Senior Data Scientist working on Platform Data Products, you will partner with marketing and platform leads, project managers, data engineers, and data analysts to drive inflection-point outcomes that help all products in our portfolio scale more efficiently. You will be accountable for data products at the heart of our business, including lifetime value and retention modeling, generating bid and budget allocation recommendations for marketing campaigns, and monitoring and alerting for creative fatigue in ad campaigns. You will also tackle brand new challenges such as building and maintaining conversion value management models in Apple’s upcoming AppTrackingTransparency framework. This highly impact-oriented role will give you the opportunity to autonomously drive measurable outcomes for the business on a regular basis.
The Ideal Candidate
- You are a competent storyteller, effective and uninhibited in communicating actionable insights to stakeholders. You are willing to engage in spirited dialogue about your work, where you have to defend your methodology and results.
- You are a competent statistician. You can identify the right approach to the problem at hand, implement and train models, and evaluate them. You don’t have to invent new algorithms, but you can assimilate and implement state-of-the-art machine learning methods from academic publications or open source tools with adjacent use cases rather than relying 100% on pre-packaged solutions.
- You are a competent programmer, preferably in Python or R, fluent in SQL, and familiar with distributed technologies (e.g., Spark). You are comfortable working in an AWS ecosystem (EC2, S3, SageMaker, Lambda, etc.) to deploy models to production.
- You are a competent scientist. Your specific background doesn’t matter nearly as much as the fact that you understand the scientific process and think deeply about causality. You generate and test causal hypotheses rigorously and robustly. You don’t jump to conclusions or confuse correlation with causation. You can provide an unbiased estimate of the effect of the models you build on KPIs.
Roles and Responsibilities:
- You will work closely with an interdisciplinary team including user acquisition experts, data analysts, media buyers, project managers, and leadership to define the vision and roadmap for data products.
- You will implement highly accurate and useful machine learning models and deploy them in production with support from data engineering.
- You will maintain production models to continuously improve their accuracy, interpretability, and usage by stakeholders.
- You will create data visualizations and dashboards for stakeholders to manage inputs to production models and to observe their results.
- You will regularly communicate to key stakeholders the KPI impact of new data products and of improvements to existing ones.