Member of Technical Staff (Data Engineer - Spark, Hadoop)
- Analyze user needs and develop high-quality software solutions for the middle tier and information integration layer of the product, covering requirements gathering, design, modeling, development, testing, deployment, and documentation.
- Take ownership of the design and development of enterprise-scale data pipelines within a modern data management framework, collaborating with other stakeholders.
- Develop a deep understanding of the various platform modules, including the relevant business domain, and demonstrate end-to-end scenarios/use cases for these modules.
- Push the boundaries of our platform in technology architecture, ease of developing features/products, and extensibility.
- Develop new technology solutions to integrate existing/new data assets or solve business problems in our products in a scalable manner.
- Collaborate with the team to design development standards and methodologies.
- Ensure the engineering process is followed for each release, supported by epic/story grooming, estimation, design specs, unit/integration tests, code reviews, etc.
- Work with management and technical support to swiftly address high-priority issues and release fixes.
- Build team strength by knowledge sharing and providing challenging opportunities to improve/extend skills.
Requirements:
- 3+ years of relevant software development experience.
- 2+ years of hands-on experience with developing data warehouse solutions and data products.
- Strong hands-on experience with Apache Spark programming and other big data technologies in the Hadoop ecosystem, such as Presto and Hive.
- Good understanding of distributed data processing concepts such as batch/incremental processing, data partitioning, bucketing, distributed joins and aggregations, MapReduce, and file formats.
- Strong experience with programming languages such as Java and Python.
- Familiarity with AWS services.
Nice to have:
- Experience with Agile methodologies.
- Experience working with Kafka, REST APIs, streaming APIs, or other data ingress techniques, such as upload/download via SFTP or a browser, web crawlers, etc.
- Familiarity with other cloud vendor services, such as Azure or GCS.
- BE/BTech or ME/MTech degree.
- Good understanding of enterprise software product development and the SDLC.
- A quick learner and self-motivator with the ability to work in a team environment (offshore and onshore).
- Ability to work on aggressive schedules.
- Strong problem-solving acumen.
About Model N
Model N enables life sciences and high-tech companies to drive growth and market share while minimizing revenue leakage throughout the revenue lifecycle. With deep industry expertise and solutions purpose-built for these industries, Model N delivers comprehensive visibility, insight, and control over the complexities of commercial operations and compliance. Our integrated cloud solution is proven to automate pricing, incentive, and contract decisions to scale business profitably and grow revenue. Model N is trusted across more than 120 countries by the world’s leading pharmaceutical, medical technology, semiconductor, and high-tech companies, including Johnson & Johnson, AstraZeneca, Stryker, Seagate Technology, Broadcom, and Microchip Technology. For more information, visit www.modeln.com.
Feel free to submit a general application or sign up for Job Alerts to stay informed about future opportunities. We’re constantly growing and may have something for you later on. Check out our career site to learn more about Model N or view other jobs: https://www.modeln.com/company/careers/