Data Engineer for financial services

Data Management, India


Description

    ONLY IMMEDIATE JOINERS WILL BE CONSIDERED. JOIN DATE MUST BE ON OR BEFORE APRIL 15th.
    Job Description:
    We are looking for a highly skilled and motivated Data Engineer to join our team. This role is critical in designing, developing, and maintaining our data infrastructure to enable efficient data processing, analysis, and reporting. Your expertise in AWS, Snowflake, SQL, and Python will be essential in building and optimizing data pipelines, ensuring the scalability, reliability, and performance of our data ecosystem.

    Key Responsibilities:
    • Collaborate with cross-functional teams to understand data requirements and develop effective data solutions.
    • Design and implement scalable, high-performance data pipelines using Snowflake, SQL, and Python for ETL/ELT processes.
    • Optimize data workflows for efficiency, reliability, and maintainability.
    • Develop and maintain data models to ensure data accuracy, consistency, and integrity.
    • Work with large structured and unstructured datasets to derive actionable insights.
    • Identify and resolve data quality issues using data profiling, validation, and cleansing techniques.
    • Monitor and troubleshoot data pipelines to ensure high availability and accuracy.
    • Continuously improve data automation processes to streamline operations.

    Qualifications & Requirements:
    • Bachelor’s degree in Computer Science, Information Technology, or a related field (Master’s preferred).
    • Strong experience with Snowflake for data warehousing and analytics.
    • Proficiency in SQL for querying and manipulating data.
    • Extensive experience in Python for building data pipelines and automation.
    • Solid understanding of data modeling concepts and database design principles.
    • Familiarity with ETL/ELT techniques, data integration, and transformation processes.
    • Experience with version control systems (e.g., Git) and CI/CD pipelines is a plus.
    • Strong problem-solving skills and ability to work in a fast-paced, collaborative environment.
    • Excellent communication skills to engage with both technical and non-technical stakeholders.
    • Basic financial-services domain experience is a plus, particularly in understanding industry-specific data assets and developing testing plans accordingly.

    Preferred Skills:
    • Experience with AWS services such as EC2, S3, CloudWatch, Lambda, etc.
    • Proficiency in Linux scripting and automation using Bash & Python.
    • Hands-on experience with AWS CLI.
    • Experience with CI/CD pipelines for data infrastructure deployment.
    • Expertise in Snowflake, SQL, and Flyway for schema versioning and migration.
    • Familiarity with data quality tools (e.g., Great Expectations).
    • Hands-on experience with ETL tools (e.g., Boomi).
    • Experience with workflow orchestration tools such as Apache Airflow for batch processing.

    About RiskSpan:
    RiskSpan is a leading source of analytics, modeling, data, and risk management solutions for the consumer and institutional finance industries. We solve business problems for clients such as banks, mortgage-backed and asset-backed securities issuers, equity and fixed-income portfolio managers, servicers, and regulators that require our expertise in the market risk, credit risk, operational risk, and information technology domains.
          
    RiskSpan is proud to be an Equal Opportunity/Affirmative Action employer committed to hiring a diverse workforce and sustaining an inclusive culture. Qualified candidates must be legally authorized to be employed in the United States on an unrestricted basis.