Data Engineer - Databricks
Description
Overview
Software Mind is seeking qualified candidates located in LATAM to fill the role of Databricks Data Engineer.
In addition to a competitive salary rate and a positive work environment committed to delivering high-quality technology solutions, we also offer:
- Flexible schedules and authentic work-life balance
- Opportunities for continuing education
- Company-sponsored social activities in each country
- Birthday celebrations
- Payment in US Dollars
About the role:
Our client is a product strategy, design, and development agency that works side by side with ambitious companies, brands, and founders, taking a holistic approach to strategy, design, and engineering.
We are seeking a highly skilled and certified Databricks Data Engineer to join a high-impact team tasked with rearchitecting a global-scale data platform for a leading consulting firm. This is a unique opportunity to work at the intersection of cloud engineering, data infrastructure, and platform modernization within a complex, enterprise-grade environment.
Some of the main responsibilities for this role:
- Design, develop, and optimize scalable data pipelines in Databricks, supporting ingestion, processing, and transformation of large volumes of structured and unstructured data.
- Implement and refine medallion architecture (Bronze, Silver, Gold layers) for a highly governed and performant data lakehouse environment.
- Collaborate with cloud engineers, architects, and business stakeholders to modernize and enhance the platform’s data capabilities.
- Optimize performance and cost-efficiency across compute clusters, data models, and storage layers.
- Apply enterprise-grade best practices around data governance, lineage, observability, and security.
- Contribute to architectural decisions for data lakehouse design and support migration of legacy systems to the new platform.
- Build reusable frameworks and tooling to support data operations, data quality, and monitoring at scale.
Job Skills/Requirements
- 90%+ English proficiency, written and spoken (at least B2 level), with excellent communication skills
- 3–4+ years of experience as a Data Engineer, with 2+ years of hands-on Databricks experience in an enterprise environment.
- Proven experience implementing and optimizing the medallion architecture (Bronze, Silver, Gold) at scale.
- Strong expertise in Apache Spark, Delta Lake, and PySpark.
- Deep knowledge of data modeling, ETL/ELT pipeline development, and performance tuning.
- Experience working with large-scale data lakes and lakehouse architectures in a cloud-native environment, preferably on AWS.
- Proficient in SQL and scripting languages (e.g., Python).
- Experience with data governance, cataloging, and observability tools (e.g., Unity Catalog, Monte Carlo, Great Expectations) is a plus.
- Comfortable working in Agile environments with strong collaboration and communication skills.
Preferred Qualifications
- Prior experience delivering platform modernization projects in large, matrixed organizations.
- Experience integrating Databricks with enterprise cloud services (e.g., S3, Glue, Redshift, Snowflake, Kafka).
- Familiarity with CI/CD, DevOps for data engineering, and Infrastructure as Code practices.
Apply today to learn more about this exciting opportunity. We are actively interviewing now for this position.