NLP Engineer

Delivery – Multiple, United States


Description

We are seeking an NLP Engineer with end-to-end expertise in designing, developing, deploying, and maintaining machine learning solutions. This role requires a hands-on engineer who can work across data engineering, model development, MLOps, and API integration to build scalable AI-driven applications.
 
Location – Remote
 
Key Responsibilities:
• Data Engineering & Processing:
• Design and implement scalable ETL/ELT pipelines for structured and unstructured data.
• Clean, normalize, and preprocess data for ML models.
• Work with Graph Databases (Neo4j, AWS Neptune) and Vector Databases (Weaviate, Pinecone, FAISS).
• Model Development & Fine-Tuning:
• Train and fine-tune LLMs and deep learning models using Hugging Face, PyTorch, and TensorFlow.
• Implement retrieval-augmented generation (RAG) for knowledge-based AI.
• Optimize model performance for efficiency and scalability.
• MLOps & Model Deployment:
• Develop CI/CD pipelines for ML model training and deployment using MLflow, Kubeflow, SageMaker.
• Deploy models as APIs using FastAPI, Flask, or gRPC (a minimal serving sketch follows this list).
• Automate model monitoring, drift detection, and retraining.
• Application & API Development:
• Build APIs and microservices for AI applications.
• Integrate ML models with LangChain for AI-powered applications.
• Implement scalable solutions in AWS, GCP, or Azure.
• Performance Optimization & Security:
• Optimize model inference speed and reduce cloud costs.
• Ensure data security and compliance (HIPAA, GDPR where applicable).
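
To illustrate the "deploy models as APIs" responsibility above, here is a minimal serving sketch that wraps a Hugging Face pipeline behind a FastAPI endpoint. It is an illustration only, not a prescribed stack: the default sentiment-analysis model and the /predict route are assumptions chosen for brevity.

    # Minimal model-serving sketch: expose a Hugging Face pipeline via FastAPI.
    # The default sentiment-analysis model and the /predict route are illustrative only.
    from fastapi import FastAPI
    from pydantic import BaseModel
    from transformers import pipeline

    app = FastAPI()
    classifier = pipeline("sentiment-analysis")  # downloads a default model on first use

    class TextIn(BaseModel):
        text: str

    @app.post("/predict")
    def predict(payload: TextIn):
        # Run inference and return the top label and score for the submitted text.
        return classifier(payload.text)[0]

In practice an endpoint like this would be containerized (Docker) and sit behind the CI/CD, monitoring, and drift-detection tooling described above; the sketch only shows the API boundary.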

Required Skills:
• Machine Learning & AI:
• Strong background in machine learning, deep learning, LLMs, and NLP.
• Experience with Transformer models (BERT, GPT, T5, LLaMA, etc.).
• Programming & Development:
• Proficiency in Python (PyTorch, TensorFlow, scikit-learn).
• Strong experience in APIs and microservices (FastAPI, Flask, gRPC).
• Data & Infrastructure:
• Experience with Graph Databases (Neo4j, AWS Neptune).
• Knowledge of Vector Databases (Weaviate, Pinecone, FAISS).
• Familiarity with Snowflake, PostgreSQL, NoSQL databases.
• MLOps & Cloud:
• Experience with MLflow, Kubeflow, SageMaker for model lifecycle management.
• Proficiency in Docker, Kubernetes, Terraform.
• Strong understanding of AWS/GCP/Azure services.
• AI Application Development:
• Experience integrating AI models with LangChain for intelligent applications.
• Understanding of retrieval-augmented generation (RAG) workflows (see the retrieval sketch below).
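
For the RAG and vector-database items above, the following minimal retrieval sketch shows the core flow: embed documents, index them, retrieve the closest passages for a query, and assemble a grounded prompt. It uses sentence-transformers and FAISS directly for brevity; LangChain provides higher-level wrappers around the same steps. The model name and toy corpus are illustrative assumptions.

    # Minimal RAG-style retrieval sketch: embed documents, index them in FAISS,
    # retrieve the closest passages for a query, and assemble a grounded prompt.
    import faiss
    import numpy as np
    from sentence_transformers import SentenceTransformer

    docs = [
        "Neo4j is a graph database.",
        "FAISS performs fast similarity search over dense vectors.",
        "MLflow tracks experiments and manages model versions.",
    ]

    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vectors = encoder.encode(docs, normalize_embeddings=True)

    index = faiss.IndexFlatIP(doc_vectors.shape[1])  # inner product = cosine on normalized vectors
    index.add(np.asarray(doc_vectors, dtype="float32"))

    query = "Which tool manages model versions?"
    query_vec = encoder.encode([query], normalize_embeddings=True)
    scores, ids = index.search(np.asarray(query_vec, dtype="float32"), 2)

    context = "\n".join(docs[i] for i in ids[0])
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    # `prompt` would then be sent to an LLM (e.g. via LangChain or a cloud API client).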