Key Responsibilities:
Design, implement, and maintain data science pipelines from data ingestion to model deployment.
Collaborate with data scientists to operationalize ML models and algorithms in production environments.
Develop robust APIs and services for ML model inference and integration.
Build and optimize large-scale data processing systems using Spark, Pandas, or similar tools.
Ensure data quality and pipeline reliability through rigorous testing, validation, and monitoring.
Work with cloud infrastructure (AWS) for scalable ML deployment.
Manage model versioning, feature engineering workflows, and experiment tracking.
Optimize performance of models and pipelines for latency, cost, and throughput.
Required Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
5+ years of experience in a data science, ML engineering, or software engineering role.
Proficiency in Python (preferred) and SQL.
Experience with data science libraries such as scikit-learn, XGBoost, TensorFlow, or PyTorch.
Familiarity with ML deployment tools such as MLflow, SageMaker, or Vertex AI.
Solid understanding of data structures, algorithms, and software engineering best practices.
Preferred Qualifications:
Experience with containerization and orchestration (Docker, Kubernetes).
Experience working in Agile or cross-functional teams.
Familiarity with streaming data platforms (Kafka, Spark Streaming, Flink).

Key Skills: Machine Learning, Python, TensorFlow, PyTorch, Data Scientist, Artificial Intelligence, Natural Language Processing, Deep Learning, ML