Job Description:
The MLOps Engineer will play a crucial role in ensuring the accuracy, performance, and stability of machine learning models used for forecasting and analytics. This role involves continuous monitoring, maintenance, and improvement of ML pipelines, Docker images, and data synchronization processes to support reliable predictions and decision-making.
An ideal candidate should have:
More than 6 years of overall experience; hands-on experience in the MLOps space is a must.
Experience designing and implementing cloud solutions and building MLOps on the cloud (AWS, Azure, or GCP).
Experience building CI/CD pipeline orchestration with GitLab CI, GitHub Actions, CircleCI, Airflow, or similar tools.
Data science model review: code refactoring and optimization, containerization, deployment, versioning, and quality monitoring.
Data science model testing, validation, and test automation.
Programming languages such as Python, Go, Ruby, or Bash; a good understanding of Linux; knowledge of frameworks such as scikit-learn, Keras, PyTorch, TensorFlow, etc.
General understanding of the Agile project delivery process.
As an MLOps Engineer, the resource will:
Accuracy Analysis:
o Perform in-depth analysis of forecasting model accuracy.
o Identify and diagnose issues causing low accuracy.
o Collaborate with data scientists to understand and rectify model shortcomings.
Performance Comparison:
o Regularly compare model performance against the previous month's results.
o Identify trends and areas for improvement in forecasting accuracy.
Pipeline and Docker Maintenance:
o Ensure that the production pipeline and Docker images are running with the latest model code.
o Perform regular checks to maintain the stability of the production environment.
Development-Production Alignment:
o Align the development environment with the production environment every two weeks to ensure consistency.
Bug Fixing:
o Investigate and fix code bugs related to model deployment and data synchronization.
Data Synchronization:
o Maintain data synchronization processes to ensure data consistency across systems.
Accuracy Comparison:
o Implement accuracy comparison between past forecasts and realized actuals to assess model performance.
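The forecast-vs-actuals comparison this role owns can be sketched in a few lines of Python. This is a minimal illustration, not part of the posting: the WAPE metric and the derived accuracy figure are assumed choices, and real pipelines would align forecasts and actuals by date and item key first.

```python
def wape(actuals, forecasts):
    """Weighted absolute percentage error: sum(|a - f|) / sum(|a|)."""
    abs_err = sum(abs(a - f) for a, f in zip(actuals, forecasts))
    total = sum(abs(a) for a in actuals)
    return abs_err / total if total else float("inf")

def compare_to_actuals(past_forecasts, realized_actuals):
    """Score last period's forecasts against what actually happened."""
    err = wape(realized_actuals, past_forecasts)
    # One common convention: report accuracy as 1 - WAPE, floored at 0.
    return {"wape": err, "accuracy": max(0.0, 1.0 - err)}

# Example: last month's forecasts vs realized actuals.
metrics = compare_to_actuals([100, 120, 90], [110, 115, 95])
```

WAPE is used here because it tolerates zero-valued actuals better than per-row MAPE; either could satisfy the duty described above.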
Affiliate Support:
o Collaborate with affiliate teams to understand and address their needs related to predictions and model outputs.
Understanding Predictions:
o Provide insights and explanations for model predictions to relevant stakeholders.
Model Performance Deep Dive:
o Conduct in-depth analysis of model performance, including feature importance and model behavior.
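One model-agnostic way to do the feature-importance part of this deep dive is permutation importance: shuffle one feature at a time and measure how much the error grows. The sketch below is an assumed illustration (the toy model, MAE metric, and data are invented for the example); in practice scikit-learn's `permutation_importance` would typically be used instead.

```python
import random

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def permutation_importance(predict, X, y, metric, n_features):
    """Importance of feature j = increase in error when column j is shuffled."""
    base = metric(y, [predict(row) for row in X])
    rng = random.Random(0)  # fixed seed so runs are reproducible
    importances = []
    for j in range(n_features):
        col = [row[j] for row in X]
        rng.shuffle(col)
        # Rebuild X with column j replaced by its shuffled values.
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        perm_score = metric(y, [predict(row) for row in X_perm])
        importances.append(perm_score - base)
    return importances

# Toy "model" that depends only on feature 0, so feature 1 should score ~0.
predict = lambda row: 2 * row[0]
X = [[1, 9], [2, 3], [3, 7], [4, 1], [5, 5], [6, 2], [7, 8], [8, 4]]
y = [2, 4, 6, 8, 10, 12, 14, 16]
imp = permutation_importance(predict, X, y, mae, n_features=2)
```

A feature whose shuffling leaves the error unchanged contributes nothing to the model's predictions, which is exactly the kind of behavioral insight this duty calls for.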
Monthly Export:
o Export results in a wide format for monthly reporting and analysis.
Eventing:
o Upload and process monthly CSV data into the database.
o Implement eventing logic to derive insights from data.
o Continuously improve eventing code logic for better results.
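The monthly CSV upload and eventing steps above can be sketched with the standard-library `csv` and `sqlite3` modules. The table schema, column names (`month`, `item`, `value`), and threshold rule are illustrative assumptions; the actual database and eventing rules would be whatever the production system defines.

```python
import csv
import sqlite3

def load_monthly_csv(conn, csv_path):
    """Upload one month's CSV rows into a staging table; return row count."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS monthly_data (month TEXT, item TEXT, value REAL)"
    )
    with open(csv_path, newline="") as fh:
        rows = [(r["month"], r["item"], float(r["value"])) for r in csv.DictReader(fh)]
    conn.executemany("INSERT INTO monthly_data VALUES (?, ?, ?)", rows)
    conn.commit()
    return len(rows)

def derive_events(conn, threshold=100.0):
    """Toy eventing rule: flag rows whose value exceeds a threshold."""
    cur = conn.execute(
        "SELECT month, item, value FROM monthly_data WHERE value > ?", (threshold,)
    )
    return [
        {"month": m, "item": i, "event": "threshold_exceeded", "value": v}
        for m, i, v in cur.fetchall()
    ]
```

Keeping the load step and the eventing rule as separate functions makes "continuously improve eventing code logic" a matter of revising `derive_events` without touching ingestion.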
Keyskills: CI/CD pipelines, MLOps, AWS, machine learning operations, Docker, machine learning