Azure DevOps & Infrastructure:
Design, implement, and manage DevOps pipelines using Azure DevOps to automate infrastructure provisioning and application deployments.
Implement Infrastructure as Code (IaC) using tools such as Terraform, Azure Resource Manager (ARM) templates, and the Azure CLI.
Manage Azure resources (VMs, Storage, Networking, etc.) and ensure best practices in cloud security and cost optimization.
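As a rough illustration of the IaC work described above, a minimal Terraform sketch is shown below; the resource names, region, and storage SKU are invented placeholders, not details from this role.

```hcl
# Illustrative Terraform sketch -- names, region, and SKU are assumptions.
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "rg-devops-example"
  location = "westeurope"
}

resource "azurerm_storage_account" "example" {
  name                     = "stdevopsexample"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```

A `terraform plan` / `terraform apply` cycle over files like this is what makes provisioning repeatable and reviewable rather than click-driven.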
CI/CD Pipeline Development:
Build and maintain CI/CD pipelines for continuous integration, testing, and deployment of applications and services.
Integrate version control systems (e.g., Git, GitHub) with build and deployment processes to automate code releases.
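The build-and-deploy flow these bullets describe is typically expressed as a YAML pipeline checked into the repository. A skeletal `azure-pipelines.yml` sketch follows; the trigger branch, agent image, and step commands are placeholders, not a pipeline from this posting.

```yaml
# Skeletal Azure Pipelines definition -- all values are illustrative.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: pip install -r requirements.txt
            displayName: Install dependencies
          - script: pytest
            displayName: Run tests
  - stage: Deploy
    dependsOn: Build
    jobs:
      - job: DeployApp
        steps:
          - script: echo "deployment command goes here"
            displayName: Deploy
```

Because the pipeline lives alongside the code, every release is triggered, tested, and versioned through the same Git history it builds from.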
Data Processing & Databricks:
Design and implement scalable data processing pipelines using Databricks and Apache Spark for big data analytics.
Develop and optimize PySpark scripts for data processing, transformation, and analysis within Databricks environments.
Collaborate with data scientists and analysts to support machine learning model deployment and data processing workflows.
ETL and Data Integration:
Design and implement ETL pipelines for ingesting, transforming, and loading data from various sources into data lakes or data warehouses.
Use Python, SQL, and Databricks to manage large datasets and complex transformations.
Ensure data quality, integrity, and consistency through automated data validation and transformation checks.
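The automated validation checks mentioned above can be sketched in plain Python; the rule set and record shape here are hypothetical, chosen only to show the quarantine pattern.

```python
from typing import Any

# Hypothetical rules: these required fields and a non-negative amount.
REQUIRED_FIELDS = ("id", "event_date", "amount")


def validate_record(record: dict[str, Any]) -> list[str]:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    for field in REQUIRED_FIELDS:
        if record.get(field) is None:
            problems.append(f"missing field: {field}")
    amount = record.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:
        problems.append("amount must be non-negative")
    return problems


def split_valid_invalid(records):
    """Partition records so bad rows are quarantined instead of loaded."""
    valid, invalid = [], []
    for rec in records:
        (invalid if validate_record(rec) else valid).append(rec)
    return valid, invalid
```

Routing failures to a quarantine set, rather than dropping them silently, preserves the integrity and auditability the bullet above calls for.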
Azure Data Factory (ADF):
Create, monitor, and optimize ADF pipelines for orchestrating data movement between on-premises and cloud-based systems.
Integrate ADF with various data sources, including Azure Blob Storage, SQL Databases, Azure Data Lake, and third-party systems.
Troubleshoot and resolve data pipeline issues, ensuring smooth data flow and minimal downtime.
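For orientation, an ADF pipeline is defined as JSON under the hood; a pared-down Copy activity fragment looks roughly like the sketch below, where the pipeline, activity, and dataset names are placeholders.

```json
{
  "name": "CopyBlobToSql",
  "properties": {
    "activities": [
      {
        "name": "CopyRawEvents",
        "type": "Copy",
        "inputs": [ { "referenceName": "BlobRawEvents", "type": "DatasetReference" } ],
        "outputs": [ { "referenceName": "SqlStagingEvents", "type": "DatasetReference" } ],
        "typeProperties": {
          "source": { "type": "DelimitedTextSource" },
          "sink": { "type": "AzureSqlSink" }
        }
      }
    ]
  }
}
```

The same source/sink pattern extends to Azure Data Lake and third-party connectors, and the activity-level run history is the first place to look when troubleshooting a stalled pipeline.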
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: DevOps
Role: DevOps Engineer
Employment Type: Full time