Locations: Bengaluru, Kolkata, Pune, Chennai, Hyderabad (Hybrid work mode)
Maximum budget: 55 LPA
Architecture & Design:
Design scalable and secure data architectures using Azure Databricks, Spark, Delta Lake, and MLflow.
Lead the migration from legacy systems to the Databricks Lakehouse.
Develop modular, reusable, and production-grade ETL/ELT pipelines.
Implementation & Optimization:
Optimize data ingestion, transformation, and consumption processes.
Tune Databricks jobs for performance and cost-efficiency.
Implement proactive monitoring and maintenance strategies.
Collaboration & Leadership:
Work with data engineers, developers, and business stakeholders to align architecture with business goals.
Mentor teams on Databricks best practices and usage.
Collaborate with cloud architects to ensure robust infrastructure on Azure.
________________________________________
Required Qualifications:
13+ years in data engineering or architecture roles.
5+ years of experience with Databricks and Azure.
Strong expertise in:
Apache Spark, Delta Lake, MLflow
SQL, Python, and/or Scala
Distributed computing and performance tuning
Experience with CI/CD, DevOps, and orchestration tools (e.g., Airflow, Azure Data Factory).
Familiarity with integrating tools such as Power BI, Tableau, Kafka, and Snowflake.

Key skills: Azure Databricks, PySpark, ETL, SQL, Python