Hadoop Developer required at Mohali/ Bangalore/ Pune/ Gurugram

Skill Set: Impala, Hadoop, Python
Skills to Evaluate: Hadoop, Impala, SQL, Python
Experience: 3 to 6 Years

Job Description:
We seek a strong Data Engineer to develop a semantic model in our data lake, centralizing transformation workflows currently managed in Qlik. The ideal candidate will have expertise in data modeling, ETL pipeline development, and performance optimization, enabling seamless data consumption for analytics and reporting.

Key Responsibilities:
- Translate Qlik-specific transformation logic into Hadoop/Impala-based processing (see the illustrative sketch below).
- Develop modular, reusable transformation layers to support scalability and flexibility.
- Optimize the semantic layer for high performance and seamless integration with dashboarding tools.
- Design and implement ETL pipelines using Python to streamline data ingestion, transformation, and storage.
- Collaborate with data analysts, BI teams, and business stakeholders to align the semantic model with reporting requirements.
- Monitor, troubleshoot, and enhance data processing workflows for reliability and efficiency.

Required Skills & Qualifications:
- Strong experience with Hadoop, Impala, and distributed data processing frameworks.
- Proficiency in Python for ETL pipeline development and automation.
- Proficiency in SQL and performance tuning for large-scale datasets.
- Knowledge of data modeling principles and best practices for semantic layers.
- Familiarity with Qlik transformation logic and the ability to translate it into scalable processing.
- Knowledge of big data performance tuning and optimization strategies.
- Strong problem-solving skills and ability to work in a fast-paced environment.

Interested candidates may forward their updated resume to hidden_email.
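For illustration only, here is a minimal sketch of the kind of transformation work described above: a Qlik-style aggregation re-expressed as an Impala INSERT ... SELECT, driven from Python via the impyla client. The host, port, database, table, and column names are hypothetical placeholders, not part of this posting.

# Illustrative sketch: one modular transformation step executed against Impala.
# Connection details and table/column names below are hypothetical.
from impala.dbapi import connect

TRANSFORM_SQL = """
INSERT OVERWRITE TABLE semantic.sales_summary
SELECT
    region,
    product_category,
    SUM(net_amount)          AS total_sales,
    COUNT(DISTINCT order_id) AS order_count
FROM staging.sales_raw
GROUP BY region, product_category
"""

def run_transform(host: str = "impala-coordinator.example.com", port: int = 21050) -> None:
    """Run a single semantic-layer transformation step in the cluster."""
    conn = connect(host=host, port=port)
    try:
        cur = conn.cursor()
        cur.execute(TRANSFORM_SQL)  # aggregation runs inside Impala, not in Python
        cur.execute("COMPUTE STATS semantic.sales_summary")  # keep stats fresh for BI queries
    finally:
        conn.close()

if __name__ == "__main__":
    run_transform()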
Employment Category:
Employment Type: Full time
Industry: IT Services & Consulting
Role Category: Not Specified
Functional Area: Not Specified
Role/Responsibilities: Hadoop Developer