Job Description

PySpark + SQL

- Proficient in leveraging Spark for distributed data processing and transformation.
- Skilled in optimizing data pipelines for efficiency and scalability.
- Experience with real-time data processing and integration.
- Familiarity with Apache Hadoop ecosystem components.
- Strong problem-solving abilities in handling large-scale datasets.
- Ability to collaborate with cross-functional teams and communicate effectively with stakeholders.

Primary Skills: PySpark, SQL
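The role centers on PySpark with Spark SQL for distributed transformation and aggregation. A minimal sketch of that kind of work (the file name sales.csv and the columns region and amount are hypothetical, chosen only for illustration):

    # Hypothetical example: aggregate sales by region with PySpark and Spark SQL.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("sales_pipeline").getOrCreate()

    # Distributed read; cast the hypothetical amount column for aggregation.
    sales = (
        spark.read.option("header", True).csv("sales.csv")
        .withColumn("amount", F.col("amount").cast("double"))
    )

    # Expose the DataFrame to Spark SQL and aggregate per region.
    sales.createOrReplaceTempView("sales")
    totals = spark.sql(
        "SELECT region, SUM(amount) AS total_amount "
        "FROM sales GROUP BY region"
    )
    totals.show()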
Employment Category:
Employment Type: Full time
Industry: IT Services & Consulting
Role Category: Not Specified
Functional Area: Not Specified
Role/Responsibilities: PySpark Developer Job in Capgemini at Noida