Job Description: PySpark Developer (AWS & SQL)

We're looking for a PySpark Developer with strong experience in AWS and SQL to build and optimize data pipelines in a cloud environment.

Key Responsibilities:
- Develop ETL workflows using PySpark
- Build and manage data pipelines on AWS (S3, Glue, EMR, Lambda)
- Write and optimize SQL queries for data transformation and reporting
- Ensure data quality, performance, and reliability
- Collaborate with data engineers, analysts, and architects

Skills Required:
- Proficiency in PySpark and SQL
- Experience with AWS cloud services
- Strong problem-solving and debugging skills
- Familiarity with data lake and data warehouse concepts
Employment Category:
Employment Type: Full time
Industry: IT Services & Consulting
Role Category: Not Specified
Functional Area: Not Specified
Role/Responsibilities: PySpark + AWS & SQL Developers