Design, develop, test, deploy, and maintain large-scale data pipelines using AWS services such as S3, Lambda, and Step Functions.
Collaborate with cross-functional teams to gather requirements and design solutions that meet business needs.
Develop complex data processing workflows using PySpark on the AWS cloud platform.
Troubleshoot issues related to data pipeline failures and optimize performance for improved efficiency.
Job Requirements:
5-12 years of experience in Data Engineering with expertise in AWS technologies (S3, Lambda).
Strong proficiency in the Python programming language, with hands-on experience using PySpark.
Experience designing and developing scalable data pipelines using AWS services like Step Functions.
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Data Science & Analytics
Role Category: Data Science & Analytics - Other
Role: Data Science & Analytics - Other
Employment Type: Full time