Design, develop, test, deploy, and maintain large-scale data pipelines using AWS services such as S3, Lambda, and Step Functions.
Collaborate with cross-functional teams to gather requirements and design solutions that meet business needs.
Develop scalable and efficient data processing workflows using PySpark on the AWS cloud platform.
Troubleshoot issues related to data pipeline failures and optimize performance for improved efficiency.
Job Requirements:
4-12 years of experience in Data Engineering with expertise in AWS technologies (S3, Lambda).
Strong understanding of the Python programming language and its applications in data engineering.
Experience working with big data technologies such as Hadoop, Hive, and PySpark on the AWS cloud platform.
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Data Science & Analytics
Role Category: Data Science & Analytics - Other
Role: Data Science & Analytics - Other
Employment Type: Full time