
AWS Data - T3 - Hyd Professional @ Virtusa



Job Description


Job Summary:
We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS Glue, Python/PySpark, and Athena, along with data warehousing expertise in Amazon Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment.

Responsibilities:

  • Design and implement ETL workflows using AWS Glue, Python, and PySpark.
  • Develop and optimize queries using Amazon Athena and Redshift.
  • Build scalable data pipelines to ingest, transform, and load data from various sources.
  • Ensure data quality, integrity, and security across AWS services.
  • Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions.
  • Monitor and troubleshoot ETL jobs and cloud infrastructure performance.
  • Automate data workflows and integrate with CI/CD pipelines.

Required Skills & Qualifications:

  • Hands-on experience with AWS Glue, Athena, and Redshift.
  • Strong programming skills in Python and PySpark.
  • Experience with ETL design, implementation, and optimization.
  • Familiarity with S3, Lambda, CloudWatch, and other AWS services.
  • Understanding of data warehousing concepts and performance tuning in Redshift.
  • Experience with schema design, partitioning, and query optimization in Athena.
  • Proficiency in version control (Git) and agile development practices.
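To illustrate the partitioning and query-optimization skill mentioned above: Athena prunes partitions when data in S3 is laid out with Hive-style `key=value` path segments, so writing objects under such keys is a common ETL habit. The sketch below is a minimal, hypothetical example (the `raw` prefix, `orders` table, and file name are illustrative, not from the posting) of building a partitioned S3 key in plain Python.

```python
from datetime import date

def partition_key(prefix: str, table: str, d: date, filename: str) -> str:
    """Build a Hive-style partitioned S3 key (year=/month=/day=) so that
    Athena can prune partitions and scan only the relevant objects."""
    return (
        f"{prefix}/{table}/"
        f"year={d.year}/month={d.month:02d}/day={d.day:02d}/{filename}"
    )

print(partition_key("raw", "orders", date(2024, 7, 3), "part-0000.parquet"))
# → raw/orders/year=2024/month=07/day=03/part-0000.parquet
```

With this layout, a query such as `... WHERE year = 2024 AND month = 7` reads only the matching prefixes instead of the whole table, which is the main lever for both cost and latency in Athena.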

Job Classification

Industry: Banking
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Engineer
Employment Type: Full time

Contact Details:

Company: Virtusa
Location(s): Hyderabad



Keyskills: Python, PySpark, AWS Glue, Athena, Amazon Redshift, AWS Lambda, Amazon CloudWatch, S3, ETL, data warehousing concepts, performance tuning, schema design, version control (Git), continuous integration, Agile, cloud infrastructure

 This job posting appears to be aged and may have expired.

₹ Not Disclosed

Similar positions

Data Engineer (Azure Purview)

  • Capgemini
  • 6 - 11 years
  • Hyderabad
  • 3 days ago
₹ Not Disclosed

Java AWS Developer

  • Cognizant
  • 6 - 10 years
  • Hyderabad
  • 4 days ago
₹ Not Disclosed

OpenStack SE Ops Professional

  • Capgemini
  • 3 - 7 years
  • Noida, Gurugram
  • 4 days ago
₹ Not Disclosed

MREF (TRIRIGA) Professional

  • Capgemini
  • 3 - 6 years
  • Bengaluru
  • 4 days ago
₹ Not Disclosed
