
AWS Data Engineer @ Zensar


Job Description

Key Responsibilities:


  • Design & Develop Data Pipelines: Build and optimize scalable, reliable, and automated ETL/ELT pipelines using AWS services (e.g., AWS Glue, AWS Lambda, Redshift, S3) and Databricks.
  • Cloud Data Architecture: Design, implement, and help maintain data infrastructure in AWS, ensuring high availability, security, and scalability. Work with lakehouses, data lakes, data warehouses, and distributed computing.
  • DBT Core Implementation: Lead the implementation of DBT Core to automate data transformations, develop reusable models, and maintain efficient ELT processes.
  • Data Modelling: Build efficient data models to support required analytics/reporting.
  • Optimize Data Workflows: Monitor, troubleshoot, and optimize data pipelines for performance and cost-efficiency in cloud environments. Utilize Databricks for processing large-scale data sets and streamlining data workflows.
  • Data Quality & Monitoring: Ensure high-quality data by implementing data validation and monitoring systems. Troubleshoot data issues and create solutions to ensure data reliability.
  • Automation & CI/CD: Implement CI/CD practices for data pipeline deployment and maintain automation for monitoring and scaling data infrastructure in AWS and Databricks.
  • Documentation & Best Practices: Maintain comprehensive documentation for data pipelines, architectures, and best practices in AWS, Databricks, and DBT Core. Ensure knowledge sharing across teams.

Skills & Qualifications

Required:

  • Bachelor's or master's degree in Computer Science, Engineering, or a related field.
  • 4+ years of experience as a Data Engineer or in a similar role.
  • Extensive hands-on experience with AWS services (S3, Redshift, Glue, Lambda, Kinesis, etc.) for building scalable and reliable data solutions.
  • Advanced expertise in Databricks, including the creation and optimization of data pipelines, notebooks, and integration with other AWS services.
  • Strong experience with DBT Core for data transformation and modelling, including writing, testing, and maintaining DBT models.
  • Proficiency in SQL and experience with designing and optimizing complex queries for large datasets.
  • Strong programming skills in Python/PySpark, with the ability to develop custom data processing logic and automate tasks.
  • Experience with Data Warehousing and knowledge of concepts related to OLAP and OLTP systems.
  • Expertise in building and managing ETL/ELT pipelines, automating data workflows, and performing data validation.
  • Familiarity with CI/CD concepts, version control (e.g., Git), and deployment automation.
  • Experience working in an Agile project environment.

Preferred:

  • Experience with Apache Spark and distributed data processing in Databricks.
  • Familiarity with streaming data solutions (e.g., AWS Kinesis, Apache Kafka).

Job Classification

Industry: IT Services & Consulting
Functional Area / Department: Data Science & Analytics
Role Category: Data Science & Machine Learning
Role: Data Engineer
Employment Type: Full time

Contact Details:

Company: Zensar
Location(s): Hyderabad



Keyskills: S3, Lambda, Glue, AWS, Redshift, Kinesis, Data Engineer, PySpark, Apache


₹ 15-25 Lacs P.A.

Similar positions

Data Engineer II

  • Amazon
  • 3 - 8 years
  • Hyderabad
  • 3 days ago
₹ Not Disclosed

Generative AI Engineer

  • Einfochips
  • 3 - 8 years
  • Indore
  • 9 days ago
₹ Not Disclosed

Walk In - AI/ML / GenAI / MLOps / Data Science - Hyderabad Gar

  • Capgemini
  • 4 - 9 years
  • Hyderabad
  • 9 days ago
₹ Not Disclosed

AI Engineer L1

  • Wipro
  • 1 - 5 years
  • Hyderabad
  • 10 days ago
₹ Not Disclosed

Company Details: Zensar Technologies