Job Description
Dear Tech Aspirants,
Greetings of the day!
We are conducting a walk-in drive for AWS Data Engineers (3-6 years) on:
Walk-In Date: Saturday, 17-May-2025
Time: 9:30 AM - 3:30 PM
Kindly fill in this form to confirm your presence: https://forms.office.com/r/CEcuYGsFPS
Walk-In Venue:
Ground Floor, Sri Sai Towers, Plot No. 91A & 91B, Vittal Rao Nagar, Madhapur, Hyderabad 500081.
Google Maps Location: https://maps.app.goo.gl/dKkAm4EgF1q1CKqc8
*Carry your updated CV*
Job Title: AWS Data Engineer (SQL Mandatory)
Location: Hyderabad, India
Experience: 3 to 6 years
We are seeking a skilled AWS Data Engineer with 3 to 6 years of experience to join our team. The ideal candidate will be responsible for implementing and maintaining AWS data services, processing and transforming raw data, and optimizing data workflows using AWS Glue to ensure seamless integration with business processes. This role requires a deep understanding of AWS cloud technologies, Apache Spark, and data engineering best practices. You will work closely with data scientists, analysts, and business stakeholders to ensure data is accessible, scalable, and efficient.
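To give candidates a concrete feel for the day-to-day work, here is a minimal sketch of the kind of AWS Glue PySpark job this role involves. The bucket names and paths are hypothetical, and it assumes a standard Glue job environment where the awsglue libraries are provided:

import sys
from awsglue.transforms import DropNullFields
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job bootstrap
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw JSON landed in S3 (hypothetical bucket and prefix)
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-data-lake/raw/orders/"]},
    format="json",
)

# Light cleanup, then write query-friendly Parquet to the curated zone
cleaned = DropNullFields.apply(frame=raw)
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-data-lake/curated/orders/"},
    format="parquet",
)
job.commit()

In practice, jobs like this are parameterized and run on a schedule through Glue workflows or an orchestrator such as Airflow.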
Roles & Responsibilities:
- Implement & Maintain AWS Data Services: Deploy, configure, and manage AWS Glue and associated workflows.
- Data Processing & Transformation: Clean, process, and transform raw data into structured and usable formats for analytics and machine learning.
- Develop ETL Pipelines: Design and build ETL/ELT pipelines using Apache Spark, AWS Glue, and AWS Data Pipeline.
- Data Governance & Security: Ensure data quality, integrity, and security in compliance with organizational standards.
- Performance Optimization: Continuously improve data processing pipelines for efficiency and scalability.
- Collaboration: Work closely with data scientists, analysts, and software engineers to enable seamless data accessibility.
- Documentation & Best Practices: Maintain technical documentation and enforce best practices in data engineering.
- Modern Data Transformation: Develop and manage data transformation workflows using dbt (Data Build Tool) to ensure modular, testable, and version-controlled data pipelines.
- Data Mesh & Governance: Contribute to the implementation of data mesh architecture to promote domain-oriented data ownership, decentralized data management, and enhanced data governance.
- Workflow Orchestration: Design and implement data orchestration pipelines using tools like Apache Airflow for managing complex workflows and dependencies.
- SQL Development: Apply strong hands-on SQL programming skills across everyday data tasks (a brief sketch follows this list).
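To illustrate the level of SQL expected, here is a small, self-contained Spark SQL sketch using a window function to find each customer's most recent order. The table, columns, and path are hypothetical:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-sketch").getOrCreate()

# Hypothetical curated dataset registered as a temp view
orders = spark.read.parquet("s3://example-data-lake/curated/orders/")
orders.createOrReplaceTempView("orders")

# Window function: most recent order per customer
latest_orders = spark.sql("""
    SELECT customer_id, order_id, order_ts
    FROM (
        SELECT customer_id, order_id, order_ts,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id ORDER BY order_ts DESC
               ) AS rn
        FROM orders
    ) ranked
    WHERE rn = 1
""")
latest_orders.show()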
Technical Requirements & Skills:
- Proficiency in AWS: Strong hands-on experience with AWS cloud services, including Amazon S3, AWS Glue, Amazon Redshift, and Amazon RDS.
- Expertise in AWS Glue: Deep understanding of AWS Glue, Apache Spark, and AWS Lake Formation.
- Programming Skills: Proficiency in Python and Scala for data engineering and processing.
- SQL Expertise: Strong knowledge of SQL for querying and managing structured data.
- ETL & Data Pipelines: Experience in designing and maintaining ETL/ELT workflows.
- Big Data Technologies: Knowledge of Hadoop, Spark, and distributed computing frameworks.
- Orchestration Tools: Experience with Apache Airflow or similar tools for scheduling and monitoring data workflows (see the DAG sketch after this list).
- Data Transformation Frameworks: Familiarity with dbt (Data Build Tool) for building reliable, version-controlled data transformations.
- Data Mesh Concepts: Understanding of data mesh architecture and its role in scaling data across decentralized domains.
- Version Control & CI/CD: Experience with Git, AWS CodeCommit, and CI/CD pipelines for automated data deployment.
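As a rough illustration of the orchestration work mentioned above, below is a minimal Apache Airflow DAG sketch. The DAG name and task bodies are placeholders, and it assumes Airflow 2.x:

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data")  # placeholder

def transform():
    print("clean and reshape")  # placeholder

def load():
    print("write to warehouse")  # placeholder

with DAG(
    dag_id="daily_orders_etl",  # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    t1 >> t2 >> t3  # linear dependency chain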
Nice to Have:
- AWS Certified Data Analytics - Specialty certification.
- Machine Learning Familiarity: Understanding of machine learning concepts and integration with AWS SageMaker.
- Streaming Data Processing: Experience with Amazon Kinesis or Spark Streaming.
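For the streaming nice-to-have, this is a minimal producer sketch that writes a record to a hypothetical Amazon Kinesis stream with boto3; it assumes AWS credentials and the stream already exist:

import json
import boto3

# Hypothetical stream and region; assumes AWS credentials are configured
kinesis = boto3.client("kinesis", region_name="ap-south-1")

response = kinesis.put_record(
    StreamName="example-clickstream",
    Data=json.dumps({"user_id": 42, "event": "page_view"}).encode("utf-8"),
    PartitionKey="42",  # records with the same key land on the same shard
)
print(response["SequenceNumber"])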
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 4+ years of experience in data engineering, cloud technologies, and AWS Glue.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
If interested, please walk in with your updated CV.
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Engineer
Employment Type: Walk-ins
Contact Details:
Company: Sparity
Location(s): Hyderabad
Keyskills:
Data Engineering
ETL
AWS
Python
Apache Airflow
Scala
AWS Glue