
Job Opportunity | AWS Data Engineer @ Fresh Gravity Software



Job Description

Greetings from Fresh Gravity!

Title: Consultant / Sr. Consultant - AWS Data Engineer
Job Location: Pune, Bangalore, Kolkata
Apply On: 79*******2@jo*s.workablemail.com

Job Overview

We are looking for a savvy Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing data and data pipeline architecture for our clients, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The data engineer must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.


Key Responsibilities include:

  • Create and maintain optimal data pipeline architecture for our clients.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Perform technology stack evaluation and suggest optimal stack for the client.
  • Build the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources using SQL and 'Big data' technologies.
  • Experience in development of data pipelines to integrate data from Azure/AWS/GCP.
  • Work with different stakeholders including the executive, product, data and design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Should be able to keep the data separated and secure as required by the client.
  • Develop and manage data processes to ensure that data is available and usable.
  • Create and automate data pipelines and platforms.
  • Manage and monitor data quality via automated testing frameworks (like Data Driven Testing).
  • Actively manage risks to data and ensure there is a data recovery plan.
  • Build data repositories such as data warehouses, data lakes, and operational data stores.
  • Expert-level experience with Python and other programming languages such as Scala or Java.
  • Experience in developing data pipelines and implementing data lakes on any of AWS, Azure, or GCP.
  • Expertise in any of the cloud platforms/services - AWS services (S3, EC2, EMR, RDS, Redshift, Glue) or Microsoft Azure services (Blob Storage, Data Lakes, Event Hubs, Parser, API Manager).
  • Experience with Snowflake or Google BigQuery is a plus.
  • Expertise in Hive or any NoSQL database such as HBase or Cassandra.
  • Experience with data pipeline and workflow management tools such as SnapLogic, StreamSets, Azkaban, Luigi, Airflow, etc.
  • Experience with stream-processing systems such as Storm, Spark Streaming, etc.
  • Exposure to at least one of the following technologies: Hadoop ecosystem (HDFS, MapReduce), Spark, WSO2, Oozie, Hortonworks Data Platform (HDP).
  • Experience with Informatica Big Data Management and Intelligent Data Lake platforms is a plus.
  • Strong analytic skills related to working with unstructured datasets.
  • Experience in creating secure, performant, and well-modeled data stores.
  • Knowledge of common analytical platform architectural patterns (Star Schema, data integration patterns, ABAC, data quality frameworks, etc.).
  • Knowledge of data lake design patterns and technology options (schema-on-read, metadata capture, search frameworks, etc.).


Responsibilities include:

  • Identify, design, and implement process improvements, automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

Requirements:

  • We prefer candidates with a BS or MS degree in Computer Engineering.
  • Should have 5+ years of work or research experience in software development.
  • Experience with big data technologies (Hadoop, Spark, Kafka, etc.).
  • Experience with relational databases (Oracle, Postgres) and NoSQL databases (Cassandra, MongoDB, etc.).
  • Experience with data pipeline and workflow management tools: SnapLogic, StreamSets, Azkaban, Luigi, Airflow, etc.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
  • Experience with stream-processing systems: Storm, Spark Streaming, etc.
  • Experience with object-oriented/functional scripting languages like Python, Java, C++, Scala, etc.
  • Should have built processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Experience working in the cloud (iPaaS / SaaS).
  • Strong analytic skills related to working with unstructured data sets.
  • Experience with Snowflake is a plus.
  • Should have data modelling experience.
  • Should have been part of at least one data warehousing project.
  • Experience with cloud migration is a plus.

Job Classification

Industry: IT-Software, Software Services
Functional Area: IT Software - System Programming
Role Category: Programming & Design
Role: Programming & Design
Employment Type: Full time

Education

Under Graduation: B.Tech/B.E. in Any Specialization
Post Graduation: Any Postgraduate in Any Specialization, Post Graduation Not Required
Doctorate: Doctorate Not Required

Contact Details:

Company: Fresh Gravity Software Services India Pvt. Ltd.
Location(s): Pune



Keyskills: AWS, Redshift, Talend, S3, Kinesis, Snowflake, Glue, Informatica, Reltio, DevOps, Cloud Security


Salary: ₹ Not Disclosed

Fresh Gravity Software

Fresh Gravity is at the cutting-edge of digital transformation. We drive digital success for our clients by enabling them to adopt transformative technologies that make them nimble, adaptive and responsive to the changing needs of their businesses. Our unparalleled expertise in helping clients in di...