Design and develop a new automation framework for ETL processing.
Support the existing framework and act as the technical point of contact for all related teams.
Enhance the existing ETL automation framework per user requirements.
Performance-tune Spark and Snowflake ETL jobs (a tuning sketch follows this list).
Run proofs of concept on new technologies and analyze their suitability for cloud migration.
Optimize processes through automation and development of new utilities.
Troubleshoot and resolve batch issues.
Support application teams with any queries.
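As an illustration of the performance-tuning responsibility above, here is a minimal PySpark sketch. The table names (orders, customers), join key, partition counts, and output path are all hypothetical; it shows two common levers: broadcasting a small dimension table to avoid a shuffle, and controlling partition counts on write.

    # Minimal PySpark tuning sketch; all table/column names and paths are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = (
        SparkSession.builder
        .appName("etl-tuning-sketch")
        # Size shuffle parallelism to the cluster instead of keeping the default.
        .config("spark.sql.shuffle.partitions", "200")
        .getOrCreate()
    )

    orders = spark.table("orders")        # large fact table
    customers = spark.table("customers")  # small dimension table

    # Broadcasting the small side turns a shuffle join into a map-side join.
    enriched = orders.join(broadcast(customers), "customer_id")

    # Coalesce before writing so the job does not emit thousands of tiny files.
    enriched.coalesce(50).write.mode("overwrite").parquet("/data/out/enriched")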
Required Skills
Must be strong in UNIX shell and Python scripting.
Must be strong in Spark.
Must have strong knowledge of SQL.
Hands-on knowledge of how HDFS, Hive, Impala, and Spark work (a scripting sketch follows this list).
Strong logical reasoning capabilities.
Working knowledge of GitHub, DevOps, and CI/CD/enterprise code-management tools.
Strong team player with excellent collaboration, written, and verbal communication skills.
Ability to create and maintain a positive environment of shared success.
Ability to execute and prioritize tasks and resolve issues without help from a direct manager or project sponsor.
Good to have: working experience with Snowflake and any data integration tool, e.g., Informatica Cloud.
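By way of example for the scripting and HDFS items above, a minimal Python sketch of a batch-support utility, assuming a hypothetical /warehouse/sales partition layout: it checks that today's HDFS partition exists before a downstream Hive/Spark load is triggered.

    # Sketch of a pre-load check; the /warehouse/sales layout is hypothetical.
    import subprocess
    import sys
    from datetime import date

    partition = f"/warehouse/sales/dt={date.today():%Y-%m-%d}"

    # 'hdfs dfs -test -d PATH' exits 0 when the directory exists on HDFS.
    result = subprocess.run(["hdfs", "dfs", "-test", "-d", partition])
    if result.returncode != 0:
        sys.exit(f"Partition missing, aborting load: {partition}")

    print(f"Partition present, safe to trigger the load: {partition}")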
Primary skills
Apache Hadoop
Apache Spark
UNIX shell scripting
Python
SQL
Good-to-have skills:
Any cloud platform: Snowflake, Azure, or AWS
IDMC or any ETL tool
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Big Data Engineer
Employment Type: Full time