Job Description
- Hadoop ecosystem (HDFS, Hive, YARN, file formats such as Avro/Parquet)
- Python programming language (mandatory)
- PySpark
- Excellent SQL skills

Good To Have
- Airflow
- Good aptitude, strong problem-solving and analytical skills, and the ability to take ownership as appropriate
- Able to code, debug, performance-tune, and deploy applications to production
- Ability to work sprint stories to completion, including unit test coverage
- Experience working in Agile methodology
- Excellent communication and coordination skills
- Knowledge of (preferably hands-on experience with) UNIX environments and various continuous integration tools
- Able to integrate quickly into the team and work independently toward team goals

Role & Responsibilities
- Take complete responsibility for the execution of sprint stories
- Be accountable for delivering tasks within the defined timelines and with good quality
- Follow the processes for project execution and delivery; follow Agile methodology
- Work closely with the team lead and contribute to the smooth delivery of the project
- Understand/define the architecture and discuss its pros and cons with the team
- Participate in brainstorming sessions and suggest improvements to the architecture/design
- Work with other team leads to get the architecture/design reviewed
- Work with the clients and US-based counterparts on the project
- Keep all stakeholders updated on project/task status, risks, and issues

(ref: hirist.tech)
Employment Category:
Employment Type: Full time
Industry: Others
Role Category: Others
Functional Area: Not Applicable
Role/Responsibilities: Big Data Engineer - Hadoop/Python