- Must have 3 to 6 years of relevant experience as a Big Data Engineer.
- Minimum 3 years of hands-on experience in Scala with Spark.
- Hands-on experience with major components of the Hadoop ecosystem such as HDFS, MapReduce, Hive, or Impala.
- Strong programming experience building application platforms using Scala.
- Experienced in implementing Spark RDD transformations and actions for business analysis.
- Good to have: hands-on experience in Python.
- An effective communicator with sound analytical, problem-solving, and management skills.
- Keen to keep learning and able to quickly adapt to new environments and technologies.
- Good knowledge of Agile software development methodology.
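For candidates reviewing the RDD requirement above: Spark distinguishes lazy transformations (e.g. `filter`, `map`) from actions that trigger computation (e.g. `reduce`, `sum`). Spark's RDD API deliberately mirrors Scala's collection API, so the pattern can be sketched with plain Scala collections (a minimal illustration, not actual Spark code; the data and `bookTotal` helper are hypothetical, and in Spark the source would be an RDD created via `sc.parallelize` on a `SparkContext`):

```scala
object RddPatternSketch {
  // In Spark the input would be an RDD: sc.parallelize(orders)
  def bookTotal(orders: Seq[(String, Double)]): Double =
    orders
      .filter { case (category, _) => category == "books" } // transformation: rdd.filter
      .map { case (_, amount) => amount }                   // transformation: rdd.map
      .sum                                                  // action: rdd.sum() materialises a result

  def main(args: Array[String]): Unit = {
    val orders = Seq(("books", 120.0), ("games", 80.0), ("books", 45.5))
    println(bookTotal(orders)) // prints 165.5
  }
}
```

In Spark, the two transformations build a lineage graph lazily; only the final action forces execution across the cluster.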
Key skills: Big Data, Hive, Software Development, Scala, Hadoop, HDFS, Spark, MapReduce