About the role:
- Design, build, maintain, and provide production support for big data pipelines and the Hadoop ecosystem.
- Recognize the big picture and take the initiative to solve problems and improve designs.
- Stay aware of current big data technology trends and factor them into design and implementation.
- Document architecture and design, and present them to stakeholders.
- Identify, recommend, coordinate, and deliver timely knowledge to globally distributed teams regarding technologies, processes, and tools.
- Proactively identify and communicate roadblocks.
About you:
- BE or Master's degree in Computer Science or a related technical discipline, or equivalent practical experience.
- Must have 2+ years of software development experience with large-scale distributed systems, client-server architectures, and technologies such as HDFS/S3, MapReduce, Spark, Kafka/Kinesis, Hive, and SQL and NoSQL datastores.
- Must have 5+ years of experience in Java, software design principles and patterns, unit testing, and performance engineering.
- Must have experience with REST APIs and Spring Boot applications.
- Strong problem-solving skills and solid Computer Science fundamentals.
- Experience with AWS CloudFormation, CloudWatch, SQS, and Lambda is a plus.
- Experience working with the Linux operating system.
- Experience working in an Agile environment.
- Excellent communication skills.
Key skills: Computer Science, Software Design, Backend, Linux, Production Support, Agile, Unit Testing, Big Data, Distributed Systems, SQL