Desired Candidate Profile
Greetings for the day!
Hiring for a top MNC in Navi Mumbai. Please refer to the job description below.
Exp: 3-5 years
Notice Period: Immediate to 15 Days Only
Position: Big Data Developer
Location: Mumbai (Airoli)
Job description:
Design and develop scalable, reliable real-time stream processing solutions using the Hortonworks DataFlow (HDF) product suite (NiFi/Kafka/Spark)
Provide expertise and hands-on experience working with Kafka brokers and Kafka connectors
Create topics, set up redundant clusters, deploy monitoring tools and alerts, and apply best practices (a minimal AdminClient sketch follows this list)
Work directly with business partners to translate complex functional and technical requirements for streaming data ingestion solutions into detailed design & implementation plans
Work closely with EA and cross-functional technical resources to devise and recommend solutions based on the understood requirements
Work closely with Platform Engineering team to analyze complex distributed production deployments, and make recommendations to optimize performance
Provide input for capacity planning and sizing of the streaming environment (Kafka/NiFi)
Implement application development lifecycle management and industry standard frameworks
Write and produce technical documentation and knowledge base articles
Work with Production Support to troubleshoot service stability and message topic or delivery issues, and perform data-related benchmarking, performance analysis, and tuning
Perform design & code reviews
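For illustration only (not part of the formal job description), here is a minimal sketch of the topic administration work mentioned above, using Kafka's Java AdminClient. The broker address, topic name, partition count, and retention value are all assumed placeholders, not details from this posting.

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateTopicSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Placeholder broker address; point this at the real cluster.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // 6 partitions, replication factor 3 for broker-level redundancy.
                NewTopic topic = new NewTopic("events", 6, (short) 3)
                        .configs(Map.of("retention.ms", "604800000")); // 7-day retention
                admin.createTopics(List.of(topic)).all().get(); // wait for completion
            }
        }
    }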
Job Requirement:
Minimum 2 years of experience in the development and support of stream processing solutions on Hadoop technologies (any of the Cloudera/MapR/Hortonworks platforms)
Minimum 5 years of software industry and integration solutions development experience
Expert-level knowledge of Kafka and related technologies (Hive, Hadoop, Spark, Storm, NiFi, ZooKeeper, Ambari, Ranger)
Proficient understanding of distributed computing principles
Experience with various messaging systems, such as NiFi, Kafka, or RabbitMQ
Experience with Big Data ML toolkits, such as Mahout, Spark ML, or H2O
Hands-on experience with stream processing using Apache Storm or Spark (a minimal Spark sketch follows this list)
Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
Knowledge of various ETL techniques and frameworks, such as Flume
Candidates from the banking industry are preferred
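As a sketch of the Spark stream-processing experience asked for above (a sketch only, under stated assumptions): a minimal Spark Structured Streaming job in Java that consumes a Kafka topic and echoes records to the console. The broker address and topic name are assumptions, and the spark-sql-kafka-0-10 connector is assumed to be on the classpath.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;

    public class KafkaStreamSketch {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder()
                    .appName("kafka-stream-sketch")
                    .getOrCreate();

            // Subscribe to a placeholder topic; cast the raw bytes to strings.
            Dataset<Row> events = spark.readStream()
                    .format("kafka")
                    .option("kafka.bootstrap.servers", "localhost:9092")
                    .option("subscribe", "events")
                    .load()
                    .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

            // Echo records to the console; a real job would transform and persist them.
            StreamingQuery query = events.writeStream()
                    .format("console")
                    .outputMode("append")
                    .start();
            query.awaitTermination();
        }
    }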
Non-technical but must have skills:
Excellent verbal and written communication skills
Effective in fast-paced, deadline-driven projects, with good time management skills
Should have good presentation skills
Ability to work independently, with strong data analysis skills
Must have a self-learning attitude and the ability to grasp new technologies
Should have worked with overseas clients
Strong analytical and problem-solving ability
Contact Details:
Keyskills:
Cloudera
Hadoop
Cassandra
Big Data
Flume
HBase
RabbitMQ
Hive
NoSQL
MongoDB
Spark