We are looking for engineers with a real passion for data pipelines and hands-on experience developing data applications on Kafka. You will work with our data science team on the development of several data applications.
Mandatory
1. Must have hands-on experience with Kafka Connect and Schema Registry in a very high-volume environment
2. Must have worked with JDBC connectors and APIs
3. Must have worked with Kafka topics, Kafka brokers, ZooKeeper, and Confluent Control Center
4. Must have worked with AvroConverter, JsonConverter, and StringConverter
5. Must have worked on Debezium source/sink connectors implementing CDC
6. Must have implemented custom logic with the Producer API
7. Must have implemented complex transformations with the Consumer API
8. Must have tuned configurations for brokers, topics, producers, consumers, Connect, Streams, and admin clients
9. Must have deployed at least ten source/sink connectors in production
10. Must have worked on distributed computing, parallel processing, and large-scale data management
11. Must have integrated Kafka with RabbitMQ, Redis, or AWS SQS
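Items 6 and 7 above call for custom producer logic and complex consumer-side transformations. As a rough, Kafka-free sketch of the kind of per-record transformation a consumer loop might apply (the record shape and field names here are hypothetical, not from any real pipeline):

```python
import json

def transform(raw_value: bytes) -> dict:
    """Deserialize a JSON-encoded record value, flatten a nested field,
    and derive an aggregate field. All field names are hypothetical."""
    event = json.loads(raw_value)
    return {
        "order_id": event["id"],
        "customer": event["customer"]["name"],          # flatten nested object
        "total_cents": sum(i["price_cents"] * i["qty"]  # derive an aggregate
                           for i in event["items"]),
    }

# In a real consumer loop (e.g. confluent-kafka or kafka-python), transform()
# would be applied to each polled record before producing downstream.
record = json.dumps({
    "id": "o-42",
    "customer": {"name": "Ada"},
    "items": [{"price_cents": 250, "qty": 2}, {"price_cents": 100, "qty": 1}],
}).encode()
print(transform(record))  # → {'order_id': 'o-42', 'customer': 'Ada', 'total_cents': 600}
```

In practice this logic would sit between a consumer's poll loop and a producer send, with error handling and dead-letter routing around it.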
Preferred
1. Good to have worked with the Admin API, Connect API, and Streams API
2. Good to have worked on development of data ingestion platform
3. Good to have worked with KSQL and KStreams
4. Good to have worked with Confluent connectors
5. Good to have built connectors from scratch in Java
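Item 8 in the Mandatory list asks for experience tuning broker, topic, producer, and consumer configuration. As one hedged illustration, a throughput-and-safety-oriented producer configuration fragment might look like the following; the values are illustrative starting points, not recommendations for any particular workload:

```properties
# Producer tuning sketch -- values are illustrative, not prescriptive
bootstrap.servers=broker1:9092,broker2:9092
acks=all
enable.idempotence=true
compression.type=lz4
linger.ms=20
batch.size=65536
max.in.flight.requests.per.connection=5
```

Candidates would be expected to explain trade-offs like these, e.g. how `linger.ms` and `batch.size` trade latency for throughput, and why `acks=all` with idempotence enabled preserves ordering and avoids duplicates.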