Job Description
Meet the Team
We are the Webex Persistence Team within the Webex Service Engineering SRE organization.
Our team designs, builds, and operates highly scalable persistence services powering Webex Teams, Webex Meetings, and the broader Webex Suite. We work across Webex Data Centers and AWS Cloud environments, ensuring high availability, reliability, and performance at a global scale.
As part of this team, you will work on mission-critical infrastructure and platform systems, including data, streaming, and persistence layers, with a strong focus on automation, scalability, and cloud-native engineering.
Your Impact
We are seeking a Senior Kafka DevOps Engineer with strong hands-on experience in Apache Kafka, MirrorMaker, and Kafka Schema Registry, plus 2+ years of DevOps experience.
As a Senior Kafka / DevOps Engineer, you will play a critical role in designing, operating, and optimizing large-scale Kafka infrastructure supporting Webex services.
You will contribute to the development of robust, high-performing systems and promote technical excellence in infrastructure operations and Kafka.
Key Responsibilities
- Design, build, and manage scalable Kafka-based streaming platforms.
- Configure and operate Kafka MirrorMaker for cross-cluster and cross-region replication.
- Administer and govern Kafka Schema Registry (schema lifecycle, compatibility, versioning).
- Monitor and optimize Kafka clusters for performance, reliability, and throughput.
- Troubleshoot broker, producer, consumer, replication, and latency issues.
- Implement security controls (ACLs, encryption, authentication, authorization).
- Collaborate with application teams for event-driven architecture and best practices.
- Define DR/failover strategies and ensure high availability.
- Establish backup, recovery, and disaster recovery processes.
- Monitor system health and reliability using tools like Prometheus and Grafana.
- Contribute to infrastructure automation using Terraform, Ansible, and Python.
Minimum Qualifications
- 8+ years of experience in Kafka design, administration, and cluster management.
- Strong knowledge of Kafka architecture, internals, data distribution, and partitioning.
- Experience with performance tuning, capacity planning, and incident handling.
- Hands-on experience with AWS services such as EC2, VPC, S3, IAM, CloudWatch, and EKS/ECS.
- Experience with infrastructure automation and DevOps practices, including CI/CD tools (Jenkins, GitHub Actions, GitLab CI), Infrastructure as Code tools (Terraform, Ansible), and Python.
Preferred Qualifications
- Experience working with relational and search databases such as PostgreSQL, OpenSearch, or Elasticsearch
- Hands-on experience with containerization and orchestration technologies such as Docker and Kubernetes
- Exposure to modern data platforms, including data pipelines, streaming systems, or AI/ML and GenAI workloads
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: DevOps
Role: DevOps Engineer
Employment Type: Full time
Contact Details:
Company: Cisco
Location(s): Bengaluru
Keyskills:
Performance tuning
Automation
PostgreSQL
ACLs
Disaster recovery
Service engineering
Apache
Cisco
Capacity planning
Python