As an SDE in NetApp's India R&D division, you will be responsible for the development, validation, implementation, and operations of Big Data Engineering software across both cloud and on-prem environments. You will be part of a highly skilled technical team named NetApp Active IQ.
The Active IQ Platform/Datahub processes 10 trillion data points per month, with around 25 PB of data in its data sources. This platform enables advanced AI and ML techniques to uncover opportunities to proactively protect and optimize NetApp storage, and then provides the insights and actions to make it happen. We call this actionable intelligence, and it leads to higher availability, improved security, and simplified administration.
Your focus area will be data engineering projects: as a Data Engineer, you will be responsible for the development and operations of the microservices in Active IQ's big data platform.
This position requires an individual who is creative, team-oriented, technology-savvy, driven to produce results, and able to work across teams.
Your Responsibilities
Build a fault-tolerant, scalable big data platform and solutions, primarily based on open-source technologies (see the consumer sketch after this list).
Interact with Active IQ engineering teams across geographies to leverage expertise and contribute to the tech community.
Deploy and monitor products on both cloud and on-prem platforms.
Work on NoSQL, SQL, and in-memory data platforms.
Develop and implement best-in-class monitoring processes to enable data applications to meet SLAs.
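For a flavor of the microservice work described above, here is a minimal sketch of a Python Kafka consumer that reads telemetry events and flags capacity breaches. It assumes the kafka-python client; the topic name, broker address, and event fields are hypothetical, not Active IQ's actual schema.

```python
# Minimal sketch of a telemetry-monitoring consumer; all names hypothetical.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "storage-telemetry",                 # hypothetical topic name
    bootstrap_servers="localhost:9092",  # hypothetical broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="telemetry-monitor",
)

for message in consumer:
    event = message.value
    # Flag systems whose capacity usage crosses a (hypothetical) threshold.
    if event.get("capacity_used_pct", 0) > 90:
        print(f"ALERT: {event.get('system_id')} at {event['capacity_used_pct']}% capacity")
```

A production version of such a service would publish alerts to another topic or a monitoring system rather than printing them.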
You have a deep interest and passion for technology.
You love writing and owning code and enjoy working with people who will keep challenging you at every stage.
You have strong problem-solving, analytical, and decision-making skills, along with excellent communication and interpersonal skills.
You are self-driven and motivated, with the desire to work in a fast-paced, results-driven agile environment with varied responsibilities.
Strong in CS fundamentals, Unix shell scripting, and database concepts.
Good understanding of data processing pipeline implementation, Kafka, Spark, NoSQL databases (especially MongoDB), and SQL.
Familiarity with GenAI, Agile concepts, and Continuous Integration/Continuous Delivery (CI/CD).
Working knowledge of Linux environments with containers (Docker and Kubernetes) is a plus.
1 to 2 years of experience with Java and Python, writing data pipelines and data processing layers (see the PySpark sketch after this list).
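As an illustration of the pipeline work named in the last item, here is a minimal PySpark structured-streaming sketch that reads events from Kafka and keeps a running count per system. The topic, broker, and schema are hypothetical, and it assumes the spark-sql-kafka connector package is available at submit time.

```python
# Minimal PySpark structured-streaming sketch; all names are hypothetical.
# Requires the spark-sql-kafka-0-10 connector on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("telemetry-counts").getOrCreate()

# Hypothetical event schema for the JSON payload in each Kafka message.
schema = StructType([
    StructField("system_id", StringType()),
    StructField("event_type", StringType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
    .option("subscribe", "storage-telemetry")             # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Running count of events per system, printed to the console for inspection.
query = (
    events.groupBy("system_id").count()
    .writeStream.outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```

In practice, a job like this would write to MongoDB or another sink rather than the console.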

Key skills: Kubernetes, continuous integration, Python, data processing, interpersonal skills, problem solving, Linux internals, SQL, Docker, NoSQL, analytics, continuous delivery, Unix shell scripting, Java, CS fundamentals, Spark, database creation, Kafka, Linux, Agile, MongoDB, communication skills