
Principal Data Engineer @ Hewlett Packard








Job Description

Principal Data Engineer
This role has been designed as Onsite with an expectation that you will primarily work from an HPE office.
Aruba is an HPE company and a leading provider of next-generation network access solutions for the mobile enterprise. Helping some of the largest companies in the world modernize their networks to meet the demands of a digital future, Aruba is redefining the Intelligent Edge and creating new customer experiences across intelligent spaces and digital workspaces. Join us to redefine what's next for you.
Principal Data Engineer (Architect Level)
About the Role
We are looking for a highly skilled Principal Data Engineer with strong architectural expertise to design and evolve our next-generation data platform. You will help to define the technical vision, build scalable and reliable data systems, and guide the long-term architecture that powers analytics, operational decision-making, and data-driven products across the organization.
This role is both strategic and hands-on. You will evaluate modern data technologies, define engineering best practices, and lead the implementation of robust, high-performance data solutions, including the design, build, and lifecycle management of data pipelines that support batch, streaming, and near-real-time workloads.
What You'll Do
Architecture & Strategy
  • Own a significant portion of the architecture of our data platform, ensuring scalability, performance, reliability, and security.
  • Define standards and best practices for data modeling, transformation, orchestration, governance, and lifecycle management.
  • Evaluate and integrate modern data technologies and frameworks that align with our long-term platform strategy.
  • Collaborate with engineering and product leadership to shape the technical roadmap.
Engineering & Delivery
  • Design, build, and manage scalable, resilient data pipelines for batch, streaming, and event-driven workloads.
  • Develop clean, high-quality data models and schemas to support analytics, BI, operational systems, and ML workflows.
  • Implement data quality, lineage, observability, and automated testing frameworks.
  • Build ingestion patterns for APIs, event streams, files, and third-party data sources.
  • Optimize compute, storage, and transformation layers for performance and cost efficiency.
Leadership & Collaboration
  • Serve as a senior technical leader and mentor within the data engineering team.
  • Lead architecture reviews, design discussions, and cross-team engineering initiatives.
  • Work closely with analysts, data scientists, software engineers, and product owners to define and deliver data solutions.
  • Communicate architectural decisions and trade-offs to technical and non-technical stakeholders.
What you need to bring:
  • 8+ years of experience in Data Engineering, with demonstrated architectural ownership.
  • Expert-level experience with Snowflake (mandatory), including performance optimization, data modeling, security, and ecosystem components.
  • Expert proficiency in SQL and strong Python skills for pipeline development and automation.
  • Experience with modern orchestration tools (Airflow, Dagster, Prefect, or equivalent).
  • Strong understanding of ELT/ETL patterns, distributed processing, and data lifecycle management.
  • Familiarity with streaming/event technologies (Kafka, Kinesis, Pub/Sub, etc.).
  • Experience implementing data quality, observability, and lineage solutions.
  • Solid understanding of cloud infrastructure (AWS, GCP, or Azure).
  • Strong background in DataOps practices: CI/CD, testing, version control, automation.
  • Proven leadership in driving architectural direction and mentoring engineering teams.
Nice to Have
  • Experience with data governance or metadata management tools.
  • Hands-on experience with dbt, including modeling, testing, documentation, and advanced features.
  • Exposure to machine learning pipelines, feature stores, or MLOps.
  • Experience with Terraform, CloudFormation, or other IaC tools.
  • Background designing systems for high scale, security, or regulated environments.
Additional Skills:
Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Security-First Mindset, Solutions Design, Testing & Automation, User Experience (UX)
#india #aruba
Job:
Engineering
Job Level:
TCP_05
HPE is an Equal Employment Opportunity/ Veterans/Disabled/LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity.
No Fees Notice & Recruitment Fraud Disclaimer
It has come to HPE's attention that there has been an increase in recruitment fraud, whereby scammers impersonate HPE or HPE-authorized recruiting agencies and offer fake employment opportunities to candidates. These scammers often seek to obtain personal information or money from candidates.
Please note that Hewlett Packard Enterprise (HPE), its direct and indirect subsidiaries and affiliated companies, and its authorized recruitment agencies/vendors will never charge any candidate a registration fee, hiring fee, or any other fee in connection with its recruitment and hiring process. The credentials of any hiring agency that claims to be working with HPE for recruitment of talent should be verified by candidates and candidates shall be solely responsible to conduct such verification. Any candidate/individual who relies on the erroneous representations made by fraudulent employment agencies does so at their own risk, and HPE disclaims liability for any damages or claims that may result from any such communication.

Job Classification

Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Engineer
Employment Type: Full time

Contact Details:

Company: Hewlett Packard
Location(s): Bengaluru



Keyskills: TCP, metadata, Claims, Data modeling, GCP, Machine learning, Data quality, Operations, Analytics, SQL


Salary: Not Disclosed


Hewlett Packard

Hewlett Packard Enterprise is an industry leading Technology Company that enables customers to go further, faster. With the industry’s most comprehensive portfolio, spanning the cloud to the data center to workplace applications, our technology and services help customers around the world mak...