Design, develop, and maintain ETL workflows using Ab Initio.
Manage and support critical data pipelines and data sets across complex, high-volume environments.
Perform data analysis and troubleshoot issues across Teradata and Oracle data sources.
Collaborate with DevOps for CI/CD pipeline integration using Jenkins, and manage deployments in Unix/Linux environments.
Participate in Agile ceremonies including stand-ups, sprint planning, and roadmap discussions.
Support cloud migration efforts, including potential adoption of Azure, Databricks, and PySpark-based solutions.
Contribute to project documentation, metadata management (LDM, PDM), onboarding guides, and SOPs.
Preferred Candidate Profile
3 years of experience in data engineering, with proven expertise in ETL development and maintenance.
Proficiency with Ab Initio tools (GDE, EME, Control Center).
Strong SQL skills, particularly with Oracle or Teradata.
Solid experience with Unix/Linux systems and scripting.
Familiarity with CI/CD pipelines using Jenkins or similar tools.
Strong communication skills and ability to collaborate with cross-functional teams.
Job Classification
Industry: Courier / Logistics (Logistics Tech)
Functional Area / Department: Data Science & Analytics
Role Category: Data Science & Analytics - Other
Role: Data Science & Analytics - Other
Employment Type: Full Time