- We are implementing a Media Mix Optimization (MMO) platform designed to analyze and optimize marketing investments across multiple channels
- This initiative requires a robust on-premises data infrastructure to support distributed computing, large-scale data ingestion, and advanced analytics
- The Data Engineer will be responsible for building and maintaining resilient pipelines and data systems that feed into MMO models, ensuring data quality, governance, and availability for Data Science and BI teams
- The environment integrates HDFS for distributed storage, Apache NiFi for orchestration, Hive and PySpark for distributed processing, and Postgres for structured data management
- This role is central to enabling seamless integration of massive datasets from disparate sources (media, campaign, transaction, customer interaction, etc.), standardizing data, and providing reliable foundations for advanced econometric modeling and insights.
Responsibilities:
Data Pipeline Development & Orchestration
o Design, build, and optimize scalable data pipelines in Apache NiFi to automate ingestion, cleansing, and enrichment from structured, semi-structured, and unstructured sources.
o Ensure pipelines meet low-latency and high-throughput requirements for distributed processing.
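As a minimal sketch of the kind of cleansing and enrichment step such a pipeline automates (the field names and validation rules here are hypothetical, not from this spec), the logic a scripted processor or downstream job might apply to each record:

```python
import json

# Hypothetical raw records as they might arrive from a campaign feed.
RAW = [
    {"campaign_id": " C-001 ", "spend": "1200.50", "channel": "TV"},
    {"campaign_id": "C-002", "spend": "n/a", "channel": "digital"},  # invalid spend
    {"campaign_id": "C-003", "spend": "300", "channel": " Radio "},
]

def cleanse(record):
    """Normalize one record; return None if it fails validation."""
    try:
        spend = float(record["spend"])
    except (ValueError, KeyError):
        return None  # in a real flow, route to a quarantine/failure path
    return {
        "campaign_id": record["campaign_id"].strip(),
        "spend": round(spend, 2),
        "channel": record["channel"].strip().lower(),
    }

clean = [r for r in (cleanse(rec) for rec in RAW) if r is not None]
print(json.dumps(clean))
```

Records that fail validation are dropped rather than corrected, mirroring the usual pattern of routing bad data to a quarantine path for inspection instead of silently repairing it.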
Data Storage & Processing
o Architect and manage datasets on HDFS to support high-volume, fault-tolerant storage.
o Develop distributed processing workflows in PySpark and Hive to handle large-scale transformations, aggregations, and joins across petabyte-scale datasets.
o Implement partitioning, bucketing, and indexing strategies to optimize query performance.
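Bucketing in Hive assigns each row to a fixed bucket by hashing the clustering column modulo the bucket count, so joins and samples on that column touch fewer files. A pure-Python sketch of that assignment (the column values and bucket count are illustrative; the hash mimics Java's polynomial string hash, which Hive uses for strings):

```python
NUM_BUCKETS = 8  # illustrative; chosen per table at CREATE TABLE time

def bucket_for(key: str, num_buckets: int = NUM_BUCKETS) -> int:
    """Assign a string key to a bucket, Hive-style: hash(key) mod num_buckets."""
    h = 0
    for ch in key:
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF  # 32-bit polynomial string hash
    h &= 0x7FFFFFFF  # force non-negative before the modulo
    return h % num_buckets

rows = ["cust-001", "cust-002", "cust-003"]
assignments = {r: bucket_for(r) for r in rows}
print(assignments)
```

Because the same key always hashes to the same bucket, two tables bucketed the same way on the same column can be joined bucket-by-bucket without a full shuffle.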
Database Engineering & Management
o Maintain and tune Postgres databases for high availability, integrity, and performance.
o Write advanced SQL queries for ETL, analysis, and integration with downstream BI/analytics systems.
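One common pattern behind such ETL queries is a window function that aggregates within a partition, e.g. a per-channel running total materialized for downstream BI. A self-contained sketch using SQLite as a stand-in for Postgres (table and column names are hypothetical), since both support the same standard window-function syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE media_spend (channel TEXT, month TEXT, spend REAL);
    INSERT INTO media_spend VALUES
        ('tv',     '2024-01', 100.0),
        ('tv',     '2024-02', 150.0),
        ('search', '2024-01',  80.0),
        ('search', '2024-02',  60.0);
""")

# Running total of spend per channel, ordered by month -- the kind of
# derived column an ETL step would materialize for reporting.
rows = conn.execute("""
    SELECT channel, month, spend,
           SUM(spend) OVER (PARTITION BY channel ORDER BY month)
               AS running_spend
    FROM media_spend
    ORDER BY channel, month
""").fetchall()

for r in rows:
    print(r)
```

In Postgres the identical query runs unchanged; only the connection setup differs.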
Collaboration & Integration
o Partner with Data Scientists to deliver clean, reliable datasets for model training and MMO analysis.
o Work with BI engineers to ensure data pipelines align with reporting and visualization requirements.
Monitoring & Reliability Engineering
o Implement monitoring, logging, and alerting frameworks to track data pipeline health.
o Troubleshoot and resolve issues in ingestion, transformations, and distributed jobs.
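A minimal sketch of the kind of health check such a framework wraps around each pipeline stage (the metric, thresholds, and batch names are illustrative assumptions, not from this spec):

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("pipeline.health")

# Illustrative rule: alert if a batch falls below 50% of the expected
# record count, warn below 90%, otherwise report healthy.
EXPECTED_RECORDS = 10_000

def check_batch(batch_name: str, record_count: int) -> str:
    """Log batch health and return 'ok', 'warn', or 'alert'."""
    ratio = record_count / EXPECTED_RECORDS
    if ratio < 0.5:
        log.error("ALERT %s: only %d/%d records ingested",
                  batch_name, record_count, EXPECTED_RECORDS)
        return "alert"
    if ratio < 0.9:
        log.warning("%s: %d/%d records, below expected volume",
                    batch_name, record_count, EXPECTED_RECORDS)
        return "warn"
    log.info("%s: %d records OK", batch_name, record_count)
    return "ok"
```

In practice the returned status would feed an alerting channel (pager, dashboard) rather than just the log, but the threshold-and-route structure is the same.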
Data Governance & Compliance
o Enforce standards for data quality, lineage, and security across systems.
o Ensure compliance with internal governance and external regulations.
Documentation & Knowledge Transfer
o Develop and maintain comprehensive technical documentation for pipelines, data models, and workflows.
o Provide knowledge sharing and onboarding support for cross-functional teams.