
Lead PySpark Developer @ Synechron



Job Description

Job Requisition ID: JR1027452

Overall Responsibilities:
  • Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
  • Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
  • Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
  • Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes.
  • Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
  • Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
  • Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
  • Collaboration: Work closely with data engineers, analysts, product managers, and other stakeholders to understand data requirements and support data-driven initiatives.
  • Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.
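The ingest → cleanse/transform → validate stages described above can be sketched end-to-end. This is a plain-Python illustration of the stage logic only — in this role the same shape would be expressed with PySpark DataFrames on CDP — and every field name and cleansing rule here is hypothetical:

```python
import csv
import io

def cleanse(records):
    """Drop rows missing required fields and normalise types (hypothetical rules)."""
    out = []
    for r in records:
        if not r.get("id") or not r.get("amount"):
            continue  # reject incomplete rows
        out.append({"id": r["id"].strip(),
                    "amount": round(float(r["amount"]), 2)})
    return out

def validate(records):
    """Basic data-quality checks: unique keys, non-negative amounts."""
    ids = [r["id"] for r in records]
    assert len(ids) == len(set(ids)), "duplicate keys in batch"
    assert all(r["amount"] >= 0 for r in records), "negative amount"
    return records

def run_pipeline(raw_csv):
    records = list(csv.DictReader(io.StringIO(raw_csv)))  # ingest
    return validate(cleanse(records))                     # transform + QA

print(run_pipeline("id,amount\n a1 ,10.25\n,3\na2,7.5\n"))
```

In a PySpark version, `cleanse` would become DataFrame `filter`/`withColumn` calls and `validate` would run as aggregate checks before the load step, but the pipeline decomposition is the same.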


  • Category-wise Technical Skills:
  • PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
  • Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
  • Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
  • Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
  • Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
  • Scripting and Automation: Strong scripting skills in Linux.
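At its core, what Oozie or Airflow provide for the pipelines above is running tasks in dependency order. That scheduling idea can be sketched with the standard library alone (task names are hypothetical; a real Airflow DAG or Oozie workflow adds retries, scheduling, and distributed execution on top):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline tasks mapped to their upstream dependencies.
dag = {
    "ingest":    set(),
    "transform": {"ingest"},
    "quality":   {"transform"},
    "load":      {"transform", "quality"},
}

def run(dag, action):
    """Execute each task once all of its dependencies have finished."""
    order = []
    for task in TopologicalSorter(dag).static_order():
        action(task)
        order.append(task)
    return order

print(run(dag, lambda task: None))
```

The `TopologicalSorter` guarantees `load` never runs before both `transform` and `quality` have completed — the same invariant an orchestrator enforces across cluster nodes.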

  • Experience:
  • 5-12 years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
  • Proven track record of implementing data engineering best practices.
  • Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform.

  • Day-to-Day Activities:
  • Design, develop, and maintain ETL pipelines using PySpark on CDP.
  • Implement and manage data ingestion processes from various sources.
  • Process, cleanse, and transform large datasets using PySpark.
  • Conduct performance tuning and optimization of ETL processes.
  • Implement data quality checks and validation routines.
  • Automate data workflows using orchestration tools.
  • Monitor pipeline performance and troubleshoot issues.
  • Collaborate with team members to understand data requirements.
  • Maintain documentation of data engineering processes and configurations.
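One common lever for the "performance tuning and optimization" activity above is cutting shuffle volume by combining values per key within each partition before data is exchanged — the behaviour behind preferring PySpark's `reduceByKey` over `groupByKey`. The effect can be simulated in plain Python with hypothetical partitions:

```python
from collections import Counter

# Hypothetical RDD partitions of (key, count) pairs.
partitions = [
    [("a", 1), ("b", 1), ("a", 1)],   # partition 1
    [("a", 1), ("b", 1), ("b", 1)],   # partition 2
]

def local_combine(part):
    """Pre-aggregate per key within one partition (map-side combine)."""
    acc = Counter()
    for key, value in part:
        acc[key] += value
    return list(acc.items())

# Records crossing the shuffle without any map-side combine:
shuffled_naive = sum(len(p) for p in partitions)

# Records crossing the shuffle after combining within each partition:
combined = [local_combine(p) for p in partitions]
shuffled_combined = sum(len(p) for p in combined)

# Final reduce after the (smaller) shuffle.
totals = Counter()
for part in combined:
    for key, value in part:
        totals[key] += value

print(shuffled_naive, shuffled_combined, dict(totals))
```

The totals are identical either way; only the number of records moved across the network shrinks, which is where much of the runtime of wide PySpark transformations goes.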

  • Qualifications:
  • Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
  • Relevant certifications in PySpark and Cloudera technologies are a plus.

  • Soft Skills:
  • Strong analytical and problem-solving skills.
  • Excellent verbal and written communication abilities.
  • Ability to work independently and collaboratively in a team environment.
  • Attention to detail and commitment to data quality.

  • Job Classification

    Industry: IT Services & Consulting
    Functional Area / Department: Engineering - Software & QA
    Role Category: Software Development
    Role: Data Engineer
    Employment Type: Full-time

    Contact Details:

    Company: Synechron
    Location(s): Chennai



    Keyskills: Cloudera, Hive, PySpark, Linux, Hadoop, Scala, Amazon Redshift, data warehousing, EMR, SQL, Docker, Apache, Java, Spark, GCP, ETL, big data, HBase, data lake, Python, Oozie, Airflow, Microsoft Azure, Impala, data engineering, NoSQL, Amazon EC2, MapReduce, Kafka, Sqoop, AWS


    Salary: Not Disclosed


    Synechron

    We are a digital solutions and technology services company that partners with global organizations across industries to achieve digital transformation. With a strong track record of innovation, investment in digital solutions, and commitment to client success, at Synechron, you can help clients achieve...