Excellent programming skills in Python with object-oriented design
Strong programming skills in PySpark
Experience in designing solutions for business applications and deploying them
Hands-on experience working with relational databases such as PostgreSQL
Hands-on experience developing ETL solutions on big data clusters (e.g., MapR) or on cloud platforms such as Azure
Hands-on experience with workflow schedulers such as Airflow and Oozie
Hands-on experience with Azure Databricks and Azure Data Factory is an added advantage
Good knowledge of microservices, Docker, and containers is a plus
Experience creating CI/CD pipelines is an added advantage
Excellent problem-solving skills
Experience in the automotive field and exposure to multicultural environments is an added advantage
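The ETL pipeline work this role centres on can be sketched, in greatly simplified form, as an extract-transform-load flow. This is a plain-Python illustration of the pattern only; the names (sensor readings, load_to_sink) are hypothetical, and a production pipeline of the kind described here would use PySpark DataFrames on a cluster rather than in-memory lists.

```python
# Hypothetical, simplified ETL sketch: parse raw ADAS-style sensor rows,
# filter/normalise them, and write them to a sink. In practice each stage
# would be a distributed PySpark transformation, not a Python generator.

def extract(raw_rows):
    """Extract: parse raw CSV-style strings into records."""
    for row in raw_rows:
        vehicle_id, sensor, value = row.split(",")
        yield {"vehicle_id": vehicle_id, "sensor": sensor, "value": float(value)}

def transform(records):
    """Transform: drop invalid readings and normalise sensor names."""
    for rec in records:
        if rec["value"] >= 0:  # negative values treated as sensor errors
            rec["sensor"] = rec["sensor"].strip().lower()
            yield rec

def load_to_sink(records, sink):
    """Load: append records to a sink (stand-in for a table or cluster write)."""
    sink.extend(records)
    return sink

raw = ["V1,RADAR,3.2", "V1,lidar,-1.0", "V2,Camera,0.8"]
sink = load_to_sink(transform(extract(raw)), [])
# sink now holds the two valid, normalised readings
```

The same three-stage shape carries over directly to PySpark (read, filter/withColumn, write) and to orchestration in a scheduler such as Airflow, where each stage becomes a task.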
Key Job Responsibilities:
Architecting, developing, and deploying end-to-end pipelines to ingest data from various ADAS systems into the clusters
Develop scalable solutions and improve the efficiency and reliability of existing applications
Develop and deploy new features based on business needs
Ideate and iterate on requirements through continuous innovation and the application of new techniques and methods
Understand new feature requirements from counterparts, break down tasks, and participate in planning
Good communication skills and the ability to work in a team and contribute as a strong team player
Employment Category:
Employment Type: Full time
Industry: IT - Software
Role Category: General / Other Software
Functional Area: Not Applicable
Role/Responsibilities: Big Data Architect