Job Description
3-10 years of experience in Python and PySpark; good understanding of SQL
Familiar with the AWS cloud platform and its tools and services (e.g., S3, EC2, EMR, Spark, DynamoDB, Athena)
Understanding of big data processing tools such as Spark and Airflow
Knowledge of machine learning and deep learning algorithms
Solid understanding of data structures and algorithms, NoSQL databases, etc.
Hands-on DevOps tooling skills with Jira/Confluence, Git, and a CI/CD tool
Good understanding of big data platforms and tools such as Hadoop, Spark, etc.
Knowledge of professional software engineering practices & best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
Experience working with AWS services
Experience with web crawling and developing APIs
Machine learning libraries such as Keras, TensorFlow, scikit-learn
Employment Category:
Employment Type: Full time
Industry: Full time
Functional Area: IT
Role Category: IT
Role/Responsibilities: Looking for an immediately available Data Engineer.
Contact Details:
Company: Apar Technologies
Location(s): Mumbai