Data Monitoring
Job Role: Full-Time
Experience: 7 to 10 years
Job Location: Hyderabad, Pune, Noida, Chennai and Bengaluru
Job Description:
- Experience integrating different data sources with a Data Lake is required
- Experience performing design, hands-on development, and deployment using Hadoop, Spark, Scala, Hive, Kafka, SQL, Oozie
- Experience in optimal extraction, transformation, and loading of data from a wide variety of data sources
- Experience working with the Big Data ecosystem, including tools such as Hadoop, Spark, Scala, Hive, Kafka, SQL, Oozie
- Develop and maintain scalable data pipelines and build out new data source integrations to support continuing increases in data volume and complexity
- Experience building and optimizing Big Data pipelines and data sets
- Extensive experience with SQL
- Experience solving streaming use cases using Spark and Kafka (see the illustrative sketch below)

Mandatory Skills: Hadoop, Spark, Scala, Hive, Kafka, SQL, Oozie
Nice to have skills: Python, Airflow
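
As a rough illustration of the streaming use case mentioned in the requirements, the sketch below shows a minimal Spark Structured Streaming job in Scala that consumes events from a Kafka topic and produces windowed counts per key. The broker address, topic name, and message layout are placeholder assumptions for illustration only, not details from this posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch: consume a Kafka topic with Spark Structured Streaming and
// count events per key in 1-minute windows. The broker address, topic name,
// and message layout below are illustrative assumptions.
object StreamingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-streaming-sketch")
      .getOrCreate()

    import spark.implicits._

    // Read raw events from Kafka; "events" topic and localhost broker are placeholders.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "events")
      .load()

    // Kafka delivers key/value as binary; cast to strings and keep the event timestamp.
    val events = raw.selectExpr(
      "CAST(key AS STRING) AS eventKey",
      "CAST(value AS STRING) AS payload",
      "timestamp"
    )

    // Windowed count per key, with a watermark to bound state for late-arriving data.
    val counts = events
      .withWatermark("timestamp", "5 minutes")
      .groupBy(window($"timestamp", "1 minute"), $"eventKey")
      .count()

    // Write running aggregates to the console; a real pipeline would target a
    // sink such as Hive, HDFS, or another Kafka topic instead.
    val query = counts.writeStream
      .outputMode("update")
      .format("console")
      .option("truncate", "false")
      .start()

    query.awaitTermination()
  }
}
```

Running a job like this assumes the spark-sql-kafka connector is on the classpath; in practice, the Oozie or Airflow orchestration mentioned in the posting would schedule or supervise such a pipeline.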
