Streaming Data: Technical Skills Requirements
Mandatory Skills:
- Hands-on experience with Spark, Scala, and AWS (Lambda, Glue, S3)
- Experience: 5+ years of solid hands-on and solution-architecting experience with Big Data technologies (AWS preferred)
- Hands-on experience with AWS DynamoDB, EKS, Kafka, Kinesis, Glue, and EMR
- Hands-on programming experience in Scala with Spark
- Hands-on working experience with at least one data engineering/analytics platform (Hortonworks, Cloudera, MapR, AWS); AWS preferred
- Hands-on working experience with AWS services such as EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, and Lake Formation
- Hands-on working experience with AWS Athena
- Data warehouse exposure to Apache NiFi, Apache Airflow, and Kylo
- Operationalization of ML models on AWS (e.g. deployment, scheduling, model monitoring)
- Hands-on working experience analysing source-system data and data flows, working with both structured and unstructured data
- Must be very strong in writing SQL queries
- Strengthen the data engineering team with Big Data solutions
- Strong technical, analytical, and problem-solving skills
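To illustrate the CloudFormation and Glue items above, here is a minimal sketch of a Scala Glue job declared in CloudFormation; the job name, IAM role, and script location are hypothetical placeholders, not part of this role's actual stack:

```yaml
# Hypothetical CloudFormation fragment: a Scala/Spark ETL job on AWS Glue.
Resources:
  StreamingEtlJob:
    Type: AWS::Glue::Job
    Properties:
      Name: streaming-etl-job               # hypothetical job name
      Role: !GetAtt GlueJobRole.Arn         # assumes an IAM role defined elsewhere in the template
      Command:
        Name: glueetl                       # Spark ETL job type
        ScriptLocation: s3://example-bucket/scripts/etl.scala  # hypothetical script path
      DefaultArguments:
        "--job-language": scala             # run the job in Scala rather than the Python default
      GlueVersion: "4.0"
```

This is the kind of infrastructure-as-code artifact the CloudFormation/Glue requirements refer to, not a prescribed template.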

Key skills: Scala, Spark, Hadoop, AWS, Big Data technologies