Streaming Data: Technical Skills Requirements
Experience: 5+ years of solid hands-on and solution-architecting experience in Big Data technologies (AWS preferred)
Skills Required:
- Hands-on experience with AWS DynamoDB, EKS, Kafka, Kinesis, Glue, and EMR
- Hands-on experience with a programming language such as Scala with Spark
- Good command of and working experience with Hadoop MapReduce, HDFS, Hive, HBase, and/or NoSQL databases
- Hands-on working experience with any of the data engineering/analytics platforms (Hortonworks, Cloudera, MapR, AWS), AWS preferred
- Hands-on experience with data ingestion using Apache NiFi, Apache Airflow, Sqoop, and Oozie
- Hands-on working experience with data processing at scale using event-driven systems and message queues (Kafka, Flink, Spark Streaming)
- Data warehouse exposure with Apache NiFi, Apache Airflow, and Kylo
- Operationalization of ML models on AWS (e.g., deployment, scheduling, model monitoring)
- Feature engineering and data processing to be used for model development
- Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.)
- Experience building data pipelines for structured/unstructured data, real-time/batch processing, and synchronous/asynchronous events using MQ, Kafka, and stream processing
- Hands-on working experience analysing source-system data and data flows, and working with structured and unstructured data
- Must be very strong in writing SQL queries.
Coforge is a global digital services and solutions provider that enables its clients to transform at the intersect of domain expertise and emerging technologies to achieve real-world business impact. A focus on very select industries, a detailed understanding of the underlying processes of those in...