Roles & Responsibilities:
Designing, developing, and maintaining large-scale big data platforms using technologies such as Hadoop, Spark, and Kafka
Creating and managing data warehouses, data lakes and data marts
Implementing and optimizing ETL processes and data pipelines
Developing and maintaining security and access controls
Troubleshooting and resolving big data platform issues
Collaborating with other teams to ensure the consistency and integrity of data
Fostering a collaborative and supportive work environment that promotes open communication and teamwork
Demonstrating strong leadership, inspiring and motivating team members to perform at their best
Skills Requirements:
Experience with big data processing technologies such as Apache Hadoop, Apache Spark, or Apache Kafka.
Understanding of distributed computing concepts such as MapReduce, Spark RDDs, or Apache Flink data streams.
Familiarity with big data storage solutions such as HDFS, Amazon S3, or Azure Data Lake Storage.
Knowledge of big data processing frameworks such as Apache Hive, Apache Pig, or Apache Impala.
Ability to lead and manage a team of professionals to achieve organizational goals.
Ability to provide guidance, support, and mentorship that helps employees grow professionally, with a focus on career management.

Keyskills: hive, apache pig, impala, spark, hadoop, azure data lake, data processing, big data technologies, microsoft azure, apache flink, data engineering, data mart, apache mapreduce, data science, kafka, aws, etl, data integration, data lake, etl process
If you're thinking scale, think bigger and don't stop there. At Walmart Global Tech India, we don't just innovate, we enable transformations across stores and different channels for the Walmart experience.

A regular day at Walmart Global Tech India means using tech...