Job Title: Digital: BigData and Hadoop Ecosystems

Overview
The Digital: BigData and Hadoop Ecosystems position plays a critical role in harnessing the vast amounts of data organizations generate every day. As organizations continue to prioritize data-driven decision-making, this role ensures efficient management and processing of big data through the Hadoop ecosystem. By leveraging technologies such as the Hadoop Distributed File System (HDFS), Apache Spark, and related data processing frameworks, the incumbent will drive data collection, analysis, and interpretation efforts. The role is pivotal in constructing robust, scalable data architectures that support business operations and enhance analytics capabilities. Given the rapid growth of data in the digital landscape, the expertise brought to this position will equip business units with the insights needed to achieve strategic objectives. The successful candidate will collaborate closely with data scientists, analysts, and other IT professionals to build a data-driven culture and support ongoing innovation in data practices.

Key Responsibilities
- Design, implement, and maintain scalable Hadoop architectures.
- Develop and execute ETL processes to ingest data from various sources.
- Analyze large datasets to extract actionable insights.
- Implement data storage solutions using HDFS and related technologies.
- Optimize the performance of Hadoop jobs and clusters.
- Collaborate with cross-functional teams on big data projects.
- Utilize Apache Spark for large-scale data processing.
- Monitor and troubleshoot issues within the Hadoop ecosystem.
- Conduct data quality assurance and validation checks.
- Provide technical leadership in big data initiatives.
- Design and develop data models and schemas.
- Create prototypes and proofs of concept for data solutions.
- Maintain documentation of data architecture and processes.
- Train and mentor junior staff on Hadoop and big data technologies.
- Stay current with emerging technologies in big data and analytics.

Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- A minimum of 5 years of experience working with big data technologies.
- Extensive experience with the Hadoop ecosystem (Hadoop, Spark, HDFS, etc.).
- Strong proficiency in SQL and relational database management.
- Hands-on programming experience in Python or Java.
- Familiarity with ETL tools and processes.
- Knowledge of cloud computing platforms for big data (AWS, Azure, Google Cloud).
- Experience with data modeling and normalization techniques.
- Ability to analyze and interpret complex datasets.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.
- Familiarity with data visualization tools (Tableau, Power BI, etc.).
- Experience working with NoSQL databases (e.g., MongoDB, Cassandra).
- Proven ability to manage multiple projects and deadlines.
- Certifications in big data technologies are a plus.
- Experience with Agile methodologies preferred.

Skills: Hadoop, HDFS, Apache Spark, big data, ETL processes, SQL, Python, Java, data modeling, data visualization tools (Tableau, Power BI), NoSQL databases (e.g., MongoDB, Cassandra), cloud computing (AWS, Azure, Google Cloud), Agile methodologies
Employment Category:
Employment Type: Full time
Industry: IT Services & Consulting
Role Category: Application Programming / Maintenance
Functional Area: Not Specified
Role/Responsibilities: Digital BigData and Hadoop Ecosystems Job in