
Big-data Technical Leads/ Architects @ Datametica

Desired Candidate Profile

Experience: 7+ Years

Primary Skills: Pig, Hive, Sqoop, Flume, MapReduce, HBase, Hadoop

DataMetica is seeking a Big Data Architect with expert-level experience in Big Data technologies and in implementing large-scale distributed data processing systems. This is a very challenging role at DataMetica, with the opportunity to build innovative Hadoop/Big Data products and solutions. You will be responsible for building end-to-end solutions from scratch, as well as for improving the existing distributed system architecture in our client environments. As an Architect you will also be part of the COE, where you will be involved in solving complex business problems, comparing technologies, and analyzing and extending distributed architecture patterns and solutions. You will work directly with clients' Chief Architects, CTOs, and CEOs.
Responsibilities:

  • Hands-on technical role; contribute to all phases of the software development lifecycle, including analysis, architecture, design, implementation, and QA
  • Collaborate on requirements; work with the Engineering, Product Management, and Client Success teams to define features that improve results for our customers and for us alike
  • Partner with our Data Mining and Analytics team to do analysis, build predictive models and optimization algorithms, and run experiments
  • Work closely with Operations/IT to assist with the requirements and design of Hadoop clusters that handle very large scale; help troubleshoot operational issues
Requirements:

  • The Hadoop Architect should have a solid background in the fundamentals of computer science, distributed computing, and large-scale data processing, as well as mastery of database design and data warehousing. The person should have a high degree of self-motivation, an unwavering commitment to excellence, a strong work ethic, a positive attitude, and be fun to work with.
  • Expertise in building massively scalable distributed data processing solutions with Hadoop, Hive & Pig
  • Proficiency with Big Data processing technologies (Hadoop, HBase, Flume, Oozie)
  • Deep experience with distributed systems, large-scale non-relational data stores, MapReduce systems, data modeling, database performance, and multi-terabyte data warehouses
  • Experience in data analytics, data mining, and predictive modeling
  • Experience building data pipelines and analysis tools using Java and Python
  • Hands-on Java experience building scalable solutions
  • Experience building large-scale server-side systems with distributed processing algorithms.
  • Aptitude to independently learn new technologies
  • Strong problem solving skills
  • Experience designing or implementing systems which work with external vendors' interfaces
  • Ability to communicate with internal teams.
  • Exposure to ISMS policies and procedures.

Education:

UG:   B.Tech/B.E. - Computers


Keyskills:   Hadoop, Pig, Hive, Sqoop, Flume, HBase, MapReduce


Salary: Not Disclosed

Datametica Solutions Pvt Ltd

DataMetica is the leader in Big Data architecture, Advanced Analytics, and Big Data Operations, focused on serving large global companies. We provide a fast and reliable integration of Hadoop and related technologies into enterprise operations. Our team is comprised of...