What job we expect you to do:
- Gather and understand client objectives in the data engineering space; propose optimal database management/ETL pipeline solutions and help develop the overall roadmap
- Collaborate effectively with the Sales team to enhance our pre-sales efforts in the data engineering space
- Guide the team working on various data engineering projects
- Connect, design, schedule, and deploy data warehouse systems
- Develop data pipelines and enable dashboards for stakeholders
- Develop, construct, test, and maintain system architectures
- Create best practices for data loading and extraction
- Lead quick POCs for data-centric development tasks
What skills we expect you to bring:
- Strong programming skills; well versed in object-oriented programming (OOP), data structures, and algorithms
- Comfortable executing ETL (Extract, Transform, Load) processes, including data ingestion, data cleaning, and curation into a data warehouse, database, or data platform
- Comfortable with SQL (mandatory), Python (mandatory), and Scala (good to have) to manipulate and prepare data and conduct various analyses as needed
- Experience with the AWS/Azure stack
- Experience working on a cloud-based data warehousing platform such as Snowflake would be an added advantage
- Comfortable with schema design
- Experience in distributed computing environments
- Experience in structured/unstructured data and batch processing/real-time processing (good to have)
- Reading/writing data to/from various sources - APIs, cloud storage, databases, big data platforms
- Experience working with Big Data environments such as Hadoop and its ecosystem
- Creating web services to support create, read, update, and delete (CRUD) operations
- Competent in project management frameworks such as Agile
- Prior experience working in the CPG and/or Retail sector would be an added advantage
- Good leadership skills
- Excellent communication skills, both written and verbal
What Tools and Technologies we expect you to know (we understand one cannot be a master of all):
- SQL
- Python - pandas, pySpark
- Hadoop ecosystem (HDFS, Hive, MapReduce, Pig, etc.)
- Apache Spark
- Snowflake
- AWS/Azure
- Kafka
- Linux
- Airflow
How many years of experience you need:
At least 8 years of relevant experience
Where the job will be located:
We are ready to welcome you at Noida and Gurgaon, and we are also open to any other location in India.
Gilbarco Inc., doing business as Gilbarco Veeder-Root, is a supplier of fuel dispensers, point-of-sale systems, payment systems, forecourt merchandising, and support services. The company operates as a subsidiary of Fortive, and its headquarters are located in Greensboro, North Carolina, United States.