Implement and support new data solutions in the data lake/data warehouse built on Snowflake.
Design and develop data pipelines using Python, Spark and Snowflake (an illustrative sketch follows this list).
Implement infrastructure as code using CloudFormation and Terraform.
Design and implement continuous integration/continuous deployment (CI/CD) pipelines.
Perform data modelling based on downstream requirements.
Develop transformation scripts using advanced SQL and dbt.
Create reports and dashboards using business intelligence tools such as Looker and Tableau.
Write test cases and scenarios to ensure incident-free production releases.
Collaborate closely with data analysts and with domains such as finance, risk, compliance, product and engineering to fulfil requirements.
Identify areas of improvement in data pipelines, Snowflake, infrastructure, etc.
Conduct peer reviews of code.
Debug production and development issues and provide support to colleagues where necessary.
Perform data quality checks to ensure the quality of the data exposed to end users.
Perform production deployments and provide post-production support and validation (we follow a "you build it, you run it" philosophy).
Build strong relationships with the team, peers and stakeholders.
Contribute to the overall data platform implementation.
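As an illustration of the pipeline development described above, below is a minimal sketch of a PySpark job that reads raw events from the data lake and writes a cleaned table to Snowflake via the Spark-Snowflake connector. All names used here (paths, credentials, database, schema, table) are hypothetical placeholders, not details of this role.

# Minimal sketch: load raw events from S3 and publish a cleaned table to Snowflake.
# All identifiers (bucket, schema, table, credentials) are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("events_to_snowflake")
    .getOrCreate()
)

# Read raw JSON events landed in the data lake (hypothetical path).
events = spark.read.json("s3://example-datalake/raw/events/2024-01-01/")

# Basic cleaning before the data is exposed downstream.
cleaned = (
    events
    .filter(F.col("event_type").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
)

# Connection options for the Spark-Snowflake connector (placeholders only).
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "ETL_USER",
    "sfPassword": "***",  # in practice, fetch from a secrets manager
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "TRANSFORM_WH",
}

# Write the cleaned dataset to a Snowflake staging table.
(
    cleaned.write
    .format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "STG_EVENTS")
    .mode("overwrite")
    .save()
)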
Proficient in SQL/Python/Spark
Exposure to dbt would be preferable
Experience with AWS services such as Glue, Lambda, S3, DynamoDB, RDS, Kinesis, ECS/Fargate.
Experience working with modern data platforms such as Redshift or Snowflake
Experience working with BI tools such as Looker and Tableau
Experience working with Docker
Proficient in data modelling.
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: DBA / Data Warehousing
Role: Database Architect / Designer
Employment Type: Full time