Design, build, and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services: DynamoDB, Redshift, Kinesis, Lambda, S3, Glue.
- Hands-on experience using AWS management tools (CloudWatch, CloudTrail) to proactively monitor large and complex deployments.
- Experience analyzing, re-architecting, and re-platforming on-premises data warehouses to data platforms on AWS.
- Comfortable building and optimizing performant data pipelines in Python, covering data ingestion, cleansing, and curation into a data warehouse, database, or other data platform.
- Solid understanding of data structures and algorithms.
- Highly proficient with Python core libraries and object-oriented programming concepts.
- Experience working with REST APIs.
- Maintenance and optimization of existing processes.
- Experience writing production-ready Python code and tests; participation in code reviews to maintain and improve code quality, stability, and supportability.
- Experience designing data warehouses/data marts.
- Experience with any RDBMS, preferably SQL Server; must be able to write complex SQL queries.
- Lead client calls to flag delays, blockers, and escalations, and to collate requirements.
- Expertise in requirements gathering and in writing technical design and functional documents.
- Experience with Agile/Scrum practices.
- Experience leading other developers and guiding them technically.
- Experience deploying data pipelines using an automated CI/CD approach.
- Ability to write modular, reusable code components.
- Proficient at identifying data issues and anomalies during analysis.
- Strong analytical and logical skills.
- Comfortable tackling new challenges and learning.
- Strong verbal and written communication skills.
Employment Category:
Employment Type: Full time
Industry: IT - Software
Role Category: Application Programming / Maintenance
Functional Area: Not Applicable
Role/Responsibilities: Python SQL Consultant