Job Description - Data Engineer
We are seeking a highly skilled Data Engineer with extensive experience in Snowflake, dbt (data build tool), SnapLogic, SQL Server, PostgreSQL, Azure Data Factory, and other ETL tools. The ideal candidate will have a strong ability to optimize SQL queries and a good working knowledge of Python, along with a positive attitude and excellent teamwork skills.
Key Responsibilities:
Data Pipeline Development: Design, develop, and maintain scalable data pipelines using Snowflake, dbt, SnapLogic, and other ETL tools.
SQL Optimization: Write and optimize complex SQL queries to ensure high performance and efficiency.
Data Integration: Integrate data from various sources, ensuring consistency, accuracy, and reliability.
Database Management: Manage and maintain SQL Server and PostgreSQL databases.
ETL Processes: Develop and manage ETL processes to support data warehousing and analytics.
Collaboration: Work closely with data analysts, data scientists, and business stakeholders to understand data requirements and deliver solutions.
Documentation: Maintain comprehensive documentation of data models, data flows, and ETL processes.
Troubleshooting: Identify and resolve data-related issues and discrepancies.
Python Scripting: Use Python for data manipulation, automation, and integration tasks.
Technical Skills:
- Proficiency in Snowflake, dbt, SnapLogic, SQL Server, PostgreSQL, and Azure Data Factory.
- Strong SQL skills with the ability to write and optimize complex queries.
- Knowledge of Python for data manipulation and automation.
- Knowledge of data governance frameworks and best practices.
Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Positive attitude and ability to work well in a team environment.
Certifications: Relevant certifications (e.g., Snowflake, Azure) are a plus.