Job Overview:
We are seeking a skilled Snowflake Engineer with 5 to 7 years of experience to join our team. The ideal candidate will play a key role in developing, implementing, and optimizing data solutions on the Snowflake cloud platform, and should bring strong expertise in Snowflake and PySpark, a solid understanding of ETL processes, and proficiency in data engineering and data processing technologies.
This role is essential for designing and maintaining high-performance data pipelines and data warehouses, focusing on scalability and efficient data storage, with a specific emphasis on transforming data using PySpark.
Key Responsibilities:
Snowflake Data Warehouse Development:
o Design, implement, and optimize data warehouses on the Snowflake cloud platform (an illustrative setup sketch follows).
o Ensure effective use of Snowflake's features for scalable, efficient, and high-performance data storage and processing.
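By way of illustration, here is a minimal sketch of this kind of warehouse and table setup using the snowflake-connector-python package. The account identifier, credentials, and the ETL_WH and SALES_FACT names are placeholders, not details of this role's actual environment:

import snowflake.connector

# Placeholder credentials; in practice these would come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()

# Size the virtual warehouse for the workload and let it suspend when idle,
# so compute is not billed while the pipeline is quiet.
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS ETL_WH
        WAREHOUSE_SIZE = 'MEDIUM'
        AUTO_SUSPEND = 300
        AUTO_RESUME = TRUE
""")

# Cluster a large fact table on common filter columns so Snowflake can
# prune micro-partitions at query time.
cur.execute("""
    CREATE TABLE IF NOT EXISTS SALES_FACT (
        SALE_ID   NUMBER,
        SALE_DATE DATE,
        REGION    VARCHAR,
        AMOUNT    NUMBER(12, 2)
    )
    CLUSTER BY (SALE_DATE, REGION)
""")

cur.close()
conn.close()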
Data Pipeline Development:
o Develop, implement, and optimize end-to-end data pipelines on the Snowflake platform.
o Design and maintain ETL workflows to enable seamless data processing across systems (a sketch of the load step follows).
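To make the load step of such a workflow concrete, here is an illustrative sketch using snowflake-connector-python again; the local file path, stage name, and table name are hypothetical:

import snowflake.connector

# Placeholder credentials, as in the previous sketch.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()

# Extract: upload a local extract file to a named internal stage.
# PUT gzip-compresses the file on upload by default.
cur.execute("CREATE STAGE IF NOT EXISTS RAW_STAGE")
cur.execute("PUT file:///tmp/sales_extract.csv @RAW_STAGE")

# Load: COPY INTO the target table, skipping the header row.
cur.execute("""
    COPY INTO SALES_FACT
    FROM @RAW_STAGE
    FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
""")

cur.close()
conn.close()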
Data Transformation with PySpark:
o Leverage PySpark for data transformations within the Snowflake environment (see the PySpark sketch below).
o Implement complex data cleansing, enrichment, and validation processes using PySpark to ensure high data quality.
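A minimal PySpark sketch of such a cleansing and enrichment pass, assuming the Snowflake Spark connector (net.snowflake.spark.snowflake) is available on the Spark classpath; the connection options and table names are placeholders:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales_cleansing").getOrCreate()

# Placeholder connection options for the Snowflake Spark connector.
sf_options = {
    "sfURL": "my_account.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}

# Read the raw table from Snowflake.
raw = (
    spark.read.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "SALES_FACT_RAW")
    .load()
)

# Cleanse and validate: drop duplicate keys, reject rows with missing or
# non-positive amounts, then enrich with a derived year column.
clean = (
    raw.dropDuplicates(["SALE_ID"])
    .filter(F.col("AMOUNT").isNotNull() & (F.col("AMOUNT") > 0))
    .withColumn("SALE_YEAR", F.year(F.col("SALE_DATE")))
)

# Write the validated result back to Snowflake.
(
    clean.write.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "SALES_FACT_CLEAN")
    .mode("overwrite")
    .save()
)

The connector supports pushing filters and projections down into Snowflake, so simple cleansing rules of this kind often execute close to the data.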
Collaboration:
o Work closely with cross-functional teams to design data solutions aligned with business requirements.
o Engage with stakeholders to understand business needs and translate them into technical solutions.
Optimization:
o Continuously monitor and optimize data storage, processing, and retrieval performance in Snowflake (a monitoring sketch follows).
o Leverage Snowflake's scalable storage and processing capabilities to ensure efficient performance.
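One illustrative way to approach such monitoring is to inspect clustering quality with Snowflake's SYSTEM$CLUSTERING_INFORMATION function and surface slow queries from the ACCOUNT_USAGE.QUERY_HISTORY view; credentials and table names below are placeholders:

import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="ops_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()

# Check how well the clustering key prunes micro-partitions on a large table.
cur.execute(
    "SELECT SYSTEM$CLUSTERING_INFORMATION('SALES_FACT', '(SALE_DATE, REGION)')"
)
print(cur.fetchone()[0])

# Surface the ten slowest queries from the last 7 days of account usage history.
cur.execute("""
    SELECT query_text, total_elapsed_time / 1000 AS seconds
    FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
    WHERE start_time > DATEADD('day', -7, CURRENT_TIMESTAMP())
    ORDER BY total_elapsed_time DESC
    LIMIT 10
""")
for query_text, seconds in cur.fetchall():
    print(f"{seconds:8.1f}s  {query_text[:80]}")

cur.close()
conn.close()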
Required Qualifications:
Experience:
o 5 to 7 years of experience as a Data Engineer, with a strong emphasis on Snowflake.
o Proven experience in designing, implementing, and optimizing data warehouses on the Snowflake platform.
o Expertise in PySpark for data processing and analytics.
Technical Skills:
o Snowflake: Strong knowledge of Snowflake architecture, features, and best practices for data storage and performance optimization.
o PySpark: Proficiency in PySpark for data transformation, cleansing, and processing within the Snowflake environment.
o ETL: Experience with ETL processes to extract, transform, and load data into Snowflake.
o Programming Languages: Proficiency in Python, SQL, or Scala for data processing and transformations.
Data Modeling:
o Experience with data modeling techniques and designing efficient data schemas for optimal performance in Snowflake.
Skills:
Snowflake, PySpark, SQL, ETL

Carnation Infotech is a sister company of Cardinal Technology Inc., a New Jersey-based IT and staffing consulting services firm serving Fortune 1000 organizations. We serve large Fortune 1000 corporations such as Johnson & Johnson, Opko Health Inc., AT&T, NBC, and Bed Bath & Beyond, among others.