Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Strong hands-on experience with AWS and PySpark.
Knowledge of AWS services, including but not limited to S3, Redshift, Athena, EMR, and Glue.
Proficiency in PySpark and related Big Data technologies for ETL processing.
Strong SQL skills for data manipulation and querying.
Familiarity with data warehousing concepts and dimensional modeling.
Experience with data governance, data quality, and data security practices.
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration skills to work effectively with cross-functional teams.
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Engineer
Employment Type: Full time