Experience building on AWS using S3, EC2, Redshift, Glue, EMR, DynamoDB, Lambda, QuickSight, etc.
Experience in PySpark/Spark/Scala
Experience using software version control and build tools (Git, Apache Subversion, Jenkins)
AWS certifications or other related professional technical certifications
Experience with cloud or on-premises middleware and other enterprise integration technologies.
Experience in writing MapReduce and/or Spark jobs.
Demonstrated strength in architecting data warehouse solutions and integrating technical components.
Good analytical skills with excellent knowledge of SQL.
8+ years of work experience with very large data warehousing environments
Excellent communication skills, both written and verbal
3+ years of experience with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools.
3+ years of experience with data modelling concepts
3+ years of Python and/or Java development experience
3+ years of experience in Big Data stack environments (EMR, Hadoop, MapReduce, Hive)
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Engineer
Employment Type: Full time