Job Description
About the Team & Role:
adidas is a leading innovator in the data and analytics space. We specialize in building and managing cutting-edge data lakehouse platforms that empower organizations to extract actionable insights from their data. We are seeking a highly skilled and motivated DevOps Engineer to join our Infrastructure Automation team. If you have extensive experience with Databricks, AWS, Infrastructure as Code (Terraform), CI/CD tools (Jenkins, Bitbucket), and Python and shell scripting, we encourage you to apply for this exciting opportunity to contribute to the growth of our Data Lakehouse platform. You will enjoy working in a dynamic, forward-thinking team.

As a DevOps Engineer within our Infrastructure Automation (Platform Engineering) team in the Data & Analytics department, you will play a crucial role in designing, implementing, and maintaining the infrastructure that underpins our data lakehouse platform. You will work closely with cross-functional teams to streamline design and delivery processes. Your expertise in Databricks, AWS, Terraform, CI/CD tools (Jenkins, Bitbucket), Python, shell scripting, Grafana, and Prometheus will be instrumental in achieving high levels of automation, scalability, and performance.

Must-have tool expertise: AWS, Terraform, CI/CD tools (Jenkins, Bitbucket), Python, Databricks
Good-to-have tool expertise: Shell scripting, Grafana, Prometheus

Key Responsibilities:
Infrastructure Automation: Design, build, and maintain infrastructure automation solutions using Terraform to ensure the robustness and scalability of our data lakehouse platform.
CI/CD Implementation: Develop and enhance CI/CD pipelines using Jenkins and Bitbucket, with a focus on automation and code quality.
Databricks Management: Configure, manage, and optimize Databricks clusters and workspaces to facilitate data processing and analytics.
AWS Cloud Management: Leverage AWS services such as storage, IAM, and network configuration to enhance the performance and scalability of our infrastructure.
Scripting: Create and maintain Terraform, Python, and shell scripts to automate infrastructure provisioning.
Monitoring and Logging: Implement robust monitoring and logging solutions to proactively detect and troubleshoot platform infrastructure issues in support of the relevant operations teams.
Security and Compliance: Ensure the security and compliance of all infrastructure by implementing best practices and adhering to company policies.
Documentation: Maintain comprehensive documentation for all infrastructure and automation processes.
Collaboration: Collaborate with cross-functional teams, including data engineers, data scientists, BI engineers, and Data Lakehouse architects, to understand requirements and engineer the infrastructure for the entire Data Lakehouse platform.

Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field
Minimum of 3 years of hands-on experience with Databricks and AWS services
Expertise in Infrastructure as Code (Terraform)
Strong CI/CD experience with Jenkins and Bitbucket
Proficiency in shell scripting
Understanding of DevOps best practices, including automation, continuous integration, and continuous delivery
Excellent problem-solving and troubleshooting skills
Familiarity with Agile methodologies
Strong communication and teamwork skills
AWS, Databricks, and Terraform certifications are a plus
Employment Category:
Employment Type: Full time
Industry: Others
Role Category: General / Other Software
Functional Area: Not Applicable
Role/Responsibilities: Platform Engineer
Keyskills:
AWS
Jenkins
Bitbucket
Python
Shell scripting
Databricks
Terraform
Grafana
Prometheus