Job Description
We are seeking a skilled Data Engineer with 5+ years of experience to design, build, and maintain scalable data pipelines and perform advanced data analysis to support business intelligence and data-driven decision-making.
The ideal candidate will have a strong foundation in computer science principles, extensive experience with SQL and big data tools, and proficiency in cloud platforms and data visualization tools.
Experience: 5+ years

Key Responsibilities:
Design, develop, and maintain robust, scalable ETL pipelines using Apache Airflow, DBT, Composer (GCP), Control-M, Cron, Luigi, and similar tools
Build and optimize data architectures, including data lakes and data warehouses
Integrate data from multiple sources, ensuring data quality and consistency
Collaborate with data scientists, analysts, and stakeholders to translate business requirements into technical solutions
Analyze complex datasets to identify trends, generate actionable insights, and support decision-making
Develop and maintain dashboards and reports using Tableau, Power BI, and Jupyter Notebooks for visualization and pipeline validation
Manage and optimize relational and NoSQL databases such as MySQL, PostgreSQL, Oracle, MongoDB, and DynamoDB
Work with big data tools and frameworks including Hadoop, Spark, Hive, Kafka, Informatica, Talend, SSIS, and Dataflow
Utilize cloud data services and warehouses such as AWS Glue, GCP Dataflow, Azure Data Factory, Snowflake, Redshift, and BigQuery
Support CI/CD pipelines and DevOps workflows using Git, Docker, Terraform, and related tools
Ensure data governance, security, and compliance standards are met
Participate in Agile and DevOps processes to enhance data engineering workflows
Required Qualifications:
5+ years of professional experience in data engineering and data analysis roles
Strong proficiency in SQL and experience with database management systems such as MySQL, PostgreSQL, Oracle, and MongoDB
Hands-on experience with big data tools such as Hadoop and Apache Spark
Proficiency in Python programming
Experience with data visualization tools such as Tableau, Power BI, and Jupyter Notebooks
Proven ability to design, build, and maintain scalable ETL pipelines using tools such as Apache Airflow, DBT, Composer (GCP), Control-M, Cron, and Luigi
Familiarity with data engineering tools including Hive, Kafka, Informatica, Talend, SSIS, and Dataflow
Experience working with cloud data warehouses and services (Snowflake, Redshift, BigQuery, AWS Glue, GCP Dataflow, Azure Data Factory)
Understanding of data modeling concepts and data lake/data warehouse architectures
Experience supporting CI/CD practices with Git, Docker, Terraform, and DevOps workflows
Knowledge of both relational and NoSQL databases, including PostgreSQL, BigQuery, MongoDB, and DynamoDB
Exposure to Agile and DevOps methodologies
Experience with Amazon Web Services (S3, Glue, Redshift, Lambda, Athena)
Preferred Skills
Strong problem-solving and communication skills
Ability to work independently and collaboratively in a team environment
Experience with service development, REST APIs, and automation testing is a plus
Familiarity with version control systems and workflow automation

We Offer
Opportunity to work on bleeding-edge projects
Work with a highly motivated and dedicated team
Competitive salary
Flexible schedule
Benefits package: medical insurance, sports
Corporate social events
Professional development opportunities
Well-equipped office
About Us
Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services.
Fusing technical vision with business acumen, we solve the most pressing technical challenges and enable positive business outcomes for enterprise companies undergoing business transformation.
A key differentiator for Grid Dynamics is our 8 years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization, and customer experience.
Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.
Job Classification
Industry: IT Services & Consulting
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Engineer
Employment Type: Full time
Contact Details:
Company: Grid Dynamics
Location(s): Hyderabad
Keyskills:
python
computer science
data integration tools
design
cloud platforms
big data
communication skills