Job Description
Job Summary

Synechron is seeking a highly skilled and proactive Data Engineer to join our dynamic data analytics team. In this role, you will be instrumental in designing, developing, and maintaining scalable data pipelines and solutions on the Google Cloud Platform (GCP). You will enable data-driven decision-making, contribute to strategic business initiatives, and help ensure a robust data infrastructure. This position offers the opportunity to work in a collaborative environment with a focus on innovative technologies and continuous growth.
Software Requirements

Required:
Proficiency in data engineering tools and frameworks such as Hive, Apache Spark, and Python (version 3.x)
Extensive experience with Google Cloud Platform (GCP) offerings, including Dataflow, BigQuery, Cloud Storage, and Pub/Sub
Familiarity with Git, Jira, and Confluence for version control and collaboration

Preferred:
Experience with additional GCP services such as Dataproc, Data Studio, or Cloud Composer
Exposure to other programming languages such as Java or Scala
Knowledge of data security best practices and tools

Overall Responsibilities

Design, develop, and optimize scalable data pipelines on GCP to support analytics and reporting needs
Collaborate with cross-functional teams to translate business requirements into technical solutions
Build and maintain data models, ensuring data quality, integrity, and security
Participate actively in code reviews, adhering to best practices and standards
Develop automated, efficient data workflows to improve system performance
Stay current with emerging data engineering trends and continuously improve technical skills
Provide technical guidance and support to team members, fostering a collaborative environment
Ensure timely delivery of deliverables aligned with project milestones

Technical Skills (By Category)

Programming Languages:
Essential: Python
Preferred: Java, Scala

Data Management & Databases:
Experience with Hive, BigQuery, and relational databases
Knowledge of data warehousing concepts and proficiency in SQL

Cloud Technologies:
Extensive hands-on experience with GCP services, including Dataflow, BigQuery, Cloud Storage, Pub/Sub, and Cloud Composer
Ability to build and optimize data pipelines leveraging GCP offerings (see the illustrative sketch after the Qualifications section below)

Frameworks & Libraries:
Spark (PySpark preferred); Hadoop ecosystem experience is advantageous

Development Tools & Methodologies:
Agile/Scrum methodologies, version control with Git, project tracking with Jira, documentation in Confluence

Security Protocols:
Understanding of data security, privacy, and compliance standards

Experience Requirements

Minimum of 6-8 years in data or software engineering roles with a focus on data pipeline development
Proven experience designing and implementing data solutions on cloud platforms, particularly GCP
Prior experience working in agile teams, participating in code reviews, and delivering end-to-end data projects
Experience working with cross-disciplinary teams and understanding varied stakeholder requirements
Exposure to industry best practices for data security, governance, and quality assurance is desired

Day-to-Day Activities

Attend daily stand-up meetings and contribute to project planning sessions
Collaborate with business analysts, data scientists, and other stakeholders to understand data needs
Develop, test, and deploy scalable data pipelines, ensuring efficiency and reliability
Perform regular code reviews, provide constructive feedback, and uphold coding standards
Document technical solutions and maintain clear records of data workflows
Troubleshoot and resolve technical issues in data processing environments
Participate in continuous learning initiatives to stay abreast of technological developments
Support team members by sharing knowledge and resolving technical challenges

Qualifications

Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
Relevant professional certifications in GCP (such as Google Cloud Professional Data Engineer) are preferred but not mandatory
Demonstrable experience in data engineering and cloud technologies
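Illustrative sketch (not part of the formal requirements): as a rough example of the kind of pipeline work described above, the snippet below shows a minimal PySpark job that reads a BigQuery table and writes a daily aggregate to Cloud Storage. All project, dataset, table, column, and bucket names are hypothetical placeholders, and it assumes the spark-bigquery connector is available (bundled on Dataproc Serverless, or supplied via --jars on a Dataproc cluster).

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical job name; on Dataproc this session picks up cluster config.
spark = (
    SparkSession.builder
    .appName("daily-order-rollup")
    .getOrCreate()
)

# Read source data via the spark-bigquery connector.
# "my-project.sales.orders" is a placeholder table reference.
orders = (
    spark.read.format("bigquery")
    .option("table", "my-project.sales.orders")
    .load()
)

# Aggregate order totals per calendar day.
daily_totals = (
    orders
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("total_amount"))
)

# Write results to Cloud Storage as Parquet (placeholder bucket).
daily_totals.write.mode("overwrite").parquet(
    "gs://my-bucket/rollups/daily_totals/"
)

In practice, a pipeline like this would be parameterized, scheduled (for example, via Cloud Composer), and taken through code review, in line with the responsibilities listed above.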
Professional Competencies

Strong analytical and problem-solving skills, with a focus on outcome-driven solutions
Excellent communication and interpersonal skills to collaborate effectively within teams and with stakeholders
Ability to work independently with minimal supervision and manage multiple priorities effectively
Adaptability to evolving technologies and project requirements
Demonstrated initiative in driving tasks forward and a continuous-improvement mindset
Strong organizational skills with a focus on quality and attention to detail

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT

Diversity and inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture that promotes equality, diversity, and an environment respectful to all. We strongly believe that a diverse workforce helps build stronger, more successful businesses as a global company. We encourage applicants across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.
Job Classification
Industry: Software Product
Functional Area / Department: Engineering - Software & QA
Role Category: Software Development
Role: Data Engineer
Employment Type: Full time
Contact Details:
Company: Synechron
Location(s): Bengaluru
Keyskills:
data engineering
cloud technologies
sql
gcp
data warehousing concepts
hive
python
confluence
data management
scala
pyspark
data warehousing
relational databases
dataproc
java
git
spark
cloud storage
scrum
hadoop
bigquery
agile
dataflow
jira