Job Description
Position Title: Senior Software Developer, Data Platforms
Position Summary:
The enterprise data platforms team requires a senior software developer to help evolve the Enterprise Data Lake, Operational Database, Data Quality/Testing/Validation, and related big data or cloud initiatives. This individual will contribute to the success of both our product and data teams by being flexible and responsive to project requests while delivering high-quality solutions and services that support new demands for timely, integrated, and reliable data, all while integrating with and preserving critical legacy components of the architecture.
As a developer on this team, you will collaborate with cross-functional teams across the organization. You will be responsible for code and various data management activities that meet project and organizational requirements through collaboration with application developers, data and solution architects, DevOps engineers, scrum masters, product owners, QA, technical directors, and account managers. The successful candidate will be adept at working in a dynamic environment, know how to deal with ambiguity, and have the determination to learn, troubleshoot code, and complete tasks that span multiple large distributed database systems.
Performance Objectives:
- Develop an understanding of the Tally platform and Operational/Analytics needs within 30 days.
- Understand the software stack as a whole: the systems and data architecture, ETL processes and frameworks, and maintenance practices.
- Understand application interactions with the database.
- Understand the project's scope in order to identify when requirements are out of scope and require a change order.
- Understand ICF Next's documented source control and branch management guidelines.
- Demonstrate understanding of existing streaming/ingest pipelines and tenant implementation of Data Lake zones.
- Help drive and maintain a high quality standard for data architecture and database code changes within Microsoft SQL Server, the Hadoop/Big Data ecosystem, Elasticsearch, and the cloud.
- Perform all phases of data engineering, including requirements analysis, application design, code development, and testing.
- Use previous experience to evolve the data platform and expand use cases within Hadoop services and the Enterprise Data Lake across multiple functional tenant needs.
- Respect, implement, and manage strong data governance and security practices, especially with respect to the Data Lake.
- Adhere to ICF Next's security practices and perform regular security reviews throughout the project life cycle.
- Estimate engineering work effort and effectively identify and prioritize the high impact tasks.
- Troubleshoot production support issues and identify solutions as required to back up the team on operational activities.
- Ensure code is efficient and optimized for best performance.
- Ensure that objects are modeled appropriately.
- Review and test code changes in lower environments.
- Lead and evangelize good testing and automation practices that foster a quality-control mindset among new and existing team members.
- Understand, manage, and troubleshoot jobs and monitoring software while contributing scripts that improve the predictability of system, database, or Hadoop cluster health.
- Effectively work with the team and the team workflow toolset to manage communication, status, issues, and code quality.
- Have basic experience with Atlassian tools such as SourceTree/Bitbucket, JIRA, and Confluence.
- Create and update JIRA tickets with enough information for the development team to estimate and resolve issues in a timely manner.
- Competently use version control (Git) to manage topic branches and pull requests.
- Follow ICF Next's documented source control and branch management guidelines.
- Review code and provide feedback relative to best practices and improving performance.
- Follow build and automation practices to support continuous integration and improvement.
Required Qualifications:
- 7+ years of experience with SQL Server (T-SQL) versions 2016+ in medium-large database implementations or comparable RDBMS.
- Demonstrated success migrating data between disparate systems.
- Unique ability to rapidly adopt the latest data engineering and testing automation technologies in support of the data architecture.
- Ability to mentor and share knowledge effectively to help expand expertise throughout the organization.
- Ability to lead and coordinate remote development teams.
- Experience with at least some Big Data and NoSQL technologies, such as Hadoop/HDFS, Sqoop, Hive, Pig, Kafka, Storm, and Spark.
- Knowledge of Hadoop file formats (e.g., Avro, Parquet, ORC) and their applicable use cases.
- Champion the enforcement of data management and engineering best practices, with a keen focus on Data Lake organization and the management of disparate data sources for intake, integration/aggregation, and consumption of the Data Lake.
- Strong commitment toward preservation of data lineage, quality and integrity.
- Ability to drive positive development testing standards and practices within the team to include unit testing, integration testing, and performance testing as needed to ensure optimal data management, governance and delivery.
- Understanding of OLTP, OLAP/Data Warehouse (star schema) and mixed workloads and when to best implement each as an architectural pattern.
- High level of competency writing SQL queries and with relational database modeling and design.
- A solid grasp of the Git version control system.
- Ability to learn and expand the use of PowerShell to manage and monitor databases.
- Ability to apply quality assurance principles to the data engineering and architecture disciplines.
- Experience with Elasticsearch and its integration with SQL Server or the Hadoop ecosystem and components.
- Basic knowledge of SQL Server Integration Services (SSIS) and SQL Server Reporting Services (SSRS).
Bonus Qualifications:
- Scaled Agile Framework (SAFe)
- Cloud Data (especially AWS)
- Python/Scala/Java, Spark
- Analytics/Data Science
- Additional SQL RDBMS experience (PostgreSQL, MySQL, etc.)
- Additional document-store experience (MongoDB, etc.)
Job Classification
Industry: Recruitment, Staffing
Functional Area: IT Software - Application Programming, Maintenance
Role Category: Programming & Design
Role: Programming & Design
Employment Type: Full time
Education
Under Graduation: Any Graduate in Any Specialization
Post Graduation: Any Postgraduate in Any Specialization
Doctorate: Any Doctorate in Any Specialization
Contact Details:
Company: ICF Consulting India
Location(s): Bengaluru
Keyskills:
Java
Global Marketing
Software Development
NoSQL
Git
PostgreSQL
MySQL
Kafka
MS SQL Server
MongoDB