Role - Lead Consultant
Technology - Data Engineering & Data Analytics, Energy (Oil & Gas) domain
Location - London/Sunbury, UK
Business Unit - SURENRGY
Compensation - Competitive (including bonus)
Your role
As a Lead Consultant, you will be involved in the entire software development lifecycle, from conception to completion. You will design, implement, and document data architecture and data modelling solutions using relational, dimensional, and NoSQL databases. These solutions support enterprise information management, business intelligence, machine learning, data science, and other business interests.
Your responsibilities will include:
- Implement business and IT data requirements through new data strategies and designs across all data platforms (relational, dimensional, and NoSQL) and data tools (reporting, visualization, analytics, and machine learning).
- Work with business and application/solution teams to implement data strategies, build data flows, and develop conceptual/logical/physical data models.
- Define and govern data modelling and design standards, tools, best practices, and related development for enterprise data models.
- Identify the architecture, infrastructure, and interfaces to data sources, tools supporting automated data loads, security concerns, analytic models, and data visualization.
- Take a hands-on role in modelling, design, configuration, installation, performance tuning, and sandbox POCs.
- Work proactively and independently to address project requirements and articulate issues/challenges to reduce project delivery risks.
- Design and document solutions to ensure maintainability and conduct reviews to ensure solutions follow best practices and solve the business problem.
- Work collaboratively with delivery teams including users, data scientists, statisticians, and analysts to develop data solutions and pipelines that meet their analytical and reporting needs.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Optimize the performance of corporate data platforms and services to ensure that they can handle large volumes of data and support real-time processing and analysis.
- Own and manage the end-to-end data platform, including data ingestion and processing while helping determine appropriate storage policies and access management processes.
- Contribute to the emergent architecture with the Solution Architect and fellow team members.
- Develop the code and necessary tests to build quality functionalities as per the Acceptance Criteria.
- Build the necessary DevOps capabilities, such as Continuous Integration (CI), Continuous Deployment (CD), and Continuous Testing (CT).
- Participate in demonstrations of the product to the Product Owners during the Iteration review and System Demo.
- Design and build a metadata-driven, event-based distributed data processing platform using technologies such as Python, Airflow, Redshift, DynamoDB, AWS Glue, and S3 (see the illustrative sketch after this list).
- Develop a comprehensive understanding of the organization's data structures and metrics, advocating changes for product development.
- Build and optimize data ingestion pipelines using AWS services such as Lambda, Kinesis, and Glue.
- Develop, maintain, and optimize transformation pipelines within our Delta Lake on Databricks.
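To give a flavour of the work described above, here is a minimal sketch of a metadata-driven Airflow DAG that runs an AWS Glue curation job and a Redshift load per source table. It assumes Airflow 2.x with the apache-airflow-providers-amazon package installed; the Glue job names, S3 bucket, connection ID, and PIPELINES metadata are hypothetical placeholders, not a prescribed implementation.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
from airflow.providers.amazon.aws.transfers.s3_to_redshift import S3ToRedshiftOperator

# Hypothetical pipeline metadata; in practice this could be read from
# DynamoDB or another config store so the DAG stays metadata-driven.
PIPELINES = [
    {"source": "well_production", "glue_job": "curate_well_production"},
    {"source": "drilling_events", "glue_job": "curate_drilling_events"},
]

with DAG(
    dag_id="metadata_driven_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    for p in PIPELINES:
        # Run the Glue job that curates raw data into Parquet on S3.
        curate = GlueJobOperator(
            task_id=f"curate_{p['source']}",
            job_name=p["glue_job"],
            region_name="eu-west-2",
        )
        # COPY the curated Parquet files from S3 into Redshift.
        load = S3ToRedshiftOperator(
            task_id=f"load_{p['source']}",
            schema="analytics",
            table=p["source"],
            s3_bucket="example-curated-bucket",
            s3_key=f"curated/{p['source']}/",
            copy_options=["FORMAT AS PARQUET"],
            redshift_conn_id="redshift_default",
        )
        curate >> load
```

The same pattern extends naturally: the metadata record can carry schedule, schema, and data-quality rules, so a new source is onboarded by adding a record rather than writing a new DAG.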
Required
- Strong understanding of the Energy (Oil & Gas) industry and hands-on experience with its domain applications.
- Strong experience in a data engineering role, with extensive understanding and practical experience across development (AWS, CI/CD), testing, and DevOps.
- Hands-on relational, dimensional, and/or analytic experience (using RDBMS, dimensional, and NoSQL data platform technologies, and ETL and data ingestion protocols).
- Experience with data warehouse, data lake, and enterprise big data platforms in multi-data-center contexts.
- Good knowledge of metadata management, data modelling, and related tools (Erwin, ER Studio, or similar).
- Experience with AWS Glue, AWS Redshift/Spectrum, S3, API Gateway, Athena, and Step and Lambda functions.
- Experience with ETL and ELT data integration patterns, and with designing and building data pipelines.
- End-to-end responsibility for project architecture/design and operations.
- Development experience in one or more object-oriented programming languages (e.g. C++, C#, Java, Python, Golang, PowerShell, Ruby), preferably Python.
- Databases: PostgreSQL, Microsoft SQL Server, Oracle.
- Experience designing and implementing large-scale distributed systems.
- Use of Azure DevOps for backlog, wiki, code repositories and CI/CD pipelines.
Personal
- Strong analytical skills.
- A high degree of initiative and flexibility.
- Strong customer orientation.
- Strong quality awareness.
- Excellent verbal and written communication skills.
About Us
Infosys is a global leader in next-generation digital services and consulting. We enable clients in more than 50 countries to navigate their digital transformation. With over four decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives our clients' continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem.
All aspects of employment at Infosys are based on merit, competence and performance. We are committed to embracing diversity and creating an inclusive environment for all employees. Infosys is proud to be an equal opportunity employer.