Data Engineer

Job Summary

  • Dublin
  • Contract
  • JN-022023-1925607
  • Feb 14, 2023
  • Competitive
Job Description

Our client, a leading multinational technology company, is looking for an experienced Data Engineer to join their team on an 11-month contract.

This role involves working with large sets of data. We are looking for someone who is passionate about and highly skilled in SQL, C++, and working with complex data algorithms.

Overall Responsibilities:

  • Independently execute end-to-end tasks that are integrated into a large overarching project.
  • Participate in data product, infrastructure, and solution design and development, with minimal assistance.
  • Test and validate completed work to ensure high quality, with most or all of the hallmarks of a fleshed-out developer artifact.
  • Plan, manage, and execute prioritized project work, making progress independently without supervision by selecting appropriate methods to most effectively achieve project objectives.

Challenge:

  • Identify and fix problems with defined requirements and recommend creative ways to improve on solutions via selection of better methods or tools.
  • Address commonly escalated issues or triage when required, in a timely manner.
  • Help team course-correct and fix problems with known solutions.
  • Accommodate complex problem constraints (e.g., type of data, source, target interfaces, tech stack, data volume, change management of data, required frequency, required latency) and extract, transform, and load (ETL) needs.

Influence:

  • Work within one or more teams to communicate knowledge related to a broad set of tasks.
  • Identify key stakeholders to build networks, contribute to cross-team collaborations, and proactively spot and suggest areas of future work for the team.
  • Coordinate timelines, goals, and objectives for assigned component(s) of a project.
  • Partner across teams to define and implement solutions that improve internal business processes.
  • Demonstrate working knowledge of core data and role-related skills, company-wide technologies and methods, and translating business requirements/needs into technical data solutions, as well as mastery of one major skill outside of core coding (e.g., monitoring, documentation, integration testing).
  • Apply direct experience and extrapolate knowledge to various hypothetical business situations and challenges.

Responsibilities under the direction of Line Manager:

Consult with users, partners, or decision makers

  • Consult with users, partners, or decision makers to identify data sources, required data elements, or data validation standards, with minimal guidance.
  • Consult with application engineers to understand and influence logging/transactional storage.
  • Consult with data scientists on ML training and feature engineering for ML models.

Design & implement data engineering solutions

  • Design and implement business solutions and infrastructure to build and scale common frameworks for use with minimal guidance.
  • Partner with upstream data providers to establish new source data pipelines. Follow and improve upon the local technical best practices, including making data discoverable, thinking about the lifecycle of data, and managing master data well.
  • Design, build, operationalize, secure, and monitor data processing systems with a particular emphasis on security and compliance, scalability and efficiency, reliability and fidelity, and flexibility and portability.
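
Purely as an illustration of the kind of pipeline work described above (the dataset, paths, and column names below are hypothetical, not the client's actual stack), a minimal PySpark sketch might look like:

    # Minimal illustrative PySpark pipeline: ingest raw events from an
    # upstream provider, apply light cleaning, and publish a partitioned,
    # query-friendly table. Paths and columns are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

    # Extract: read the upstream provider's raw drop.
    raw = spark.read.json("gs://example-upstream/orders/raw/")

    # Transform: enforce types, drop obviously bad rows, derive a partition key.
    clean = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("order_id").isNotNull())
           .withColumn("dt", F.to_date("order_ts"))
    )

    # Load: publish a partitioned Parquet table for downstream consumers.
    (clean.write
          .mode("overwrite")
          .partitionBy("dt")
          .parquet("gs://example-warehouse/orders/clean/"))

    spark.stop()

In practice the source format, orchestration, and storage targets would follow the team's existing frameworks and best practices.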

Develop data models, pipelines, and exchange formats

  • Develop and maintain data models, pipelines, and exchange formats to assist in the visualization, analysis, and interpretation of data, and for use of data in ML training and models, with some guidance.

Machine learning

  • Leverage, deploy, and continuously train pre-existing machine learning models. Learn and implement new storage/MPP systems/ML serving systems, with minimal guidance.
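
As a hedged illustration of continuously training a pre-existing model (the model file, data source, features, and label below are hypothetical), a single retraining pass might be sketched as:

    # Illustrative retraining pass for a persisted scikit-learn classifier.
    # File names, the feature set, and the label column are hypothetical.
    import joblib
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    # Load the previously trained model and the latest batch of labelled data.
    model = joblib.load("churn_model.joblib")
    batch = pd.read_parquet("labelled_batch.parquet")
    X, y = batch.drop(columns=["churned"]), batch["churned"]

    # Check how the existing model does on the new batch before refitting.
    prior_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(f"AUC on new batch before refit: {prior_auc:.3f}")

    # Retrain on the new batch and persist the updated model.
    model.fit(X, y)
    joblib.dump(model, "churn_model.joblib")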

Perform preliminary data analysis

  • Perform data analysis and profiling with minimal guidance, utilizing relevant tools, custom data infrastructure, or existing data models. Work with clients to understand their needs and clarify requirements, or direct them to more suitable channels. Enable data-driven decision-making by collecting, transforming, and publishing data.
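
As an indicative sketch of this kind of preliminary profiling (the file and columns below are hypothetical), a first pass over a new dataset might look like:

    # Quick, illustrative profiling pass over a new dataset with pandas.
    # The file name and columns are hypothetical.
    import pandas as pd

    df = pd.read_parquet("new_source_extract.parquet")

    # Shape, types, and completeness.
    print(df.shape)
    print(df.dtypes)
    print(df.isna().mean().sort_values(ascending=False))  # null rate per column

    # Cardinality and simple distributions, to spot candidate keys and skew.
    print(df.nunique().sort_values())
    print(df.describe(include="all").transpose())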

Provide ongoing support

  • Provide ongoing support for data users through maintenance of reports, queries, and dashboards with minimal guidance. Execute on plans for dependency changes (e.g., system migrations).

Visualization

  • Create, or consult on the creation of, data visualizations and visualization features using internal BI tools such as PLX and Data Studio and external tools such as Tableau and Looker, with minimal guidance.

Skill/Experience/Education

  • Big data infrastructure:

Knowledge of the trade-offs to consider in choosing storage systems and ability to identify the appropriate storage technology available to the company.

  • Code comprehension and programming skills:

Ability to code in one or more core languages, produce readable code structure, review code, and leverage code as a resource for appropriate control and data structures.

  • Data exploration:

Ability to make meaningful sense of data models by performing exploratory queries and scripts.

  • Data pipeline (ETL) design and data modeling:

Ability to design data pipelines and dimensional data models for synchronous and asynchronous system integration, and to implement them using internal (e.g., PLX, Flume) and external stacks (e.g., DataFlow, Spark); a brief illustrative sketch appears after this skills list.

  • Information gathering skills:

Ability to gather and focus (separate important from irrelevant) information, determine the right questions to ask to fill information gaps, and apply the information collected.

  • Machine learning knowledge:

Knowledge of machine learning coding (e.g., computer algorithms that enable automatic improvement of computer programs through experience), and working knowledge of machine learning tasks including data preparation for ML (e.g., feature engineering), choice of model and performance metrics, model evaluation and parameter tuning, and managing variance/bias trade-offs.

  • Project management:

Knowledge of project planning methodology, the deployment planning process, and sequencing. This includes the ability to understand and manage smaller parts or phases of a larger project, and to translate project requirements into a plan that can be communicated clearly and implemented.

  • Stakeholder management:

Ability to create positive relationships with stakeholders through the appropriate management of expectations and agreed objectives, building trust, collaboration, and rapport with different stakeholders and businesses. This includes being approachable, engaged, and authentic, and relating well to people regardless of personality or background.

  • Statistics & BI tools:

Knowledge of statistical methodology and data consumption tools such as business intelligence tools, Colab and Jupyter notebooks, Tableau, and Power BI, as well as internal tools such as Data Studio and PLX dashboards.
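
To illustrate the dimensional-modeling point referenced under "Data pipeline (ETL) design and data modeling" above, here is a minimal, hypothetical sketch (table and column names invented for illustration) that splits a flat extract into a dimension and a fact table:

    # Illustrative star-schema split of a flat orders extract with pandas.
    # Table and column names are invented for illustration only.
    import pandas as pd

    orders = pd.read_parquet("flat_orders_extract.parquet")

    # Dimension: one row per customer, with a surrogate key.
    dim_customer = (
        orders[["customer_id", "customer_name", "country"]]
        .drop_duplicates(subset="customer_id")
        .reset_index(drop=True)
    )
    dim_customer["customer_key"] = dim_customer.index + 1

    # Fact: measures plus a foreign key into the dimension.
    fact_orders = orders.merge(
        dim_customer[["customer_id", "customer_key"]], on="customer_id"
    )[["order_id", "customer_key", "order_date", "amount"]]

    dim_customer.to_parquet("dim_customer.parquet", index=False)
    fact_orders.to_parquet("fact_orders.parquet", index=False)

Splitting measures from descriptive attributes in this way keeps fact tables narrow and makes dimensions reusable across pipelines.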

Morgan McKinley is acting as an Employment Agency and references to pay rates are indicative.

BY APPLYING FOR THIS ROLE YOU ARE AGREEING TO OUR TERMS OF SERVICE WHICH TOGETHER WITH OUR PRIVACY STATEMENT GOVERN YOUR USE OF MORGAN MCKINLEY SERVICES.
