Job Description
Senior Software Engineer - Lead Data Platform Engineer
Location: Dublin, Ireland
Type: Permanent, Full-Time
Our client is looking for a Lead Data Platform Engineer with a proven track record of driving product success from an engineering perspective. The ideal candidate has strong experience building scalable, reliable data pipelines using Databricks and Spark. You will work with a variety of data sources and formats, transforming them into valuable insights for the business.
Responsibilities
- Holistic Learning: Dive into the intricacies of our platform, mastering its ins and outs.
- Strategic Contributions: Design, develop, and maintain data pipelines using Databricks and Spark, and other cloud technologies as needed.
- Hands-On Contribution: Tackle challenging tasks as a hands-on contributor, moving seamlessly between the various facets of this dynamic role throughout the workday.
- Optimize data pipelines for performance, scalability, and reliability.
- Ensure data quality and integrity throughout the data lifecycle.
- Collaborate with data scientists, analysts, and other stakeholders to understand and meet their data needs.
- Troubleshoot and resolve data-related issues and provide root cause analysis and recommendations.
- Document data pipeline specifications, requirements, and enhancements, and communicate them effectively to the team and management.
- Create new data validation methods and data analysis tools and share best practices and learnings with the data engineering community.
- Implement ETL processes and data warehouse solutions and ensure compliance with data governance and security policies.
- Be a proactive and enthusiastic member of our engineering community, always looking to scale learnings amongst your team, local colleagues, and wider company.
- Be passionate about guiding colleagues and growing as a team.
Requirements
- Bachelor's degree in Computer Science, Engineering, or related field, or equivalent work experience
- Strong background in Computer Science fundamentals, including algorithms, data structures, computational complexity, and distributed computing concepts such as partitioning and Z-ordering.
- 10+ years of experience in software development, including designing, developing, and maintaining data pipelines.
- Expert knowledge of either Databricks or Spark
- Expert Java / Python skills
- Proficient in SQL
- Expert understanding of concurrency and partitioning in a data lakehouse
- Solid understanding of various approaches to data storage
- Knowledge of design patterns and software development best practices, with a good understanding of microservices and technologies such as Spring Boot.
- Solid understanding of test-driven development and familiarity with best-of-breed tools and technologies
- Experienced in developing and consuming RESTful APIs
- Experience with cloud platforms, preferably Azure
- Experience with data warehouse and data lake concepts and architectures
- Experience with data integration and ETL / ELT tools, preferably Azure Data Factory
- Strong analytical and problem-solving skills
- Excellent communication and teamwork skills
- Experience delegating and guiding the work of junior and mid-level engineers.
Morgan McKinley is acting as an Employment Agency and references to pay rates are indicative.
BY APPLYING FOR THIS ROLE YOU ARE AGREEING TO OUR TERMS OF SERVICE WHICH TOGETHER WITH OUR PRIVACY STATEMENT GOVERN YOUR USE OF MORGAN MCKINLEY SERVICES.