Hybrid | 7 months | Multinational Tech Client
The ideal candidate will play a key role in designing and developing scalable, high-performance machine learning applications and services. Responsibilities include integrating machine learning models into the client's existing business workflows to make those processes more efficient and robust. You will lead the development and maintenance of internal reusable libraries to streamline and standardize engineering workflows across the team. Additionally, you will be responsible for developing high-load ETL processes and services to handle and process large volumes of data efficiently.
Responsibilities:
- Design and develop scalable, high-performance machine learning applications and services.
- Architect and implement the integration of machine learning models into existing business workflows.
- Lead the implementation of efficient ETL pipelines to process large volumes of data.
- Drive the creation and maintenance of internal reusable libraries to streamline and standardize engineering workflows.
- Promote and enforce high engineering standards in development, testing, deployment, and monitoring across the team.
- Support the professional growth of data engineers through mentoring, coaching, and guidance.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field, with 6+ years of experience in data engineering or a similar role.
- Proven experience in designing and developing ETL pipelines.
- Proficiency in the Python programming language.
- Extensive experience with Airflow for workflow management and scheduling.
- Experience with CI/CD processes, including the use of linters and unit tests.
- Proficiency in SQL and NoSQL databases.
- Familiarity with Python web frameworks such as Flask and FastAPI.
- Basic understanding of machine learning models and metrics.
