Design and build end‑to‑end data pipelines, including data collection, transformation, quality controls, and integration layers, to support enterprise data and analytics solutions.
Partner with solution design and business teams to clarify data needs and assemble large, complex datasets that meet functional and technical requirements.
Develop and continuously refine analytics solutions that align with both business objectives and technical constraints.
Work alongside the data architect to ensure consistency, integrity, and governance of data models across systems.
Build and maintain scalable infrastructure for efficient extraction, transformation, and loading (ETL/ELT) of data from diverse sources.
Develop data tools and utilities that help analytics and data science teams build, test, and optimise their models and analytical workflows.
Collaborate with DevOps engineers to support stable, reliable operation of data and analytics platforms and services.
Partner with data analysts and data scientists to design and implement APIs that connect data layers with downstream applications and models.
Requirements:
Bachelor's or Master's degree in a relevant discipline such as computer science, information technology, or data science.
Minimum 3 years of hands‑on experience with SQL/PostgreSQL, data integration, and BI solutions, including integration with third‑party tools.
Experience with enterprise application environments (e.g. ERP systems), as well as big-data technologies such as Spark, Scala, Python, SQL scripting, relational databases, and NoSQL platforms (e.g. HBase, MongoDB, Cassandra).
Solid exposure to cloud technologies (e.g. Azure), including Data Lake storage, Databricks, Data Factory, and BI dashboarding tools.
Proven track record of working with large, complex datasets and building full‑lifecycle data pipelines on on‑premises or cloud data platforms.
Practical experience coding in data management, data warehousing, or unstructured data environments.
Experience using AI‑assisted coding tools (e.g. GitHub Copilot) to enhance development speed and code quality.
Familiarity with modern GenAI concepts (e.g. RAG), orchestration frameworks such as LangChain and LlamaIndex, and vector databases, to support collaboration with data scientists and ML engineers on AI use cases.
Demonstrated experience with Azure cloud platforms and related technologies.
Ability to define and implement robust data integration patterns and end‑to‑end pipelines.
Strong understanding of data modelling, data warehousing, and BI concepts, with an emphasis on reusability and consistency.
Self‑motivated, comfortable working independently, and highly detail‑oriented.
Curious and proactive about emerging AI and data‑engineering trends and their practical application in enterprise environments.
Now Hiring: Data & AI Engineer in Hong Kong (JN-032026-1999091) - Morgan McKinley