As a Cloud Data Engineer, you will design, build, and optimise cloud-based data pipelines for a Microsoft Fabric-powered data lakehouse, supporting large-scale data integration across multiple enterprise systems. You will work closely with architects, analysts, and business teams to ensure data is reliable, secure, and ready for advanced analytics and reporting.
What You’ll Do and How You’ll Succeed
- Design, develop, and maintain cloud-based data pipelines using Microsoft Fabric to ingest, transform, and load data from sources such as SAP IS-U, SCADA, IoT sensors, billing systems, and customer service platforms into a data lakehouse
- Ensure data quality and consistency by applying transformation and validation rules in collaboration with business and data governance teams
- Partner with Cloud Data Architects to optimise data models within the lakehouse for scalability and performance
- Develop ETL and ELT processes to integrate structured, semi-structured, and unstructured data into Azure Data Lake or Azure Synapse, ensuring data is cleansed and stored efficiently
- Manage cloud resources to maintain a cost-effective, secure, and scalable architecture, and monitor pipeline performance to ensure reliable operations
- Collaborate with data scientists and data analysts to ensure datasets are accessible, well-documented, and optimised for analytics and reporting using tools such as Power BI
- Ensure compliance with data governance policies, including access control, data encryption, and adherence to the Data Privacy Act of 2012
- Support the design and deployment of cloud infrastructure automation using Azure DevOps, ARM Templates, or Terraform
- Troubleshoot and resolve issues related to data pipeline failures, data integrity, and performance bottlenecks
- Stay current with best practices in cloud data engineering, Azure technologies, and data lakehouse architecture to continuously improve solutions
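To illustrate the kind of transformation and validation work described above, here is a minimal sketch in plain Python. The field names, schema, and thresholds are hypothetical; in practice this logic would typically run inside Spark/Databricks notebooks or Fabric dataflows rather than standalone scripts:

```python
from datetime import datetime

# Hypothetical required schema for a batch of smart-meter readings
# (field names and rules are illustrative, not from any real system).
REQUIRED_FIELDS = {"meter_id", "reading_kwh", "read_at"}

def validate_reading(record: dict) -> list[str]:
    """Return a list of rule violations; an empty list means the record is clean."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
        return errors
    if not isinstance(record["reading_kwh"], (int, float)) or record["reading_kwh"] < 0:
        errors.append("reading_kwh must be a non-negative number")
    try:
        datetime.fromisoformat(record["read_at"])
    except (TypeError, ValueError):
        errors.append("read_at must be an ISO-8601 timestamp")
    return errors

def split_batch(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Partition a batch into clean rows and quarantined rows with reasons attached."""
    clean, quarantined = [], []
    for rec in records:
        errs = validate_reading(rec)
        if errs:
            quarantined.append({**rec, "_errors": errs})
        else:
            clean.append(rec)
    return clean, quarantined
```

In a lakehouse pipeline, the clean partition would land in a curated table while quarantined rows are written to an error table for review with data governance teams.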
We’d Love to Hear From You If…
Experience
- 5+ years of experience in cloud data engineering with a focus on Microsoft Azure and data lakehouse architectures
- Experience working with SAP IS-U, SCADA, and IoT data sources, or familiarity with operational data sources in the utilities sector
- Experience with DevOps practices and tools such as Azure DevOps, Git, and CI/CD pipelines
- Experience working in the water utilities sector or other critical infrastructure environments is an advantage
Technical Expertise
- Strong proficiency in Azure Data Factory, Azure Data Lake, Azure Synapse Analytics, Microsoft Fabric, and Power BI
- Solid experience in ETL or ELT processes and tools such as Azure Databricks, SQL Server, Python, and Scala
- Expertise in data transformation, data modelling, and data pipeline orchestration
- Familiarity with data governance, metadata management, and data security best practices in regulated environments
- Solid understanding of cloud cost management and resource optimisation in Azure
- Familiarity with data lakehouse patterns such as Delta Lake and advanced analytics concepts such as machine learning and AI is an advantage
- Knowledge of Philippine data privacy regulations, including the Data Privacy Act of 2012, and of sector-specific requirements from regulators such as MWSS and LWUA is an advantage
- Experience with serverless architectures (such as Azure Functions) and containerisation (such as Kubernetes) is an advantage
- Familiarity with Python or Spark for large-scale data processing is an advantage
Ways of Working
- Strong problem-solving and troubleshooting skills in complex cloud data environments
- Effective collaboration across cross-functional teams with a focus on meeting business requirements
- Clear communication skills to engage both technical and non-technical stakeholders
- Proactive and self-driven with the ability to manage competing priorities and deliver high-quality outcomes
- Strong attention to detail with a commitment to data integrity and quality