As a Data Engineer, you will design and build the data infrastructure that powers enterprise AI and analytics use cases. You will be responsible for data pipelines, data quality, and ensuring reliable, governed data flows from source systems through to machine learning models and analytics dashboards in a telecom environment.
What You’ll Do and How You’ll Succeed
- Design and implement batch and streaming data pipelines for telecom data sources such as call detail records (CDRs), network events, CRM, and billing.
- Build and maintain a data lakehouse architecture using bronze, silver, and gold (medallion) layers on Databricks or similar platforms.
- Implement data quality checks, monitoring, and alerting to improve reliability and trust in the data estate.
- Create reusable data transformations that support ML feature engineering.
- Optimise query performance and storage costs across large-scale telecom datasets, including environments with billions of records.
- Ensure data governance, lineage tracking, and compliance with data privacy regulations, including the Philippine Data Privacy Act.
- Collaborate with data scientists to understand data requirements for new AI use cases.
We’d Love to Hear From You If…
Experience
- You have 4+ years of experience in data engineering.
Technical Expertise
- You are highly proficient in SQL and Python.
- You have experience with Databricks, Apache Spark, and Delta Lake, or equivalent technologies.
- You are proficient in cloud data services such as Azure Data Factory, AWS Glue, or GCP Dataflow.
- You have knowledge of streaming technologies such as Kafka, Kinesis, or Event Hubs (a plus).
- You understand data governance and cataloguing tools such as Unity Catalog, Purview, or Collibra.
Ways of Working
- You take a structured approach to building reliable, governed, and scalable data flows.
- You can work closely with data scientists to translate AI use case requirements into practical data engineering solutions.
Assignment Details
- Location: McKinley, Taguig City
- Work Set-Up: Hybrid