As a Data Engineer, you will design and optimise data infrastructure by building scalable pipelines, integrating master data into AWS environments, and ensuring data quality for reporting and analytics. You will work closely with Data Leads and cross-functional teams to deliver reliable, consistent, and actionable data for business decision-making.
What You’ll Do and How You’ll Succeed
- Design, build, and maintain data pipelines to ingest SunCBS master data into AWS Data Hive, ensuring scalability and reliability
- Develop and optimise ETL and ELT workflows using SQL, Python, and AWS-native tools such as S3, Redshift, EC2, Lambda, and Glue
- Collaborate with CSB Data Leads to understand master data structures, mappings, and reporting requirements, translating them into effective data solutions
- Build, validate, and test queries for MVP and Feed Reports to ensure accuracy, performance, and readiness for business use
- Maintain clear and structured documentation of data processes, data dictionaries, and system integrations
- Monitor and troubleshoot data pipelines to ensure high availability and consistent performance
- Support Data Leads in implementing and managing data quality initiatives aligned with organisational standards
- Conduct data profiling to assess data quality and identify issues that impact data reliability
- Execute data cleansing processes to correct inaccuracies and standardise data for consistency
- Monitor and report on data quality metrics to drive continuous improvement
- Implement best practices for data governance, quality, and security in alignment with organisational standards
- Provide support for ad hoc data analysis requests and collaborate with BI and Analytics teams
We’d Love to Hear From You If…
Experience
- 3+ years of experience as a Data Engineer or in a similar role
- Experience with ETL or ELT design patterns, data modelling, and pipeline automation
- Exposure to data governance, data quality, and documentation practices
- Experience working in agile or scrum delivery models is an advantage
Technical Expertise
- Strong proficiency in SQL and Python for data processing and scripting, including advanced SQL development and query performance tuning
- Hands-on experience with AWS services including S3, Redshift, EC2, Glue, and Lambda
- Ability to work with structured and semi-structured datasets
- Familiarity with data visualisation or BI tools such as Looker Studio is an advantage
- Knowledge of CI/CD pipelines for data workflows is an advantage
- Experience with data lake or data warehouse architectures is an advantage
Ways of Working
- Strong analytical mindset with problem-solving capability
- Effective communication and collaboration skills across teams
- Detail-oriented and able to deliver in fast-paced environments
Assignment Details
- Employment Type: Full-Time
- Location: Manila – BGC, Taguig City
- Work Setup: Hybrid, with onsite work as required