Results-driven Senior Data Consultant with over 7 years of experience in the data space
Skilled in delivering innovative and scalable data solutions across multiple industries
Highly team-oriented, with a proven ability to collaborate and lead teams to meet and exceed deliverables
Experienced in reducing operational costs and optimizing system performance
Recognized for strong problem-solving skills and continuous process improvement
Consistently delivers projects on time, within scope, and with a strong commitment to team success
Overview
8 years of professional experience
3 Certifications
Work History
Senior Data Consultant
Fulcrum Decisions NZ
07.2023 - Current
Collaborated within a team of specialists covering infrastructure and application development to integrate Snowflake, Oracle, and MSSQL into the backend of a real-time Order Tracking System, improving operational uptime by at least 20% for a leading office supplies company in New Zealand.
Significantly reduced operational costs on a Snowflake cluster by applying best practices for incremental models in DBT for a major account.
Engineered a robust Excel Add-in for data extraction with RBAC using GoLang and Snowflake, significantly boosting efficiency for finance and operations teams.
Collaborated with cross-functional teams to implement best data management practices and standardization, driving results-oriented projects.
Utilized the Azure stack to meet data and client requirements across multiple projects.
Senior Big Data Specialist
Paymaya Philippines
02.2023 - 06.2023
Contributed significantly to the data migration from a Redshift data warehouse to an AWS Databricks Lakehouse.
Led the development and deployment of ingestion processes in the Bronze and Silver layers using Apache Airflow.
Significantly enhanced data pipeline performance, improving processing times by 3x through optimized incremental loading and ETL processes in AWS Databricks.
Worked alongside the Data Mesh team to deliver data requirements for the Data Lakehouse system.
Cloud Engineer
Deloitte Philippines
04.2022 - 02.2023
Played a key role on the data engineering team for a major winery-distribution client, building dashboards for short- and long-term supply and logistics planning.
Developed ingestion and ETL jobs using AWS services such as DynamoDB and Redshift together with PySpark and DBT to streamline data pipelines.
Developed ingestion jobs and corresponding curated layers for raw data consumption in close collaboration with the Infra Team, across both production and UAT environments.
Established multiple DBT data models derived from Winery Plantation and Inventory domain data
Deployed multiple scheduled jobs via Apache Airflow (AWS MWAA).
Senior Data Engineer
Entrego Philippines
11.2020 - 03.2022
Hired as a Senior Data Engineer to design and implement scalable, reliable, and efficient data pipelines for the Logistics and Delivery Data Warehouse.
Leveraged AWS Stack for end-to-end pipeline delivery.
Managed ingestion and ETL scheduling processes using Apache Airflow to meet SLAs.
Set up CI/CD pipelines with Jenkins for deploying PySpark applications to Airflow.
Led initiatives to improve job performance and optimize costs across the Data Warehouse.
Data Service Engineer
DataSpark Pte. Ltd.
11.2019 - 11.2020
Hired as a Data Analytics Service Engineer to optimize and analyze DataSpark ML products on an on-premise Hadoop cluster.
Coordinated with Telco SMEs to understand network data concepts covering user, data, and SMS traffic.
Supported and deployed Capex forecasting ML jobs on Hadoop across UAT and production environments.
Improved SLAs and optimized costs for large-scale data from various source systems through configuration tuning and testing.
Data Engineer
Solvento Philippines
02.2017 - 11.2019
SME Scala Spark developer for a major banking client on a pioneering big data migration project in Manila.
Worked with a major banking client to facilitate data migration from an existing RDBMS system to a Hadoop big data environment.
Created ETL and ingestion jobs moving raw tables into a data lake built on a Data Vault architecture.
Applied continuous delivery and DevOps best practices across QA, UAT, and production environments.