JOB DESCRIPTION
- Design data architecture and implement enterprise data platform across ingestion, storage, transformation, serving, and consumption layers.
- Define and drive standards for data modelling, metadata, lineage, ownership, security, quality, governance, and compliance (PDPA), and oversee their implementation.
- Drive enterprise data governance outcomes across the company by aligning business, DPO, and IT stakeholders around policies, data standards, data protection requirements, consistent governance processes, data ownership, lifecycle, and quality.
- Work with business owners and data stewards to define business metadata and critical data elements. Maintain glossaries, lineage, classification schemas, and domain structures. Maintain repository of enterprise data assets and domains.
- Coordinate, schedule, and run the Data Governance Council (DGC) meetings and working sessions.
- Support audits, assessments, and remediation actions.
- Collaborate with business stakeholders and cross-functional teams to define problems, identify key metrics, and present data-driven recommendations.
- Contribute hands-on to key data platform, integration, and data engineering work, especially in foundational and high-impact areas.
- Design scalable data integration patterns across enterprise applications, operational systems, industrial systems, and analytics environments.
- Design, develop, and maintain trusted, reusable, and efficient data pipelines, curated datasets, and enterprise data assets.
- Partner with AI Engineers to ensure the data platform supports AI/ML use cases, including historical data retention, feature-ready datasets, trusted training data, reproducibility, and consistency between training and production usage.
- Work closely with engineering and infrastructure teams to ensure solutions are practical, scalable, secure, and supportable.
- Provide insights to business units through advanced analytics and visualization.
- Contribute to roadmap planning, architecture governance, technology evaluation, and continuous improvement of the enterprise data platform.
JOB REQUIREMENTS
- Bachelor’s degree in Computer Science, Information Technology, Computer Engineering, Data Engineering, Information Systems, or a related discipline. Equivalent practical experience will also be considered.
- Minimum 8 years of experience in data architecture, data engineering, data platform development, or similar enterprise data roles. 5+ years of experience in data governance, data management, or data protection roles.
- Strong knowledge of data cataloging, metadata, lineage, classification, and data quality.
- Understanding of IT security and data protection controls.
- Experience working with ERP, operational, industrial, or engineering data sources is advantageous.
- Exposure to data foundations supporting AI/ML, predictive analytics, or intelligent automation is beneficial.
- Strong expertise gained in data architecture, platform architecture, lead data engineering, or other senior hands-on technical roles.
- Proven experience designing enterprise‑scale data platforms, integration patterns, and analytical data structures.
- Hands‑on experience building or guiding implementation of data pipelines, platform components, and data engineering solutions.
- Solid understanding of enterprise data architecture, including data warehouse, data lake/lakehouse, dimensional modelling, metadata, lineage, governance, and security.
- Familiarity with Microsoft Purview or similar tools.
- Certifications such as DCAM, DAMA CDMP, or IAPP are advantageous.
- Strong knowledge of data engineering and integration practices such as ETL/ELT, batch processing, CDC, APIs, event‑driven integration, schema evolution, observability, idempotency, and reprocessing.
- Good understanding of data platform requirements for AI/ML workflows, including historical data management, feature engineering, training data preparation, and reproducibility.
- Strong communication and stakeholder management skills, with the ability to collaborate across architecture, engineering, infrastructure, analytics, and AI teams.
- Ability to manage multiple projects and deliverables effectively in a dynamic environment.
- Experience with data warehousing platforms such as Snowflake, BigQuery, Redshift, or Synapse.
- Experience with data lake/lakehouse ecosystems such as Databricks, Delta Lake, Apache Iceberg, or Hudi.
- Experience with data integration/orchestration tools such as Airflow, Dagster, dbt, Kafka, or Spark.
- Familiarity with enterprise databases and integration technologies such as SQL Server, Oracle, PostgreSQL, MySQL, SAP HANA, CDC tools, and API/event‑based integration.
- Experience working with cloud or hybrid environments such as AWS, Azure, or GCP.
- Exposure to data governance, cataloging, lineage, and data quality tools.
Location:
Seatrium (SG) Pte. Ltd.
Pioneer Yard
50 Gul Road, Singapore 629351
(Islandwide transport provided)
Working Hours:
Monday – Thursday: 8:00am – 5:15pm
Friday: 8:00am – 4:30pm
Interested candidates are invited to send us an updated resume stating your current and expected salary and earliest availability.
We regret that only shortlisted candidates will be notified.
Please note that your personal data disclosed to Seatrium Limited and our group of companies shall be used for the purposes of evaluation and processing in accordance with our recruitment processes and policies. By providing your personal data, you consent to the aforesaid purposes under the provisions of the Personal Data Protection Act 2012.
BUSINESS UNIT
Seatrium Limited