Methodology
Assessment framework · version 1.0 · April 2026
Framework Overview
The AI Governance Readiness Assessment is a structured self-assessment tool designed for Singapore-regulated financial institutions, with a particular focus on private banking and wealth management. It evaluates an institution's readiness against the complete landscape of AI governance expectations applicable in Singapore as of early 2026.
The assessment is built on a register of 193 discrete requirements derived from 13 source instruments spanning the full hierarchy of regulatory authority — from MAS supervisory guidelines currently in force, through proposed consultation papers, to industry methodologies and international assurance standards. Each requirement is classified by its regulatory tier (reflecting the authority and enforceability of the source instrument), assigned to one of seven thematic domains, and rated by severity (reflecting the potential impact of non-compliance).
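To make the register structure concrete, the sketch below shows one way a register entry could be typed in TypeScript (the tool itself runs client-side in the browser, so TypeScript is assumed throughout these examples). Every field and name here is illustrative, not the tool's actual schema.

```typescript
// Illustrative register entry: fields mirror the classification described
// above (tier, domain, severity), but the schema itself is an assumption.
type RegulatoryTier =
  | "STATUTORY"
  | "SUPERVISORY"
  | "OBSERVED_PRACTICE"
  | "CONSULTATION"
  | "METHODOLOGY"
  | "ASSURANCE";

type Severity = "HIGH" | "MEDIUM" | "LOW";

interface Requirement {
  id: string;                        // hypothetical identifier scheme
  text: string;                      // the requirement statement itself
  domain: 1 | 2 | 3 | 4 | 5 | 6 | 7; // one of the seven thematic domains
  tier: RegulatoryTier;              // tier of the attributed source
  severity: Severity;                // inherent impact of non-compliance
  primarySource: string;             // instrument the requirement is attributed to
  crossReferences: string[];         // supporting instruments, where duplicated
}
```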
The framework deliberately covers instruments at different stages of regulatory maturity. This forward-looking approach enables institutions to assess not only their compliance with current binding expectations, but also their preparedness for requirements that are likely to crystallise as consultation papers are finalised and industry practices become supervisory expectations. In Singapore's principles-based regulatory environment, the distinction between “guidance” and “expectation” is often nuanced, and institutions that wait for formal binding requirements before acting typically find themselves behind supervisory expectations.
The assessment produces a maturity level (from Critical Gaps through Established), a domain-by-domain breakdown of compliance status, and identification of specific gaps with prioritised remediation guidance. All processing occurs client-side in the browser — no assessment data is transmitted to or stored on any server, ensuring confidentiality of sensitive governance information.
Regulatory Tier Classification
The framework classifies each source instrument into one of six regulatory tiers, ordered by decreasing authority and enforceability. The tier classification determines the weight given to requirements in the overall maturity assessment and informs the severity assignment of individual requirements. Understanding the tier hierarchy is essential for interpreting assessment results, as a gap against a SUPERVISORY tier requirement carries different implications than a gap against a METHODOLOGY tier requirement. An illustrative code sketch of the tier ordering follows the tier descriptions below.
STATUTORY
Legally binding MAS Notices issued under the relevant Acts (e.g., Banking Act, Securities and Futures Act). Non-compliance may result in regulatory action, fines, or licence conditions.
Source instruments: Currently no AI-specific STATUTORY requirements. This tier is included for completeness and forward compatibility, as MAS may issue binding Notices on AI use in future.
SUPERVISORY
MAS Guidelines and Circulars that set out supervisory expectations. While not legally binding per se, institutions are expected to comply and MAS will assess adherence during inspections and thematic reviews.
Source instruments: TRM Guidelines (January 2021), Fair Dealing Guidelines (May 2024), CMG-G02 Digital Advisory Guidelines, Outsourcing Guidelines for Banks (December 2023).
OBSERVED PRACTICE
Good practices observed by MAS through thematic reviews, inspections, and industry engagement. Published as Information Papers, these describe what MAS has seen leading institutions do, without prescribing specific requirements.
Source instruments: MAS Information Paper on AI Model Risk Management (December 2024).
CONSULTATION
Proposed guidance currently in public consultation. These represent MAS's likely direction of travel and institutions should begin planning for compliance, though final requirements may differ from consultation drafts.
Source instruments: P017 – Proposed Guidelines on AI Risk Management (consultation closed January 2026), P004 – Proposed Guidelines on Third-Party Risk Management (consultation open until April 2026).
METHODOLOGY
Industry methodologies, frameworks, and guidance developed by regulatory-adjacent bodies, industry consortiums, or standards organisations. These provide practical implementation approaches that institutions can adopt or adapt.
Source instruments: FEAT Principles (2018), Veritas Assessment Methodology, MindForge Operational Handbook & Implementation Examples (January 2026), IMDA Model AI Governance Framework for Agentic AI, PDPA Advisory Guidelines (March 2024).
ASSURANCE
External assurance and certification standards that provide independent verification of AI governance maturity. Requirements within these standards use mandatory language, but adoption of the standard itself is voluntary.
Source instruments: ISO/IEC 42001:2023 – AI Management System.
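Because the attribution and weighting logic described elsewhere in this methodology depends on this ordering, it can help to see the hierarchy written out directly. The snippet below is a minimal sketch, reusing the RegulatoryTier type from the register sketch above; TIER_ORDER and tierRank are hypothetical names.

```typescript
// Tiers indexed from highest to lowest authority, mirroring the order
// in which they are described above. Names and ordering follow the
// methodology text; the helper itself is illustrative.
const TIER_ORDER: RegulatoryTier[] = [
  "STATUTORY",
  "SUPERVISORY",
  "OBSERVED_PRACTICE",
  "CONSULTATION",
  "METHODOLOGY",
  "ASSURANCE",
];

// Lower rank = higher authority. Used below when attributing a duplicated
// requirement to its highest-tier source (see Source Instruments).
const tierRank = (tier: RegulatoryTier): number => TIER_ORDER.indexOf(tier);
```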
Seven Assessment Domains
The 193 requirements are organised into seven thematic domains. Each domain represents a distinct area of AI governance, though there are natural interdependencies between domains (for example, Model Governance requirements depend on the institutional governance structures defined in Domain 7, and Outsourcing requirements interact with the operational resilience controls in Domain 6). The domain structure follows the logical organisation of P017 as the most comprehensive single source, adapted to accommodate requirements from other instruments that do not fit neatly into P017's structure.
Domain 1: Model Governance & Validation
115 REQUIREMENTS
This domain covers the full AI model lifecycle from initial identification and risk classification through development, validation, deployment, ongoing monitoring, and eventual decommissioning. It addresses governance structure and committee oversight for AI/ML models, the capability and independence of validation functions, controls around third-party and vendor AI models, and specific safeguards for generative AI and large language model deployments. Requirements are drawn primarily from P017 (which provides the most comprehensive AI lifecycle framework), the MAS AI MRM Information Paper (observed leading practices), MindForge (practical implementation guidance), and ISO/IEC 42001 (management system controls). This is the largest domain, reflecting the depth of emerging regulatory expectations around AI model risk management.
Primary sources: P017, MAS AI MRM Information Paper, MindForge Ops Handbook, ISO/IEC 42001, TRM Guidelines.
Domain 2: Data Governance & Privacy
15 REQUIREMENTS
This domain addresses the management of personal data within AI systems, including consent and notification requirements, data protection impact assessments, privacy-by-design principles, data anonymisation and pseudonymisation techniques, bias assessment of training data, and governance of third-party data processing. Requirements integrate obligations from the PDPA Advisory Guidelines with AI-specific data governance expectations from P017 and the MindForge framework. The domain recognises that robust data governance is foundational to responsible AI, as model outputs are only as reliable as the data used for training and inference.
Primary sources: PDPA Advisory Guidelines, P017, MindForge Ops Handbook.
Domain 3: Client-Facing AI & Suitability
14 REQUIREMENTS
This domain covers AI systems that directly interact with clients or influence client outcomes, including robo-advisory platforms, AI-driven recommendation engines, chatbots providing financial guidance, and automated suitability assessments. It addresses algorithm governance for client-facing systems, suitability of AI-generated advice, fair dealing obligations in AI-mediated interactions, customer transparency and disclosure requirements, and mechanisms for client redress when AI systems produce adverse outcomes. Requirements draw heavily on the CMG-G02 Digital Advisory Guidelines and Fair Dealing Guidelines, supplemented by client-facing provisions in P017.
Primary sources: CMG-G02 Digital Advisory Guidelines, Fair Dealing Guidelines, P017.
Domain 4: Explainability & Fairness
11 REQUIREMENTS
This domain addresses responsible AI principles including the definition and measurement of fairness across protected attributes, bias detection and mitigation methodologies, explainability requirements for different model types and use cases, and emerging governance challenges around agentic AI systems. Requirements are sourced primarily from the FEAT Principles (which established Singapore's foundational approach to fairness, ethics, accountability, and transparency in AI), the Veritas Assessment Methodology (which operationalises FEAT into measurable criteria), and the IMDA Agentic AI Framework (which extends governance to autonomous AI systems). This domain recognises that explainability and fairness requirements must be proportionate to the materiality and complexity of the AI use case.
Primary sources: FEAT Principles, Veritas Assessment Methodology, IMDA Agentic AI Framework, P017.
Domain 5: Outsourcing & Third-Party AI
12 REQUIREMENTS
This domain covers the use of AI systems, models, and services from external providers, including cloud-hosted AI platforms, vendor model APIs, and outsourced AI development. It addresses the governance framework for third-party AI, due diligence and vendor assessment requirements, outsourcing agreement provisions specific to AI, concentration risk from dependency on a small number of AI providers, and lifecycle management of third-party AI models. Requirements draw on the existing MAS Outsourcing Guidelines (which apply to AI outsourcing arrangements), supplemented by third-party AI provisions in P017 and emerging requirements from P004 (the proposed Third-Party Risk Management guidelines currently in consultation).
Primary sources: Outsourcing Guidelines for Banks, P017, P004, MindForge Ops Handbook.
Domain 6: Operational Resilience & Cybersecurity
12 REQUIREMENTS
This domain addresses the technology infrastructure and security controls supporting AI systems, including IT governance and change management for AI deployments; access control and privilege management for AI systems and data; audit logging and monitoring of AI system behaviour; incident management and response procedures for AI failures; and AI-specific cybersecurity threats such as adversarial attacks, data poisoning, and model extraction. Requirements are drawn primarily from the TRM Guidelines (which establish baseline technology risk management expectations), supplemented by AI-specific operational resilience requirements from P017 and MindForge. This domain recognises that AI systems introduce novel operational risks that extend beyond traditional IT risk management frameworks.
Primary sources: TRM Guidelines, P017, MindForge Ops Handbook.
Domain 7: Governance Structure & Accountability
14 REQUIREMENTS
This domain covers the institutional governance architecture required to manage AI risk effectively. It includes the establishment of an AI management system with clear roles, responsibilities, and escalation paths; the definition of AI risk appetite and tolerance levels aligned with the institution's overall risk framework; the fostering of an AI risk culture that embeds responsible AI principles across the organisation; the design of an operating model that integrates AI governance with existing risk management and compliance functions; and the development of AI skills, knowledge, and capability at board, senior management, and operational levels. Requirements draw on ISO/IEC 42001 (which provides the most structured approach to AI management systems), P017 (governance structure requirements), and cross-cutting accountability provisions from multiple instruments.
Primary sources: ISO/IEC 42001, P017, MindForge Ops Handbook, TRM Guidelines.
Source Instruments
The requirement register is derived from 13 source instruments, each contributing a different number of requirements depending on the instrument's scope and specificity. The table below lists all source instruments, their issuing authority, current status, regulatory tier, and the number of unique requirements derived from each. Where requirements overlap across instruments, the requirement is attributed to the highest-tier source and cross-referenced to supporting instruments.
| Instrument | Issuing Authority | Status | Tier | Requirements Derived |
|---|---|---|---|---|
| P017 – Proposed Guidelines on AI Risk Management | MAS | Consultation (closed Jan 2026) | CONSULTATION | 57 |
| MindForge Ops Handbook & Implementation Examples | MAS / Industry | Published Jan 2026 | METHODOLOGY | 32 |
| ISO/IEC 42001:2023 – AI Management System | ISO / IEC | Published | ASSURANCE | 19 |
| FEAT Principles (2018) | MAS | Published | METHODOLOGY | 17 |
| MAS Information Paper on AI Model Risk Management | MAS | Published Dec 2024 | OBSERVED PRACTICE | 16 |
| TRM Guidelines (January 2021) | MAS | In force | SUPERVISORY | 13 |
| PDPA Advisory Guidelines (March 2024) | PDPC | Published Mar 2024 | METHODOLOGY | 11 |
| Outsourcing Guidelines for Banks (December 2023) | MAS | In force | SUPERVISORY | 8 |
| CMG-G02 Digital Advisory Guidelines | MAS | In force | SUPERVISORY | 7 |
| Veritas Assessment Methodology | MAS / Industry | Published | METHODOLOGY | 6 |
| Fair Dealing Guidelines (May 2024) | MAS | In force | SUPERVISORY | 3 |
| IMDA Model AI Governance Framework for Agentic AI | IMDA | Published | METHODOLOGY | 2 |
| P004 – Proposed Guidelines on Third-Party Risk Management | MAS | Consultation (open until Apr 2026) | CONSULTATION | 2 |
| Total | | | | 193 |
Note: Some requirements are derived from multiple instruments. The total above represents unique requirements after de-duplication and cross-referencing across all source instruments. Requirement counts may change as consultation documents are finalised and new instruments are published.
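The attribution rule in the note above can be sketched in a few lines. Reusing the tier helpers from the earlier snippets, a requirement found in several instruments could be resolved to a primary source as follows (SourceRef and attribute are illustrative names):

```typescript
// A source instrument reference for a single requirement.
interface SourceRef {
  instrument: string;
  tier: RegulatoryTier;
}

// Attribute a duplicated requirement to its highest-tier source and keep
// the remaining instruments as cross-references, per the note above.
function attribute(sources: SourceRef[]): {
  primary: SourceRef;
  crossReferences: SourceRef[];
} {
  const byAuthority = [...sources].sort(
    (a, b) => tierRank(a.tier) - tierRank(b.tier),
  );
  return { primary: byAuthority[0], crossReferences: byAuthority.slice(1) };
}
```

Under this rule, for example, a requirement appearing in both the TRM Guidelines (SUPERVISORY) and the MindForge Ops Handbook (METHODOLOGY) would be attributed to the TRM Guidelines and cross-referenced to MindForge.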
Severity Assignment
Each of the 193 requirements in the register is assigned a severity level of HIGH, MEDIUM, or LOW. Severity reflects the potential impact of non-compliance on the institution, its clients, and the broader financial system. The severity assignment considers the regulatory tier of the source instrument, the nature of the risk addressed, and the potential consequences of a gap. Severity is independent of the institution's current compliance status — it is an inherent property of the requirement itself.
HIGH
The requirement addresses a risk that, if unmitigated, could result in material regulatory action, significant financial loss, systemic harm to clients, or fundamental failure of AI governance. HIGH severity requirements typically derive from SUPERVISORY or CONSULTATION tier instruments and relate to core governance obligations such as board oversight, model validation independence, material risk identification, or client protection safeguards. A gap in a HIGH severity requirement indicates an urgent need for remediation.
MEDIUM
The requirement addresses a risk that, if unmitigated, could result in regulatory scrutiny, operational inefficiency, or governance weaknesses that may compound over time. MEDIUM severity requirements typically relate to process maturity, documentation standards, monitoring cadence, or specific technical controls. A gap in a MEDIUM severity requirement indicates a meaningful area for improvement that should be addressed within a planned remediation timeline.
LOW
The requirement addresses a risk that represents an area for enhancement rather than a critical gap. LOW severity requirements typically relate to advanced practices, aspirational standards, or refinements to existing controls. A gap in a LOW severity requirement indicates an opportunity for further maturity improvement rather than a compliance concern.
Assessment Logic
The assessment uses a four-option response model for each requirement. This deliberately simple structure balances the need for meaningful differentiation with the practical constraint of self-assessment accuracy. Institutions should select the option that most accurately reflects their current state, erring on the side of conservatism where there is genuine uncertainty about implementation completeness. An illustrative sketch of the response model in code follows the four options below.
Response Options
A: Implemented
The institution has fully implemented the requirement. Policies, procedures, and controls are documented, operationalised, and subject to regular review. Evidence of implementation is available.
B: Partially Implemented
The institution has partially implemented the requirement. Some elements are in place but there are gaps in coverage, documentation, operationalisation, or review. The institution is aware of the gaps and may have plans to address them.
C: Not Implemented
The institution has not implemented the requirement. There is no evidence of policies, procedures, or controls addressing the requirement, or existing controls are fundamentally inadequate.
D: Not Applicable
The requirement is not applicable to the institution's current AI use cases, business model, or operational context. The institution should be prepared to justify why the requirement is not applicable if queried by supervisors.
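In code, the response model reduces to a small union type, as in the sketch below (the representation is assumed, consistent with the earlier examples):

```typescript
// Assumed representation of the four-option response model:
// A = Implemented, B = Partially Implemented,
// C = Not Implemented, D = Not Applicable.
type Response = "A" | "B" | "C" | "D";

// One response per requirement, keyed by requirement ID
// (the keying scheme is hypothetical).
type AssessmentResponses = Record<string, Response>;
```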
Maturity Decision Table
Individual requirement scores are aggregated into an overall maturity level using the decision rules set out below, evaluated from most to least severe. The maturity level provides a single summary indicator of the institution's AI governance readiness. The decision logic is intentionally conservative: a single unaddressed HIGH severity requirement is sufficient to cap the maturity level, reflecting the principle that critical governance gaps cannot be offset by excellence in other areas. An illustrative code sketch of this logic follows the rules below.
The lowest level, Critical Gaps, applies when any HIGH severity requirement is scored as C (Not Implemented), regardless of other scores. This reflects the principle that a single unaddressed critical risk can undermine the entire governance framework.
The second level applies when no HIGH severity requirements are scored as C but more than 40% of applicable requirements are scored as B or C. The institution has addressed the most critical risks but has significant gaps across the broader requirement set.
The third level applies when no HIGH severity requirements are scored as C, 60–89% of applicable requirements are scored as A, and no more than 10% are scored as C. The institution has a solid foundation with identifiable areas for improvement.
The highest level, Established, applies when no requirements are scored as C and 90% or more of applicable requirements are scored as A. The institution demonstrates comprehensive AI governance maturity across all domains.
Separately, if more than 50% of requirements are marked as D (Not Applicable), the assessment does not have sufficient data to determine a meaningful maturity level. This typically indicates the institution is at an early stage of AI adoption or the assessment scope was too narrow.
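The sketch below implements these rules top-down, reusing the Severity and Response types from the earlier examples. Note that only the endpoint level names (Critical Gaps, Established) appear in this methodology; the intermediate names used here, the Insufficient Data label, and the ordering of the insufficient-data check ahead of the Critical Gaps rule are assumptions made for illustration.

```typescript
// "Developing" and "Progressing" are placeholder names for the two
// intermediate levels, which this methodology does not name.
type MaturityLevel =
  | "Critical Gaps"
  | "Developing"
  | "Progressing"
  | "Established"
  | "Insufficient Data";

interface ScoredRequirement {
  severity: Severity;
  response: Response;
}

function maturityLevel(scored: ScoredRequirement[]): MaturityLevel {
  const total = scored.length;
  if (total === 0) return "Insufficient Data";

  // More than 50% Not Applicable: not enough data for a meaningful level.
  // (Evaluating this before the Critical Gaps rule is an assumption.)
  const notApplicable = scored.filter((s) => s.response === "D").length;
  if (notApplicable / total > 0.5) return "Insufficient Data";

  // "Applicable" excludes requirements marked D.
  const applicable = scored.filter((s) => s.response !== "D");
  const n = applicable.length;

  // Any HIGH severity requirement Not Implemented caps the maturity level.
  if (applicable.some((s) => s.severity === "HIGH" && s.response === "C")) {
    return "Critical Gaps";
  }

  const countA = applicable.filter((s) => s.response === "A").length;
  const countC = applicable.filter((s) => s.response === "C").length;

  if (countC === 0 && countA / n >= 0.9) return "Established";
  if (countA / n >= 0.6 && countA / n < 0.9 && countC / n <= 0.1) {
    return "Progressing";
  }
  // Conservative fallback: everything below the bands above, including the
  // "more than 40% scored B or C" case, resolves to the second level.
  return "Developing";
}
```

Scoring combinations that fall between the published bands resolve downward rather than upward, consistent with the conservative intent described above.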
Limitations and Caveats
This assessment tool is designed to provide a useful indicative view of AI governance readiness. Users should be aware of the following limitations when interpreting results:
Self-Assessment Bias
The tool relies on the respondent's self-assessment of compliance status. Self-assessments are inherently subject to optimism bias, knowledge gaps, and inconsistent interpretation of requirements. Results should be validated through independent review or internal audit where material decisions depend on the assessment outcome.
Point-in-Time Snapshot
The assessment captures the institution's governance posture at a single point in time. AI governance is a dynamic and rapidly evolving field. Regulatory expectations, industry practices, and the institution's own AI usage will change over time. Periodic re-assessment is recommended, particularly following material changes in AI deployment, regulatory updates, or organisational restructuring.
Consultation-Stage Requirements
A significant portion of the requirement register (59 of 193 requirements) derives from instruments currently in public consultation (P017 and P004). Final published guidelines may differ materially from consultation drafts. The assessment will be updated as consultation documents are finalised, but users should note that current results reflect proposed rather than final regulatory expectations for these requirements.
Scope Limitations
The assessment is scoped to Singapore-regulated financial institutions, with a focus on private banking and wealth management. Institutions operating across multiple jurisdictions will need to consider additional regulatory requirements not covered by this tool. The assessment does not cover sector-specific requirements outside financial services (e.g., healthcare AI, autonomous vehicles) or Singapore-specific but non-financial-services AI regulations that may apply.
Not Legal or Regulatory Advice
This tool does not constitute legal, regulatory, or compliance advice. Assessment results should not be used as a substitute for professional legal counsel or formal regulatory gap analysis. The requirement register represents the author's interpretation of source instruments and may not reflect the views of MAS, PDPC, IMDA, or any other regulatory body referenced in the methodology.
No Data Retention
The tool processes all assessment data client-side and does not retain or transmit individual assessment responses to any server. PDF reports are generated in the browser. While this protects respondent confidentiality, it means there is no mechanism for longitudinal tracking, benchmarking against peers, or recovery of lost assessment data. Institutions should retain their PDF reports for internal records.