CBUAE sets AI governance expectations as generative AI in UAE finance surges 166% - boards stay accountable even for vendor-supplied AI.
- The CBUAE published non-binding AI governance guidance on 23 February 2026, setting clear supervisory expectations for all licensed financial institutions across the UAE.
- Five core principles underpin the framework: governance and accountability, fairness and non-discrimination, transparency and explainability, effective human oversight, and data management and privacy.
- A DFSA survey found generative AI adoption among DIFC financial firms surged 166% in a single year, rising from 18% to 48% of institutions between 2024 and 2025.
- Licensed financial institutions remain fully accountable for all AI outcomes, including where AI systems are sourced from third-party vendors or cloud providers.
- Three human oversight models are defined, with fully autonomous AI limited to low-risk, non-material processes - and meaningful human oversight required for high-impact consumer decisions.
- Financial advisors using AI for client profiling, risk assessment, or product recommendations should review their governance frameworks against the new expectations as a priority.
Machine Learning Oversight Enters the Regulatory Mainstream in UAE Finance
The rapid deployment of machine learning across UAE financial services has moved from a technology initiative to a board-level governance responsibility. On 23 February 2026, the Central Bank of the UAE (CBUAE) published formal AI governance guidance for licensed financial institutions (LFIs) operating across the UAE. Although non-binding in legal force, the guidance establishes clear regulatory expectations around AI governance, consumer protection, transparency, and accountability.
That supervisory shift reflects urgent market conditions. The Dubai Financial Services Authority (DFSA) found that generative AI adoption surged 166% among DIFC financial firms in a single year, reaching 48% of institutions by mid-2025. Against that backdrop, the CBUAE guidance aligns with the UAE National AI Strategy 2031, signalling that responsible AI adoption is a cross-sector regulatory priority for all CBUAE-supervised institutions.
What the CBUAE Guidance Sets Out
In scope is any AI or machine learning system deployed by an LFI that processes data, generates predictions, or influences decisions affecting customers. The guidance sets out five core principles: governance and accountability, fairness and non-discrimination, transparency and explainability, effective human oversight, and data management and privacy. Together, these form a proportionate, risk-based framework that scales governance requirements to the complexity and potential consumer impact of each AI deployment.
On governance and accountability, the guidance allocates clear ownership to boards and senior management. Institutions must document their AI governance frameworks, integrate AI risk into enterprise-wide risk management, and maintain a comprehensive inventory of all AI models in use. That board-level accountability requirement marks a significant shift: AI governance can no longer be delegated entirely to technology or data science divisions.
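The guidance does not prescribe what an AI model inventory must contain, but a minimal sketch helps show the kind of record a governance framework would keep. The fields below are illustrative assumptions, not requirements drawn from the guidance text:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIModelRecord:
    """Illustrative inventory entry; the CBUAE guidance does not prescribe specific fields."""
    model_id: str
    business_use: str              # e.g. "retail credit scoring"
    accountable_owner: str         # named senior manager, per board-level accountability
    vendor: str | None             # None if developed in-house
    risk_tier: str                 # internal classification, e.g. "high-impact"
    oversight_model: str           # "in_the_loop" / "on_the_loop" / "out_of_the_loop"
    last_bias_test: date | None    # tested at least annually and after material changes

# A firm-wide inventory is then a registry of such records, reviewed
# alongside the enterprise risk-management framework.
inventory: list[AIModelRecord] = []
```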
The fairness and non-discrimination principle requires that AI systems do not produce discriminatory or manipulative outcomes, whether direct or indirect. Training data must accurately represent the populations being served, and bias testing must be conducted at least annually and after any material model change. CBUAE Governor Khaled Mohamed Balama described the guidance as building "a clear framework for the responsible use of AI and machine learning" that "emphasises human oversight and data protection requirements."
How Human Oversight Must Work
Central to the guidance is its framework for human oversight, particularly for decisions that carry significant implications for consumers. The CBUAE defines three oversight models, each calibrated to the risk level of the AI deployment. At the highest-oversight end, the "human in the loop" model requires a human decision-maker to approve or reject every AI recommendation before it takes effect.
The "human on the loop" model allows AI to operate autonomously on routine tasks, with human monitors able to review outcomes and intervene when anomalies arise. By contrast, "human out of the loop" - where AI operates without direct human involvement - is expected to be limited to low-risk, non-material processes only. Fully automated credit decisions or insurance approvals without any possibility of human review are unlikely to meet supervisory expectations.
Also significant is the concept of "high-impact decisions" - any AI-driven determination that materially affects a customer's access to financial products or services. Credit approvals, pricing decisions, and insurance underwriting all fall within this definition. For these applications, meaningful human oversight is required, and customers must have access to review mechanisms and explanations of how AI influenced the outcome.
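To make the three oversight models concrete, the sketch below maps a use case to an oversight level. The mapping is one illustrative reading of the guidance, not a rule the CBUAE prescribes, and the `high_impact` and `material` flags are assumed internal classifications:

```python
from enum import Enum

class Oversight(Enum):
    IN_THE_LOOP = "human approves every AI recommendation before it takes effect"
    ON_THE_LOOP = "AI acts autonomously; humans monitor and can intervene"
    OUT_OF_THE_LOOP = "no direct human involvement (low-risk, non-material only)"

def required_oversight(high_impact: bool, material: bool) -> Oversight:
    """Assumed mapping: high-impact consumer decisions keep a human in the
    loop, other material processes keep a human on the loop, and only
    low-risk, non-material processes run without direct involvement."""
    if high_impact:
        return Oversight.IN_THE_LOOP
    if material:
        return Oversight.ON_THE_LOOP
    return Oversight.OUT_OF_THE_LOOP

# A credit approval is a high-impact decision under the guidance's definition.
print(required_oversight(high_impact=True, material=True))    # Oversight.IN_THE_LOOP
# Internal, non-material document routing could plausibly run unattended.
print(required_oversight(high_impact=False, material=False))  # Oversight.OUT_OF_THE_LOOP
```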
Third-Party AI Systems and the Accountability Principle
One of the most consequential aspects of the guidance is its treatment of third-party AI systems. Licensed financial institutions remain fully accountable for all AI outcomes, even when systems are built, maintained, or hosted by external vendors. This mirrors the CBUAE's broader consumer protection posture, which has also recently tightened risk profiling requirements for financial advisors. Accountability, in the CBUAE's view, cannot be outsourced.
In practice, institutions must conduct thorough due diligence on any AI vendor's governance, security, and data protection practices before deployment. Contractual arrangements should cover performance requirements, audit and information rights, data protection obligations, and the ability to suspend or terminate systems when problems arise. Existing vendor contracts should also be reviewed against these expectations.
Pinsent Masons' fintech law expert Marie Chowdhry noted that the guidance makes AI use in financial services "fundamentally a consumer protection and conduct risk issue." She added that AI-driven decisions are expected to meet "the same standards of fairness and acting in consumers' best interests as traditional processes." That principle applies equally to AI sourced from third-party vendors.
Surging AI Adoption Underscores the Governance Gap
Recent survey data illustrates how rapidly AI is being deployed across UAE financial services. The DFSA's 2025 AI Survey - capturing responses from 661 authorised DIFC firms, representing an 88% participation rate - found that 52% of firms are actively using AI, up from 33% a year earlier. Generative AI drove the sharpest growth, surging 166% to reach 48% of firms over the same period.
Governance has not kept pace with deployment. The same survey found that 21% of firms lacked clear accountability or oversight mechanisms, even where AI is critical to their operations. Of greater concern, 26% of firms using AI in business-critical areas had no governance framework in place at all. The CBUAE guidance is designed to close this gap before adoption accelerates further.
Regulatory uncertainty has also slowed structured deployment, with more than 38% of DFSA firms citing unclear regulatory expectations as their biggest adoption barrier. By setting proportionate expectations rather than prescriptive rules, the CBUAE has adopted a measured approach. Legal commentators have described this as a risk-based regulatory model well-suited to the UAE's diverse financial sector.
What Financial Advisors and Compliance Teams Need to Review Now
For financial advisors and compliance teams, the guidance creates immediate review obligations. Any firm using AI tools for client profiling, risk assessment, suitability analysis, or product recommendations should audit its governance framework against the five principles the guidance sets out. In particular, advisors should assess whether their AI processes include adequate transparency disclosures, meaningful human oversight for client-facing decisions, and documented bias testing.
AI-assisted know-your-customer (KYC) tools rank among the highest-risk applications under the guidance. Advisors using third-party KYC or client profiling platforms must verify that vendor governance aligns with CBUAE expectations and that contracts include appropriate audit and oversight rights. The UAE's 2026 capital markets regulatory overhaul reinforces the broader message: across the sector, governance is now a core compliance obligation for every licensed advisory firm.
The bias testing requirement is also directly relevant to advisors using AI for portfolio analysis or product recommendations. The guidance requires at least annual testing - and retesting after any material model change - using representative training data. For firms relying on AI-assisted research or automated recommendation tools, this means ensuring documented validation and review by an accountable individual within the governance chain.
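The guidance does not prescribe a particular bias metric. As a minimal sketch, assuming a firm measures approval-rate disparity across customer segments on a representative test set, a documented check might look like this (the 5% tolerance is an illustrative assumption a firm would calibrate and justify itself):

```python
def approval_rate_gap(outcomes: dict[str, list[bool]]) -> float:
    """Return the largest gap in approval rates across groups.

    `outcomes` maps a group label (e.g. a customer segment) to the
    approve/decline decisions the model produced on a representative test set.
    """
    rates = {group: sum(decisions) / len(decisions)
             for group, decisions in outcomes.items() if decisions}
    return max(rates.values()) - min(rates.values())

MAX_GAP = 0.05  # assumed internal tolerance; not set by the CBUAE guidance

test_outcomes = {
    "segment_a": [True, True, False, True, True],   # 80% approved
    "segment_b": [True, False, False, True, True],  # 60% approved
}
gap = approval_rate_gap(test_outcomes)
print(f"approval-rate gap: {gap:.0%}",
      "FLAG FOR REVIEW" if gap > MAX_GAP else "within tolerance")
```

A run like this, repeated at least annually and after any material model change, with the result signed off by the accountable owner, is the kind of documented validation the guidance points towards.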
What Clients Are Asking Their Advisors
Is the CBUAE AI guidance legally binding on UAE financial institutions?
The guidance is non-binding in legal force, but it carries significant regulatory weight. The CBUAE is expected to apply its principles during supervisory reviews and prudential assessments. Institutions departing materially from the guidance risk elevated scrutiny during regulatory examinations.
Which UAE financial institutions does the CBUAE AI guidance apply to?
The guidance applies to all licensed financial institutions (LFIs) supervised by the CBUAE, including banks, insurance companies, finance companies, exchange houses, and payment service providers in mainland UAE. It does not cover firms regulated by the DFSA in the DIFC or the FSRA in ADGM, which operate under their own regulatory frameworks.
Do UAE financial firms remain accountable if a third-party AI system causes harm?
Yes. The guidance makes clear that accountability cannot be outsourced. If a vendor-supplied AI system produces unfair or non-compliant outcomes, the licensed financial institution deploying it remains fully responsible - not the vendor. Institutions should conduct due diligence on vendor governance and ensure contracts include appropriate audit and oversight rights.
How does the CBUAE define human oversight of AI in financial services?
The guidance defines three oversight models: "human in the loop" requires human approval of each AI decision; "human on the loop" allows autonomous AI operation with human monitoring and intervention capability; and "human out of the loop" is limited to low-risk, non-material processes only. High-impact consumer decisions must retain meaningful human involvement.
Further Reading
CBUAE Responsible AI Guidance: Pinsent Masons Analysis
DFSA Artificial Intelligence Survey 2025
CBUAE Guidance Note: Full Text on the CBUAE Rulebook
Dubai's Independent Financial Advisory Boom: What Expats Need to Know in 2026