Regulators Sound Alarm on AI Agent Control Gaps: Governance Takes Center Stage!

Published 10 hours ago · 4 minute read
Uche Emeka

Australia's financial regulator, the Australian Prudential Regulation Authority (APRA), has issued a stern warning to financial firms regarding the inadequacies in their governance and assurance practices for AI agents. This comes as banks and superannuation trustees rapidly expand their use of artificial intelligence in both internal operations and customer-facing services.

APRA conducted a targeted review of selected large regulated entities in late 2025 to evaluate their AI adoption and associated prudential risks. The review revealed that while AI was being utilized across all entities examined, there was significant variation in the maturity of their risk management and operational resilience frameworks. Boards demonstrated a strong interest in leveraging AI for productivity gains and enhancing customer experience; however, APRA found that many were still in the nascent stages of establishing robust management protocols for AI-related risks.

Key concerns raised by the regulator included an over-reliance on vendor presentations and summaries, indicating that boards were not consistently scrutinizing risks such as unpredictable model behavior and the potential impact of AI failures on critical operations. APRA underscored the necessity for boards to cultivate a deeper understanding of AI to effectively set strategy and ensure coherent oversight. It emphasized that AI strategies must align with an institution’s risk appetite and incorporate comprehensive monitoring mechanisms, along with clearly defined procedures for addressing errors.

The review highlighted diverse applications of AI within regulated entities, including software engineering, claims triage, and loan application processing. Additional use cases cited involved fraud and scam disruption, as well as customer interaction. APRA noted a problematic trend where some entities were treating AI risk with the same frameworks as other technologies, an approach deemed insufficient as it fails to account for the unique characteristics of AI models, such as their emergent behavior and inherent biases. Specific gaps identified included shortcomings in model behavior monitoring, change management processes, and decommissioning protocols. The authority stressed the critical need for comprehensive inventories of AI tools and the assignment of named-person ownership for AI instances. Furthermore, it reiterated the essential requirement for human involvement in all high-risk decision-making processes.
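APRA's call for comprehensive AI inventories, named-person ownership, and human involvement in high-risk decisions can be pictured as a simple record-keeping check. The sketch below is purely illustrative: APRA prescribes no schema, and the field names (`owner`, `risk_tier`, `requires_human_review`) and example entries are assumptions, not taken from any regulatory guidance.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str                   # e.g. "claims-triage-model"
    owner: str                  # named individual accountable for this AI instance
    use_case: str               # e.g. "claims triage", "loan processing"
    risk_tier: str              # "low" | "medium" | "high"
    requires_human_review: bool # human-in-the-loop flag

def high_risk_without_human_review(inventory):
    """Flag records that violate the human-in-the-loop expectation."""
    return [r for r in inventory
            if r.risk_tier == "high" and not r.requires_human_review]

inventory = [
    AIToolRecord("claims-triage-model", "J. Smith", "claims triage", "high", True),
    AIToolRecord("loan-scoring-agent", "A. Lee", "loan processing", "high", False),
]
print([r.name for r in high_risk_without_human_review(inventory)])
# prints ['loan-scoring-agent']
```

An inventory like this makes the gaps APRA describes auditable: every AI instance has a named owner, and any high-risk deployment lacking human review surfaces immediately.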

Cybersecurity emerged as another significant area of concern. APRA explained that the adoption of AI is fundamentally altering the threat landscape by introducing new attack vectors, such as prompt injection and insecure integrations. In some instances, identity and access management practices had not been adequately adjusted to accommodate non-human identities like AI agents. The accelerating volume of AI-assisted software development was also placing considerable pressure on existing change and release controls. To mitigate these risks, APRA advised entities to implement stringent controls on agentic and autonomous workflows, encompassing privileged access management, configuration, and patching. It also called for security testing of AI-generated code.
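The privileged-access controls APRA recommends for agentic workflows amount to a least-privilege gate on what a non-human identity may do. The sketch below is a minimal illustration of that idea; the scope names and the `authorize` function are invented for this example and do not reflect any APRA requirement or real product API.

```python
# Routine read scopes an agent may use unattended; anything privileged
# requires an explicit human approval before it is exercised.
ALLOWED_SCOPES = {"read:accounts", "read:documents"}
PRIVILEGED_SCOPES = {"write:payments", "admin:config"}

def authorize(agent_id: str, requested_scope: str,
              human_approved: bool = False) -> bool:
    """Deny by default; grant routine reads; gate privileged scopes
    behind human approval for non-human identities."""
    if requested_scope in ALLOWED_SCOPES:
        return True
    if requested_scope in PRIVILEGED_SCOPES:
        return human_approved
    return False  # anything not explicitly listed is denied

assert authorize("agent-7", "read:accounts")
assert not authorize("agent-7", "write:payments")
assert authorize("agent-7", "write:payments", human_approved=True)
```

The deny-by-default shape matters here: an agent compromised via prompt injection can only reach the scopes it was explicitly granted, which is the containment property APRA's guidance is pushing toward.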

Vendor dependency was another critical issue. APRA observed that some institutions had become overly reliant on a single provider for many of their AI instances, yet only a few could demonstrate a clear exit plan or substitution strategy for their AI suppliers. The regulator also warned that AI could be present in upstream dependencies, often without the entities' awareness, further complicating risk management.

These regulatory warnings coincide with broader industry efforts to address AI authentication challenges. The FIDO Alliance, for instance, has established an Agentic Authentication Technical Working Group dedicated to developing specifications for agent-initiated commerce. FIDO recognized that many existing authentication and authorization models were designed for human interaction, making them unsuitable for delegated actions performed by software agents. The group emphasizes the need for service providers to verify the authorization source and conditions for actions. Vendors, including Google with its Agent Payments Protocol and Mastercard with its Verifiable Intent framework, have presented solutions to FIDO for review.
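The working group's point that service providers must verify the authorization source behind an agent's action can be illustrated with a signed delegation, or "mandate". The sketch below is emphatically not the FIDO draft, Google's Agent Payments Protocol, or Mastercard's Verifiable Intent; it uses a plain HMAC and invented field names just to show the general shape: the user signs a statement naming the agent, the permitted action, and its limits, and the service checks the request against it.

```python
import hmac, hashlib, json

SECRET = b"user-device-key"  # stand-in for a real per-user signing key

def sign_mandate(mandate: dict) -> str:
    """User side: sign a delegation statement over canonical JSON."""
    payload = json.dumps(mandate, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_action(mandate: dict, signature: str,
                  agent_id: str, action: str, amount: float) -> bool:
    """Service side: is the signature authentic, and does the requested
    action fall within what the user actually delegated?"""
    expected = sign_mandate(mandate)
    return (hmac.compare_digest(expected, signature)
            and mandate["agent"] == agent_id
            and mandate["action"] == action
            and amount <= mandate["max_amount"])

mandate = {"agent": "shopping-agent-1", "action": "purchase", "max_amount": 50.0}
sig = sign_mandate(mandate)
assert verify_action(mandate, sig, "shopping-agent-1", "purchase", 25.0)
assert not verify_action(mandate, sig, "shopping-agent-1", "purchase", 500.0)
```

The key property, which the real specifications pursue with stronger cryptography, is that the service can check both who authorized the agent and under what conditions, rather than trusting the agent's own claim.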

Complementing these efforts, the Center for Internet Security (CIS), a non-profit funded predominantly by the US Department of Homeland Security, has released AI security companion guides. These guides map CIS Controls v8.1 to various AI environments, including large language models (LLMs), AI agents, and Model Context Protocol (MCP) environments. The LLM guide provides detailed recommendations on prompt and sensitive-data issues, while the MCP guide focuses on ensuring secure access for software tools, non-human identities, and network interactions, reflecting a holistic approach to securing the evolving AI ecosystem.
