How Agentic AI Will Reshape Payments (IMF)
An IMF paper by Sonja Davidovic and Hervé Tourpe argues that agentic artificial intelligence (AI) is poised to fundamentally alter payment system architecture by shifting transaction initiation from human instructions to machine decisions. The note applies a three-layer framework of intent, authorization, and settlement to identify where agentic AI can add efficiency while preserving control and transaction finality. The paper finds that the core tension between AI’s probabilistic decision-making and the deterministic requirements of payment infrastructure raises unresolved questions about authorization integrity, compliance traceability, and systemic risk concentration. Governance and institutional design — rather than technology alone — are identified as the primary determinants of whether outcomes are stabilizing or destabilizing. The principal open question is how existing legal and regulatory frameworks, which presuppose human-initiated transactions, can be adapted to assign accountability when autonomous agents are the transacting parties. [IMF]

I think this paper is significant because it is the first of its kind that I've come across from an official institution, focusing on the risks posed by wide-scale agentic AI usage. However, a few papers come close, albeit not from official institutions:
AI Agents in Action: Foundations for Evaluation and Governance (WEF)
In November 2025 the World Economic Forum (WEF) set out a framework for deploying large language model-based AI agents in organizations, emphasizing structured evaluation and progressive governance. The paper argues that agent architectures now combine application, orchestration and reasoning layers, connected via interoperability protocols that enable powerful multi-agent workflows but expand the cyber attack surface. It proposes classifying agents by function, role, predictability, autonomy, authority and operational context to link technical design choices to risk assessment and safeguards, supporting proportionate controls and clearer allocation of responsibility between providers and adopters. [WEF]
Model AI Governance Framework for Agentic AI (IMDA)
Singapore’s Infocomm Media Development Authority (IMDA) published a model governance framework for agentic AI that prescribes how organizations should bound risks, assign accountability, and embed controls across the lifecycle of autonomous language‑model‑based agents. It defines agents as multi‑step planners with tools, memory, and protocols, emphasizes risk from real‑world action, sensitive data access, and complex multi‑agent dynamics, and analyzes new system‑level failure modes. The framework then articulates four pillars—upfront risk/use‑case assessment, human accountability and oversight, lifecycle technical and security controls, and end‑user responsibility through transparency and training—as a reference architecture for enterprise deployment and prospective regulation. It positions these as adaptable building blocks rather than fixed rules, and flags open questions around agent identity, dynamic authorization, evaluation methods, and cross‑border accountability, inviting feedback and case studies to refine future iterations. [IMDA]
Systemic Risks Associated with Agentic AI (ACM)
The Association for Computing Machinery (ACM) published a policy brief arguing that current European Union (EU) AI Act provisions only partially address systemic risks from highly autonomous agentic AI systems. It describes agents capable of self-directed, tool-using operation and multi-agent coordination, stressing risks of loss of human control, opacity, economic disruption, and malicious uses in cyberattacks, disinformation, and market manipulation. The brief proposes shifting from static, product-focused rules to dynamic governance with lifecycle monitoring, multi-agent testing, strengthened cybersecurity, and potential new categories for macroeconomic and systemic risks. It highlights unresolved questions on alignment oversight, liability allocation, and integrating fiscal, competition, and labor policy with AI regulation. [ACM]
Granted, the BIS has published a paper on a very specific payments-focused agentic AI use case, but I'm looking for papers that focus on policy issues around agentic AI governance and risk mitigation.
FYI I produce a monthly digest of digital fiat currency (DFC) developments exclusively for the official sector (e.g., central banks, ministries of finance and international financial institutions such as the BIS, IMF, OECD and World Bank) plus academics and firms that are active in the DFC space (commercial banks, technology providers, consultants, etc.). (DFCs include central bank digital currency (CBDC), stablecoins and tokenized deposits.) It goes out via email on the first business day of every month, and if you're interested in being on the mailing list, please email me at john@kiffmeister.com.
