FINRA flags rise of agentic AI, seeks member firms’ feedback

Regulator outlines early use cases for autonomous AI tools, urges governance frameworks while it continues to shape guidance around emerging technology.

The Financial Industry Regulatory Authority is sharpening its focus on “agentic” artificial intelligence as more broker-dealers and RIAs experiment with tools that can plan and execute tasks on their own across multiple systems.

In a post published Tuesday, Greg Ruppert, executive vice president and chief regulatory operations officer at FINRA, sketches out how firms are starting to deploy AI agents and invites member firms to share how they are approaching the technology.

The move builds on FINRA’s Generative AI Member Firm Use Portfolio and its 2026 Annual Regulatory Oversight Report, which flagged generative AI and cyber-enabled fraud among the regulator’s priorities for the coming year.

Ruppert defines AI agents as systems that can operate autonomously rather than simply responding to prompts within fixed guardrails. “AI agents are systems or programs that can perform and complete tasks autonomously, without human intervention,” he wrote, noting that they can plan, make decisions, and take actions without relying on traditional rules-based programming.

FINRA’s latest observations echo concerns raised in its oversight report about the shifting risk profile as firms move from pilot projects to production use. While many wealth and advisory firms have so far confined generative AI to internal tasks such as document summarization or policy lookups, AI agents are being tested for more complex, higher-stakes work.

Ruppert points to several categories of AI agents that are beginning to surface across member firms. Conversational agents use natural language to interact with staff or clients while pulling data from multiple internal systems. Software development agents can write, test, and debug code, and even manage infrastructure tasks with limited human review.

On the risk and surveillance side, firms are exploring agents that can run fraud detection workflows, monitor trading and anti–money laundering alerts, and escalate potential issues faster than traditional processes. Others are looking at agents that orchestrate end-to-end business workflows or even generate trading strategies and execute orders with varying levels of human oversight.

That breadth of use cases is forcing compliance and supervisory teams to think beyond familiar automation questions. Ruppert warns that AI agents may operate with levels of autonomy and opacity that do not fit existing control frameworks.

“Some potential risks associated with AI agents were shared in this year’s Annual Regulatory Oversight Report,” he wrote, citing issues such as agents acting beyond a user’s intended authority, difficulty tracing multi-step decisions, and inadvertent exposure or misuse of sensitive data.

The same generative AI pitfalls that have preoccupied compliance officers – including bias, hallucinations, and privacy concerns – carry over into agentic deployments, with the added complication that misaligned reward systems or shallow domain knowledge can prompt agents to optimize for outcomes that run counter to investor protection. For advisory firms, that raises questions about how far to let agents influence recommendations, trading, or client interactions.

Giving agents unfettered access to systems and data silos across an organization also creates a new category of cyber risk: so-called “prompt injection” attacks, in which external actors plant malicious instructions in content an agent processes, could lead to catastrophic leaks of sensitive information.

“I think prompts are going to be the new malware,” CrowdStrike President Michael Sentonas recently told Barron’s in an interview. “These agents have access to systems, they have access to calendars, they have access to email, they have access to data storage … and that scares the living daylights out of me.”

Ruppert suggests that firms with more mature governance and testing around generative models will be better positioned as they experiment with agents. Robust supervision, clear limits on scope and authority, and strong logging and audit capabilities are emerging as baseline expectations, even as formal rule changes have yet to appear.

FINRA is framing the current initiative as a two-way conversation rather than a top-down directive.

“We welcome feedback from firms regarding agentic AI implementations in your organization and encourage you to proactively engage with FINRA as your strategies develop, as noted in Regulatory Notice 24-09,” Ruppert wrote, adding that the ongoing dialogue is meant to shape future guidance and support “responsible deployment of these technologies.”


