
What Small Broker-Dealers Should Prepare For

FINRA’s 2026 Annual Regulatory Oversight Report includes a dedicated generative AI section for the first time. It doesn’t introduce new rules, but it clarifies what existing rules — particularly Rule 3110 (Supervision) — mean for firms using AI tools. Small broker-dealers need written policies, audit trails, vendor oversight, and procedures to catch AI-generated inaccuracies. Here’s what to prioritise.

What Did FINRA Say About AI in Its 2026 Oversight Report?

FINRA’s 2026 Annual Regulatory Oversight Report added generative AI as a standalone topic for the first time. The report doesn’t create new rules. Instead, it tells firms how existing supervisory obligations apply to AI tools — and signals what examiners will be asking about.

The core message: if your firm uses generative AI, your supervisory system under Rule 3110 must cover that use. This includes AI tools the firm deploys officially and tools employees may be using independently.

FINRA specifically states that firms should establish “clear policies and procedures to develop, implement, use and monitor GenAI, while maintaining comprehensive documentation throughout.”

For small broker-dealers without dedicated compliance teams, this creates a clear to-do list.

What Does FINRA Rule 3110 Require for AI Use?

FINRA Rule 3110 requires every broker-dealer to have a supervisory system that is “reasonably designed” and “tailored to its business.” This isn’t new. What’s new is that FINRA is explicitly saying AI use falls within the scope of what that supervisory system must cover.

In practice, this means three things for small firms:

1. Written procedures for AI.

Your written supervisory procedures (WSPs) need to address how AI tools are selected, approved, used, and monitored. If your WSPs don’t mention AI, there’s a gap.

2. Designated supervisors.

Someone at the firm needs to be responsible for overseeing AI use. At a small broker-dealer, this is likely the same principal who handles other compliance oversight.

3. Documentation.

You need records showing what AI tools are in use, how they’re being used, and what review processes are in place. “We trust our advisors to be careful” isn’t documentation.

What Specific AI Risks Does FINRA Highlight?

The 2026 report calls out several risks that examiners may focus on:

Hallucinations

FINRA explicitly defines AI hallucinations as instances where the model “generates information that is inaccurate or misleading, yet is presented as factual information.” The report notes that this can lead to “misrepresentation or incorrect interpretation of rules, regulations or policies or inaccurate client or market data” that “can impact decision making.”

What this means for your firm: If an advisor uses AI to summarise a regulatory requirement or analyse client data, and the AI produces inaccurate output that influences a client interaction, the firm is responsible. You need procedures to catch these inaccuracies before they reach clients.

Third-Party Vendor Risks

FINRA references Regulatory Notice 21-29, which addresses supervisory obligations for outsourced services. AI tools that handle client data or produce client-facing content are third-party vendors subject to this guidance.

What this means for your firm: You should be conducting due diligence on your AI vendors — understanding how they handle data, what security controls they have, and whether they’re designed for regulated environments.

Data Privacy and Security

The report flags risks around client data being processed by AI tools. This includes data being used for model training, stored insecurely, or accessed by unauthorised parties.

What this means for your firm: Your AI tools need data isolation guarantees. Client data entered into AI should not be used to train models or shared with other users.

What Should Small Broker-Dealers Do Right Now?

Here’s a prioritised action plan, designed for firms with limited compliance resources.

Priority 1: Update Your Written Supervisory Procedures (This Month)

Add an AI section to your WSPs covering:

  • List of approved AI tools
  • Prohibited AI uses (e.g., no client PII in unapproved tools)
  • Review requirements for AI-generated content
  • Designated supervisor for AI oversight

This doesn’t need to be a 50-page document. A clear, specific two-page addition to your existing WSPs demonstrates supervisory intent.

Priority 2: Audit Current AI Use (This Month)

Ask every employee what AI tools they’re using for firm business. Include consumer tools like ChatGPT, Gemini, and Copilot. You need an accurate picture before you can supervise effectively.
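
If it helps to keep that picture organised, the sketch below shows one hypothetical way to record the audit as a structured inventory. The field names and CSV layout are illustrative, not a FINRA requirement; any format works as long as it captures who uses which tool, for what, and with what data.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class AIToolRecord:
    """One row in the firm's AI-use inventory (hypothetical field names)."""
    tool_name: str       # e.g. "ChatGPT (consumer)", "Copilot"
    used_by: str         # employee or team
    use_case: str        # what the tool is used for
    data_entered: str    # e.g. "no client data", "client PII"
    firm_approved: bool  # is the tool on the approved list?

records = [
    AIToolRecord("ChatGPT (consumer)", "J. Smith", "Drafting marketing copy", "no client data", False),
    AIToolRecord("Governed platform", "Ops team", "Summarising meeting notes", "client names only", True),
]

# Write the inventory to a file that can be produced on request during an exam.
with open("ai_tool_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(records[0]).keys()))
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)
```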

Priority 3: Choose a Governed AI Platform (This Quarter)

Replace ad-hoc consumer AI use with a platform designed for regulated businesses. Look for:

  • Audit trails of all AI interactions
  • Data isolation (client data never used for model training)
  • Configurable guardrails (control what AI can and can’t do)
  • Human review workflows before client-facing output

Governed AI platforms like LaunchLemonade are built for exactly this use case — giving firms AI tools that work within regulatory requirements, with full logging and review controls.
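
To make "audit trail" concrete, here is a hypothetical shape a single logged AI interaction could take. The field names are illustrative only, not any particular platform's schema; the point is that every prompt, response, reviewer, and approval decision is recorded and can be produced on request.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit-trail entry for one AI interaction.
# Field names are illustrative, not any vendor's actual schema.
audit_entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "user": "advisor_042",
    "tool": "governed-ai-platform",
    "prompt": "Summarise the suitability notes for client meeting prep",
    "response_id": "resp_8f3c",       # reference to the stored output
    "contains_client_data": True,
    "reviewed_by": "principal_007",   # designated supervisor
    "review_outcome": "approved",     # approved / edited / rejected
    "sent_to_client": False,
}

# Append-only log that can be retrieved during a FINRA exam.
with open("ai_audit_log.jsonl", "a") as log:
    log.write(json.dumps(audit_entry) + "\n")
```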

Priority 4: Train Your Team (This Quarter)

Hold a training session covering:

  • What tools are approved and what’s prohibited
  • How to review AI-generated content for accuracy
  • How to report potential AI-related issues
  • What the consequences of policy violations are

Document the training — date, attendees, topics covered. Examiners look for this.

Priority 5: Establish Ongoing Monitoring (Ongoing)

Set a quarterly review cadence:

  • Are employees following the AI use policy?
  • Have any new AI tools been adopted without approval?
  • Are review workflows being followed?
  • Have there been any incidents involving AI-generated content?

Document each review cycle. This demonstrates ongoing supervision, not just a one-time compliance exercise.
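
One of those checks (spotting tools adopted without approval) can be partly automated if you keep the inventory from Priority 2 up to date. A minimal sketch, assuming the hypothetical ai_tool_inventory.csv format shown earlier and an approved-tools list that mirrors your WSPs:

```python
import csv

APPROVED_TOOLS = {"Governed platform"}  # mirror the approved list in your WSPs

def unapproved_tools(inventory_path: str) -> set[str]:
    """Return tools that appear in the inventory but not on the approved list."""
    with open(inventory_path, newline="") as f:
        return {row["tool_name"] for row in csv.DictReader(f)} - APPROVED_TOOLS

# Flag anything adopted since the last review so it can be documented and escalated.
for tool in sorted(unapproved_tools("ai_tool_inventory.csv")):
    print(f"Not on approved list: {tool}")
```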

FINRA AI Readiness: Self-Assessment for Small Firms

For each requirement below, mark it Ready, In Progress, or Not Started:

  • Written supervisory procedures cover AI use
  • Approved AI tools list documented
  • Prohibited AI uses clearly defined
  • AI vendor due diligence completed
  • Human review workflow for AI-generated client content
  • Audit trail for AI interactions available
  • Employee AI training completed and documented
  • Quarterly review process established
  • Data handling requirements for AI documented
  • Designated AI supervisor identified

Scoring: 8–10 items ready = well-positioned. 5–7 = gaps to address this quarter. Under 5 = prioritise immediately.
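
If you want to track this over time, the scoring bands translate directly into a few lines of code. This is purely a convenience sketch; the item names are shortened versions of the checklist above.

```python
# Status of each checklist item: True = ready, False = in progress or not started.
checklist = {
    "WSPs cover AI use": True,
    "Approved AI tools list documented": True,
    "Prohibited AI uses clearly defined": False,
    "AI vendor due diligence completed": False,
    "Human review workflow for client content": True,
    "Audit trail for AI interactions": False,
    "Employee AI training documented": False,
    "Quarterly review process established": False,
    "Data handling requirements documented": False,
    "Designated AI supervisor identified": True,
}

ready = sum(checklist.values())
if ready >= 8:
    band = "well-positioned"
elif ready >= 5:
    band = "gaps to address this quarter"
else:
    band = "prioritise immediately"

print(f"{ready}/10 items ready: {band}")
```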

What Happens If You Don’t Prepare?

FINRA examiners are asking about AI. When they visit your firm, expect questions like:

  • “What AI tools does your firm use?”
  • “Do you have written procedures governing AI use?”
  • “How do you ensure AI-generated client communications are accurate?”
  • “What due diligence did you conduct on your AI vendors?”

If you can’t answer these clearly and point to documentation, you have a finding. Findings lead to deficiency letters, potential fines, and reputational damage — outcomes no small firm can afford.

The firms that prepare now will spend 10–20 hours getting their house in order. The firms that don’t will spend significantly more when an examiner flags the gap.

Frequently Asked Questions

Are FINRA’s 2026 AI provisions new rules or guidance?

They’re guidance, not new rules. The 2026 Annual Regulatory Oversight Report explains how existing rules — particularly Rule 3110 (Supervision) — apply to generative AI use. It also references Regulatory Notices 21-29, 24-09, and 24-10. While it doesn’t create new legal requirements, it signals what examiners will focus on, which has practical force.

Does this apply to RIA firms or only broker-dealers?

FINRA’s oversight applies to broker-dealers. However, SEC-registered investment advisers face parallel requirements under SEC Rule 206(4)-7, which requires written compliance policies and procedures. Many firms are dually registered and subject to both. The practical steps — written policies, vendor due diligence, review workflows — apply regardless of registration type.

How detailed do our written supervisory procedures need to be?

Detailed enough that an examiner can understand how your firm supervises AI use. FINRA looks for procedures that are “tailored to [the firm’s] business” — not generic boilerplate. For a small firm, this might be 2–3 pages covering approved tools, prohibited uses, review workflows, and training requirements. Specificity matters more than length.

What if our firm doesn’t use AI at all?

If no employee uses any AI tool for any firm business, you may not need AI-specific procedures today. However, FINRA’s report acknowledges that AI adoption is growing rapidly. Having a simple policy stating that AI tools are not approved for firm use — and that any future adoption requires compliance review — is a low-effort safeguard.

Can we use the same AI governance framework as larger firms?

The principles are the same — written policies, vendor oversight, review workflows, documentation. But the implementation should be proportional to your firm’s size and complexity. A 10-person RIA doesn’t need the same infrastructure as a major wirehouse. Focus on clear policies, approved tools, and documented review processes rather than enterprise-grade compliance platforms.

Need an AI platform built for FINRA-regulated firms? See how LaunchLemonade handles compliance →
