Every financial advisory firm that uses AI, or whose employees use AI with or without the firm’s knowledge, needs a written AI use policy. FINRA Rule 3110 requires a supervisory system covering all business activities, and SEC Rule 206(4)-7 requires written compliance policies and procedures for investment advisers. Here’s what your policy needs to cover, with a section-by-section framework you can adapt for your firm.
Why Does Your Financial Advisory Firm Need an AI Use Policy?
An AI use policy is a written document that defines how your firm and its employees can and cannot use artificial intelligence tools in their work. It covers which tools are approved, what data can be entered, what review processes apply, and what happens when someone doesn’t follow the rules.
You need one for two reasons. First, your employees are almost certainly already using AI tools on client work — with or without your knowledge. A policy gives you supervisory control over that activity. Second, regulators expect it.
FINRA’s 2026 Annual Regulatory Oversight Report states that firms should establish “clear policies and procedures to develop, implement, use and monitor GenAI, while maintaining comprehensive documentation throughout.” For broker-dealers, this falls under FINRA Rule 3110’s requirement for a reasonably designed supervisory system. For registered investment advisers, SEC Rule 206(4)-7 requires written compliance policies and procedures.
No policy means no supervisory framework. No supervisory framework means regulatory exposure.
What Should an AI Use Policy for a Financial Firm Cover?
Your policy needs seven sections. Here’s what each should include and why.
Section 1: Scope and Purpose
State clearly who the policy applies to (all employees, contractors, anyone accessing firm systems) and what it covers (all AI and machine learning tools, including consumer tools like ChatGPT, Gemini, Claude, and Copilot).
Key language to include: “This policy applies to all artificial intelligence tools used for firm business, regardless of whether the tool was provisioned by the firm or accessed independently by an employee.”
This sentence closes the shadow AI loophole. Without it, employees can argue that personal use of ChatGPT on a personal device isn’t covered.
Section 2: Approved AI Tools
List every AI tool approved for use at the firm. For each tool, specify:
- What it’s approved for (e.g., drafting internal summaries, not client communications)
- What data can be entered (e.g., anonymised data only, or full client data if the tool is governed)
- Who approved it and when it was last reviewed
Important: If you use a governed AI platform like LaunchLemonade that provides audit trails, data isolation, and configurable guardrails, document why it was approved and what controls it provides. This demonstrates due diligence.
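To keep this section maintainable, some firms hold the approved-tools list as a structured record rather than free text, so each approver and review date is machine-checkable. A minimal sketch in Python; the field names and the sample entry are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ApprovedTool:
    """One entry in the firm's approved AI tool registry (Section 2)."""
    name: str
    approved_uses: list[str]   # e.g. "drafting internal summaries"
    permitted_data: str        # e.g. "anonymised data only"
    approved_by: str           # who signed off on the tool
    last_reviewed: date        # when the approval was last revisited

# Illustrative entry only; your firm's registry will differ.
REGISTRY = [
    ApprovedTool(
        name="Governed platform (e.g. LaunchLemonade)",
        approved_uses=["meeting prep", "drafting internal summaries"],
        permitted_data="full client data (audit-logged, data-isolated)",
        approved_by="CCO",
        last_reviewed=date(2026, 1, 15),
    ),
]
```

A record like this turns Section 2 reviews into a scan of `last_reviewed` dates rather than a reread of prose.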
Section 3: Prohibited Uses
Be explicit about what employees cannot do with AI tools. Common prohibitions for financial firms include:
- Entering client personally identifiable information (PII) into unapproved AI tools
- Using AI to generate investment recommendations without human review
- Sending AI-generated client communications without supervisor approval
- Using AI to produce regulatory filings or compliance documents without review
- Sharing proprietary firm strategies or models with AI tools
Don’t write this section in vague language. “Use good judgment” isn’t a policy. “Do not enter client names, account numbers, or portfolio details into any AI tool not listed in Section 2” is a policy.
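A rule that concrete can also be backed by a technical tripwire in front of unapproved tools. Below is a minimal, illustrative Python sketch of a pre-submission screen; the two regex patterns (US-format SSNs and long account numbers) and the function name are assumptions for illustration, and a real control would use the firm’s DLP tooling and client-name lists rather than two regexes:

```python
import re

# Illustrative patterns only: US-format SSNs and 8+ digit account numbers.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_number": re.compile(r"\b\d{8,}\b"),
}

def screen_for_pii(text: str) -> list[str]:
    """Return the names of any PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

hits = screen_for_pii("Please summarise account 12345678 for the client.")
if hits:
    print(f"Blocked: possible PII detected ({', '.join(hits)}). See Section 3.")
```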
Section 4: Data Handling Requirements
Define how data flows through approved AI tools:
- Input rules: What categories of data can be entered (public information, anonymised data, full client data with approved tools only)
- Output rules: AI-generated content must be reviewed for accuracy before use. AI-generated client communications must be approved by a designated supervisor.
- Retention rules: How long are AI interaction logs kept? Where are they stored? Who has access?
This section maps directly to your firm’s data governance obligations and helps satisfy FINRA’s documentation requirements.
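FINRA’s expectation of “comprehensive documentation” is easier to meet if every AI interaction writes a consistent log record. A sketch of what one retained entry might capture, assuming illustrative field names rather than any prescribed regulatory schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIInteractionLog:
    """One retained record of an AI tool interaction (Section 4 retention rules)."""
    timestamp: str        # when the interaction occurred (UTC, ISO 8601)
    user: str             # which employee used the tool
    tool: str             # which approved tool from Section 2
    data_category: str    # "public" | "anonymised" | "full client data"
    output_reviewed_by: str | None  # supervisor who approved the output, if required

entry = AIInteractionLog(
    timestamp=datetime.now(timezone.utc).isoformat(),
    user="advisor_jsmith",
    tool="governed_platform",
    data_category="anonymised",
    output_reviewed_by=None,  # internal summary: self-review only (Section 5)
)
print(json.dumps(asdict(entry), indent=2))
```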
Section 5: Review and Approval Workflows
Specify the review chain for AI-generated content:
| Content Type | Required Review | Reviewer | Timeline |
|---|---|---|---|
| Internal notes and summaries | Self-review by the creator | Advisor who generated it | Before use |
| Client email drafts | Supervisor review | Designated principal | Before sending |
| Financial analysis or reports | Compliance review | CCO or designee | Before distribution |
| Marketing or social content | Compliance review | CCO or designee | Before publication |
| Regulatory submissions | Compliance + legal review | CCO + legal counsel | Before filing |
The key principle: the higher the client impact, the more eyes before it ships.
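The table above reduces to a lookup: given a content type, return who must review it before release. A minimal sketch mirroring the table; the dictionary keys and function name are illustrative, and unknown content types deliberately default to the strictest chain:

```python
# Maps content type -> (required review, reviewer), mirroring the Section 5 table.
REVIEW_CHAIN = {
    "internal_summary":  ("self-review", "advisor who generated it"),
    "client_email":      ("supervisor review", "designated principal"),
    "financial_report":  ("compliance review", "CCO or designee"),
    "marketing_content": ("compliance review", "CCO or designee"),
    "regulatory_filing": ("compliance + legal review", "CCO + legal counsel"),
}

def required_review(content_type: str) -> tuple[str, str]:
    """Return the review step and reviewer for a content type.

    Unknown content types fall back to the strictest chain, on the
    principle that more client impact means more eyes before shipping.
    """
    return REVIEW_CHAIN.get(content_type, REVIEW_CHAIN["regulatory_filing"])

print(required_review("client_email"))  # ('supervisor review', 'designated principal')
```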
Section 6: Training Requirements
State that all employees must complete AI use policy training:
- Within 30 days of the policy being adopted
- Within 14 days of joining the firm
- Annually thereafter, or when the policy is materially updated
Keep training practical. Walk through real scenarios: “Here’s how to use the approved AI tool for meeting prep. Here’s what not to enter. Here’s how to submit content for review.”
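The three deadlines above are simple enough to track programmatically. A small illustrative sketch, assuming a hypothetical adoption date and ignoring the material-update trigger, which would reset the annual clock:

```python
from datetime import date, timedelta

POLICY_ADOPTED = date(2026, 3, 1)  # hypothetical adoption date

def training_due(hire_date: date, last_trained: date | None) -> date:
    """Date by which an employee's next AI policy training is due (Section 6)."""
    if last_trained is None:
        # First training: 30 days from adoption or 14 days from joining,
        # whichever deadline applies later to this employee.
        return max(POLICY_ADOPTED + timedelta(days=30),
                   hire_date + timedelta(days=14))
    # Thereafter: annual refresher.
    return last_trained + timedelta(days=365)

# A hire who joined after adoption gets the 14-day deadline.
print(training_due(date(2026, 6, 1), None))  # 2026-06-15
```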
Section 7: Violations and Enforcement
Define consequences for policy violations. This doesn’t need to be punitive from day one — many firms start with a grace period for self-reporting — but it needs to exist. Common approaches:
- First violation: Documented discussion with supervisor
- Second violation: Written warning with compliance review
- Serious violation (e.g., entering client SSNs into an unapproved tool): Immediate escalation to compliance and potential disciplinary action
Without enforcement language, the policy is a suggestion.
How Do You Roll Out an AI Use Policy?
Five steps, in order:
Step 1: Audit current AI use. Survey your team on what tools they’re using now. This informs what your policy needs to address and helps you choose approved tools.
Step 2: Choose your approved tools. Select AI tools that meet your governance requirements. Run them through your vendor due diligence process. A governed platform like LaunchLemonade, built for businesses in regulated industries with audit logging and configurable guardrails, simplifies this step significantly.
Step 3: Draft the policy. Use the seven-section framework above. Have your compliance officer and legal counsel review it.
Step 4: Train your team. Don’t just send a PDF. Walk through the policy in a team meeting. Show the approved tools in action. Answer questions.
Step 5: Review quarterly. AI tools and regulations evolve. Schedule quarterly reviews of your policy to ensure it stays current. Document each review.
AI Use Policy: Common Mistakes to Avoid
| Mistake | Why It Fails | What to Do Instead |
|---|---|---|
| “Use AI responsibly” with no specifics | Not enforceable, doesn’t satisfy Rule 3110 | Define specific approved tools, prohibited uses, and review workflows |
| Banning all AI use | Employees will use it anyway, invisibly | Approve governed tools and monitor usage |
| No review workflow defined | AI-generated client content goes out unchecked | Specify who reviews what, and when |
| Policy exists but no training | Nobody reads the policy document | Mandatory training with practical demonstrations |
| Never updating the policy | Policy becomes irrelevant as tools change | Quarterly review cycle, documented |
Frequently Asked Questions
Does every financial advisory firm need an AI use policy?
If any employee at your firm uses AI tools — even consumer tools like ChatGPT on a personal device for work purposes — you need a policy. FINRA Rule 3110 requires supervisory systems covering all business activities. SEC Rule 206(4)-7 requires investment advisers to have written compliance policies. AI use falls under both.
Can we use a template AI use policy or do we need a custom one?
Start with a framework (like the seven sections in this article) and customise it for your firm’s specific tools, workflows, and client base. Generic templates miss firm-specific details that regulators look for, but they’re a reasonable starting point that your compliance officer can adapt.
How long should it take to create an AI use policy?
A small RIA firm can draft a usable policy in one to two weeks if the compliance officer or principal leads the effort. The audit of current AI use (Step 1) typically takes the most time. Don’t let perfect be the enemy of done — a basic policy implemented this month is better than a comprehensive policy that’s still in draft next quarter.
What if our firm doesn’t use AI at all?
If no employee uses any AI tool for any firm business, you may not need a policy today. However, the reality is that many professionals are experimenting with AI tools whether or not the firm has sanctioned it. According to Gartner, 59% of finance leaders were actively using AI by late 2025. An audit (Step 1) often reveals more AI use than leadership expects.
Should our AI use policy cover personal AI use by employees?
It should cover personal AI use when it involves firm business. If an advisor uses ChatGPT on their personal phone to draft a client email, that’s firm business even though it’s a personal device. Your policy should clearly state that the rules apply regardless of what device or account is used.
Need a governed AI platform your policy can actually point to? See how LaunchLemonade works for advisory firms.