Your staff is already putting client data into AI. Here's what that actually means.

The shadow AI risk you're already running

Last week I published an article on building AI in the gaps of your practice — the workflows between your core platforms where human effort still dominates. Jason, a fellow practitioner, left a comment that stopped me cold: "What about security? What's actually safe to put into these tools?"

It's the right question. And the fact that most practice owners can't answer it precisely — right now, today — is the problem.

Karbon's 2026 State of AI in Accounting report surveyed nearly 600 firms across six continents. 98% now use AI. Only 21% have a formal AI policy or strategy. Run that math: roughly 4 out of 5 firms have staff using AI tools with client data and no guardrails governing how they use them, which tools are approved, or what data goes in.

That's not an AI problem. That's a policy problem. And it has a straightforward fix.

Your team is already making security decisions for you

Here's what's actually happening in your firm right now. Someone on your team — probably several people — is pasting client financial data into a free AI tool. Bank transaction lists. P&L summaries. Client communications with names and account details. They're doing it because it works, it's fast, and nobody told them not to.

The Journal of Accountancy reported that 45.4% of sensitive AI interactions originate from personal accounts used on corporate devices. That's not a rounding error. That's nearly half your firm's AI usage potentially flowing through tools you haven't vetted, don't control, and can't audit.

The risk isn't theoretical. Free-tier AI tools — ChatGPT's free plan, for example — store conversations and use them for model training by default. In 2025, a federal judge ordered OpenAI to produce 20 million ChatGPT conversation logs as part of the New York Times copyright litigation. Your client's financial data, entered into a free AI tool by a well-meaning staff member, isn't just a training data risk. It's potentially discoverable in litigation.

Not all AI tools carry the same risk

This is where most practice owners get it wrong. They treat "AI" as a single category — either all safe or all dangerous. It's neither. There's a clear risk spectrum, and understanding it is the difference between informed policy and blanket panic.

Consumer-tier tools — free ChatGPT, free Gemini, any AI tool where you're not the customer — carry the highest risk. Your data may train future models. It's stored indefinitely. It's subject to legal discovery. There's no SOC 2 compliance, no data processing agreement, and no contractual guarantee about what happens to your input.

Professional-tier tools — ChatGPT Team, Claude Pro, paid Copilot subscriptions — are materially better. Data isn't used for model training by default. Conversations aren't shared across organizations. For most CAS practices, these tools are sufficient for real client work — categorization, reconciliation support, communications, advisory prep. They're accessible, reasonably priced, and the no-training guarantees address the core risk.

Enterprise-tier tools and API access — ChatGPT Enterprise, Claude for Enterprise, direct API usage — offer the strongest protections. SOC 2 compliance. Contractual data guarantees. Full data isolation. No training on your inputs. If your firm can access them, these are the gold standard. But enterprise tiers often require 150+ seats at $50/user/month or more — pricing that puts them out of reach for most small and mid-market CAS practices. Don't let the perfect be the enemy of the good. A professional-tier tool with a clear policy is vastly better than no policy and staff using free tools unsupervised.
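If your firm does reach the API tier, the mechanics are simple. Here's a minimal sketch of routing a categorization prompt through direct API access, assuming the official openai Python SDK and an OPENAI_API_KEY environment variable; the model name and prompt are illustrative, not a recommendation. Under OpenAI's API data-usage terms, inputs sent this way aren't used for model training by default.

```python
# Minimal sketch: a categorization prompt via direct API access.
# Assumes the official `openai` Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable. Model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # pick whatever model your firm has approved
    messages=[
        {"role": "system", "content": "You categorize bank transactions for a CAS practice."},
        {"role": "user", "content": "Categorize: AMZN MKTP US, $142.77, 03/14"},
    ],
)
print(response.choices[0].message.content)
```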

The CPA Journal's November 2025 analysis on AI data security made the point clearly: the completeness and accuracy of what goes into an AI system matter as much as what comes out. Your firm's policy needs to address both sides.

What's safe to use AI for right now

It is safe to put client data into AI tools — as long as those tools don't train their models on your prompts. That's the line. Professional and enterprise-tier platforms with contractual no-training guarantees give your team a secure environment for real work: categorizing transactions, drafting client communications, building SOPs, reconciliation analysis, document processing, advisory prep. The use cases are broad, and they're productive. The question isn't whether your team should use AI with client data. It's whether they're using the right tools when they do.

The real risk isn't using AI. It's using the wrong AI. When your firm doesn't provide approved tools and clear guidelines, staff don't stop using AI — they go around you. They paste client data into free-tier tools on personal accounts because it's fast and nobody told them what to use instead. Every day without a policy is another day your team is making security decisions for you — with whatever tool is closest at hand.

The fix starts with policy. Give your team clear, written guidelines on what's approved, what data can go where, and what's off-limits. Then give them the tools to honor those guidelines. Make the sanctioned path easier than the workaround. If the approved option is slower, harder to access, or nonexistent, your staff will find their own — and you'll have zero visibility into where client data is going. Don't force them into the Wild West by failing to provide a safe alternative.

What should be off-limits — permanently

Some things should never go into a free or consumer-tier AI tool, regardless of how convenient it is. Tax returns. Bank statements. Social Security numbers. EINs. Client names paired with financial details. Anything that would violate AICPA confidentiality standards if it appeared in a training dataset or a legal proceeding.
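If you want a technical backstop for that list, a simple pre-flight screen can flag the obvious patterns before anything gets pasted anywhere. This is a minimal illustration in Python, with hypothetical names and only two patterns; regex matching catches formatted identifiers, not every leak, so treat it as a seatbelt rather than the policy itself.

```python
import re

# Hypothetical pre-flight screen: flag obvious identifiers before text
# leaves the firm. Pattern matching is a backstop, not a guarantee; it
# misses unformatted numbers and can produce false positives.
BLOCKED_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g. 123-45-6789
    "EIN": re.compile(r"\b\d{2}-\d{7}\b"),        # e.g. 12-3456789
}

def flags(text: str) -> list[str]:
    """Return the names of any blocked identifiers found in text."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(text)]

print(flags("Client EIN 12-3456789, see attached P&L"))  # ['EIN']
```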

This isn't about being cautious with AI. It's about being precise about which AI. The same data that's off-limits in a free ChatGPT window is perfectly appropriate in a professional-tier tool with no-training guarantees. The policy doesn't say "don't use AI for this." It says "use this tool, not that one."

That distinction is the entire point. A blanket ban on AI with client data is unenforceable — your team is already using it. A specific policy that names the approved tools and draws a clear line around the consumer tier? That's enforceable. And it actually protects your clients.

The policy is the product

The fix isn't "stop using AI." That ship sailed when 98% of firms started using it. The fix is a simple, enforceable AI use policy for your accounting firm. At minimum, your policy should cover three things:

Approved tools and tiers. Name the specific tools and plan levels your team is authorized to use. "ChatGPT Team" and "Claude Pro" are approved. Free ChatGPT and personal accounts are not. Remove the ambiguity.

Data classification. Define what data can go into AI tools and what can't. Client financials in a professional-tier tool with no-training guarantees? Appropriate. Social Security numbers, EINs, or tax returns in any AI tool without a data processing agreement? Hard stop.

Personal account prohibition. No client data — none — goes into AI tools accessed through personal accounts on corporate devices. This is the single largest source of shadow AI risk, and it needs to be a bright line.

That's it. One page. Three clear sections. Distribute it Monday. Enforce it Tuesday.
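If it helps to make the policy machine-readable, say for an onboarding checklist or an internal tooling gate, the same three sections fit in a few lines of structured data. A sketch, assuming Python; the tool names mirror the examples above and everything else is a placeholder for your firm's own choices.

```python
# Sketch: the three policy sections as structured data, e.g. for an
# onboarding checklist generator. Names are placeholders; adapt them
# to the tools and data classes your firm actually approves.
AI_USE_POLICY = {
    "approved_tools": ["ChatGPT Team", "Claude Pro"],  # by name and plan level
    "prohibited_tools": ["free-tier tools", "personal accounts on any tier"],
    "data_rules": {
        "client_financials": "approved professional-tier tools only",
        "ssn_ein_tax_returns": "never, absent a data processing agreement",
    },
    "personal_accounts": "no client data, ever, on corporate devices",
}

print(AI_USE_POLICY["approved_tools"])  # ['ChatGPT Team', 'Claude Pro']
```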

Your staff already knows they should be careful with AI. They don't know what "careful" means in your firm. That's your job to define — and the definition needs to be specific enough to act on. "Use good judgment" isn't a policy. "Client financial data goes into Claude Pro or ChatGPT Team only, never free-tier tools, and never on personal accounts" — that's a policy.

The firms that define this clearly will build a competitive advantage. Clients are starting to ask how their data is handled. Prospective hires are evaluating whether your firm takes AI seriously. A written, enforced AI use policy signals that your practice operates with intention — not just awareness.

Jason asked the right question. The answer isn't complicated. But it does require you to make a decision — this week, not next quarter.

I built a ready-to-use version of this policy — a one-page template written specifically for CAS practices, plus a prompt that customizes it for your firm's tools and data types in five minutes. Download the CAS AI Use Policy Kit free here. Subscribers can also access the full kit, including a direct PDF download, in the subscriber bonus section.
