Neither of us could have come to this position alone

The search engine problem

Most practitioners use AI the way they use Google — type a question, get an answer, move on. For simple tasks, that works. Look up a filing deadline. Draft a standard engagement letter. Summarise a regulation. The AI gives you something serviceable, you clean it up, you're done.

But for the work that actually matters — the complex tax positions, the regulatory interpretations, the advisory arguments that shape a client's financial future — this approach is dangerous. Not because AI gives wrong answers, but because it gives plausible answers that you're not equipped to test if you're treating it as an oracle.

A plausible answer to a complex tax question isn't the same as a defensible position. Defensible means you've tested the reasoning against counter-arguments. You've checked whether your reading of the statute is the only reasonable reading. You've mapped the interactions between provisions and confirmed they work the way you think they do. That testing process is where the real analytical work happens — and it doesn't happen when you ask a question and accept the response.

What adversarial dialogue actually looks like

The cross-border research worked differently. I brought two decades of practice experience and the client-specific context — the investment structure, the residency timeline, the filing history. The AI brought the ability to hold every statutory provision in play simultaneously, test logical consistency across a multi-layered argument, and surface interactions I hadn't mapped.

When I proposed a position, the AI didn't just agree and elaborate. It analysed the argument and came back with the strongest counter-arguments. When the AI proposed something, I didn't accept it just because it sounded authoritative. I pushed on the assumptions. I asked what changes if that reading is wrong. I pointed to practical realities that contradicted the theoretical position.

That's how we caught the error. I was building on one legal basis. The AI flagged that the statutory text didn't support that characterisation — the actual qualification mechanism was structural, not functional. I pushed back. The AI held its ground and showed me why. I re-read the provision and realised it was right. The entire argument needed to be rebuilt on a different foundation.

Neither of us would have caught it alone. I wouldn't have re-examined a characterisation that felt intuitively correct. The AI wouldn't have known that the characterisation mattered in the first place without my domain context driving the inquiry. The error surfaced because the adversarial process forced both sides to test every assumption — not once, but repeatedly.

This isn't for every interaction

Not every conversation with AI needs to be a Socratic debate. If you're categorising bank transactions or drafting a standard letter, just let it work. The adversarial approach is for the problems where being wrong has consequences — where the position needs to survive scrutiny from a revenue authority, a client, or a court.

For CAS practitioners, that's more of your work than you might think. Tax research. Entity structure recommendations. Cross-border compliance. Advisory positions on cash flow, pricing, or business strategy. Regulatory interpretations. Every one of these involves judgment calls where multiple readings are possible and the "right" answer depends on how well you've tested the alternatives.

The practitioners who use AI as a search engine for these questions will get plausible output that occasionally contains material errors they'll never catch. The practitioners who use AI as an adversarial thinking partner will produce positions that are stronger than anything either party could build independently.

Your blind spots aren't going away

You've been in practice long enough to know that expertise creates its own blind spots. The things you're most confident about are the things you're least likely to re-examine. That's exactly where the errors live — in the assumptions that feel so obviously correct that nobody tests them.

AI doesn't have your blind spots, and you don't have the AI's. When you force the collision — when you argue, push back, demand better reasoning from both sides — you get positions that neither of you could have built alone. That's not a productivity improvement. That's a fundamentally different quality of work.

The cross-border research didn't just produce a better submission to CRA. It changed how I approach complex analysis entirely. And what happened next — turning that process into a reusable methodology that anyone in the firm can follow — is the second half of the story.

This is the kind of work transformation that AI Practice Transformation is built for. Not "here's a tool" — but "here's how to redesign your complex advisory work around what AI makes possible." Over three weeks, one day per week, we map your firm's highest-judgment decisions, rebuild them as adversarial processes, and give your team a repeatable system they can use on everything from entity structure to cross-border compliance. The result is better positions, faster analysis, and work that's genuinely defensible. If your firm is ready to transform how you do the complex work, visit theaiaccountant.ai/transformation.

More on that in the next piece. In the meantime, if you want the research dialogue skill we built from this process, DM me.