Compliance

The December 2024 AI guidance: What it means for solo practitioners

Published 15 December 2025 · 6 min read

In December 2024, the legal services regulators of NSW, Victoria, and Western Australia issued a joint statement providing guidance on artificial intelligence in legal practice. For solo practitioners exploring AI tools, this guidance creates both clarity and obligations.

Here's what you need to know.

What the guidance says

The joint statement establishes four key principles for lawyers using AI:

1. Maintain client confidentiality

Lawyers must not enter privileged or confidential information into public AI tools. This means general-purpose tools like ChatGPT or Claude (in their public versions) shouldn't receive client details, case facts, or privileged communications.

The guidance recognises the difference between public AI services (where data may be used for training) and private or enterprise AI tools (where data processing is controlled).

2. Exercise independent judgment

AI cannot substitute for your professional judgment and forensic analysis. You can use AI to assist with research, drafting, and analysis, but the legal judgment must be yours.

This isn't about avoiding AI—it's about maintaining the human professional at the centre of legal advice.

3. Personally verify AI outputs

You must personally verify everything an AI produces before it reaches a client or a court. AI can hallucinate, make errors, or produce inappropriate content. Your verification is essential.

4. Avoid unnecessary cost increases

AI use shouldn't become a way to increase client bills. If AI makes you more efficient, those savings should flow through to clients.

What this means practically

For solo practitioners, this guidance has practical implications:

Tool selection matters. When choosing AI tools, consider whether client data will be processed privately or sent to public AI services. Private AI processing—where data isn't used for training and doesn't leave controlled systems—is preferable for legal work.

Review workflows are mandatory. Any AI tool you use for client matters needs a human review step. Drafts must be reviewed before sending. Research must be verified before relying on it. Automation must include verification.

Documentation is prudent. Consider keeping records of how you're using AI tools, what verification steps you've taken, and how AI assistance relates to your billing. If questions arise, documentation protects you.

Client communication may be needed. Consider whether clients should be informed about AI use in their matters. The guidance doesn't mandate disclosure, but transparency builds trust.

Jurisdox's approach

We built Jurisdox with this guidance in mind:

Private AI processing. Client data processed through Jurisdox is not sent to public AI services and is not used for AI training. Your client information stays confidential.

Mandatory review. Every AI-generated output—intake summaries, document drafts, all of it—requires your review before any action is taken. We don't produce final documents; we produce drafts for your professional evaluation.

Complete audit trails. Every generation, every edit, every approval is logged. If you ever need to demonstrate your verification process, the records exist.

The opportunity

The guidance isn't restrictive—it's clarifying. By establishing clear expectations, the regulators have created a framework where AI adoption can proceed with confidence.

Solo practitioners who embrace AI tools thoughtfully—choosing appropriate tools, maintaining verification workflows, and documenting their processes—can capture significant efficiency gains while meeting professional obligations.

The 53% of Australian firms that haven't adopted any new technology in the past five years aren't being cautious; they're falling behind. Clear guidance makes it easier to move forward.

Interested in Jurisdox?

Join our early access waitlist and be among the first to experience AI-powered intake and document automation.