Does using ChatGPT violate professional conduct rules?
Using ChatGPT does not inherently violate professional conduct rules in the UK or US. What violates the rules is using it carelessly: pasting client-identifiable data into free-tier tools that train on your inputs, filing AI-generated work without verification, or failing to understand the technology well enough to supervise it. The tool itself is neutral. Your obligations around confidentiality, competence, and supervision determine whether your use is compliant.
Short answer: Not inherently. Careless use risks breaching confidentiality and competence rules. Enterprise tiers with proper contracts, verification workflows, and a written policy make ChatGPT use compliant.
The regulatory landscape in 2026
The regulatory picture has clarified significantly since 2023, when most professional bodies were still figuring out how to respond to generative AI. We now have explicit guidance from most major regulators, and the consensus is remarkably consistent: AI tools are permitted, but professional obligations still apply in full.
UK: Solicitors Regulation Authority
The SRA updated its technology guidance in 2024 and has issued further statements since. Its position: firms may use AI tools provided they comply with existing obligations around confidentiality (Code of Conduct paragraph 6.3), competence (Code of Conduct paragraph 3.2), and acting in the best interests of each client (Principle 7).
Specifically, the SRA expects firms to:
- Understand how the AI tool processes data before using it
- Have appropriate data processing agreements with AI providers
- Not delegate professional judgment to AI without adequate supervision
- Verify AI outputs before relying on them
- Maintain records that allow AI-assisted work to be identified
- Comply with UK GDPR when processing personal data through AI tools
The SRA has not banned any specific tool. It has made clear that responsibility for AI outputs rests with the firm and the individual solicitor, not the technology provider.
US: ABA Model Rules and state bars
The American Bar Association’s Formal Opinion 512 (2024) addressed AI use directly. It confirmed that lawyers may use AI tools but must comply with duties of competence (Rule 1.1), confidentiality (Rule 1.6), supervision (Rules 5.1 and 5.3), and communication (Rule 1.4).
State bars have issued varying guidance:
- California: requires lawyers to have sufficient knowledge of AI tools to competently use them and to protect client information from unauthorised disclosure.
- New York: issued guidelines recommending that lawyers inform clients about AI use, verify all AI outputs, and avoid uploading confidential information to tools that may use it for training.
- Florida: has been more conservative, emphasising that AI outputs are not legal work product until reviewed and adopted by a lawyer, and that billing for AI-generated work requires appropriate adjustment.
- Texas, Colorado, and several other states: have issued ethics opinions broadly consistent with the ABA's position: use is permitted with appropriate safeguards.
UK: ICAEW and ACCA (for accountants)
The Institute of Chartered Accountants in England and Wales and the Association of Chartered Certified Accountants both permit AI use subject to their codes of ethics. Key requirements: maintain confidentiality of client information, exercise professional competence and due care, and comply with data protection obligations. The practical implications are identical to those for solicitors.
Where firms actually get into trouble
The disciplinary cases and near-misses we have seen fall into a handful of patterns.
Fabricated citations
The most publicised case remains Mata v. Avianca (2023), where US lawyers filed a brief containing AI-generated case citations that did not exist. They were sanctioned not for using AI but for failing to verify its outputs. Similar incidents have occurred since, though firms are now more aware of the risk.
The lesson: AI language models generate plausible text, not verified facts. Every citation, every factual claim, and every legal proposition must be checked against primary sources. This is not an AI-specific obligation. It is basic professional competence applied to a new tool.
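Verification scales better when the checking step is systematic. As a minimal sketch, a script like the one below pulls citation-shaped strings out of an AI-generated draft so a human can check each one against a primary source. The regex patterns and the sample draft are illustrative assumptions, not a complete citation grammar.

```python
import re

# Illustrative patterns for a few common citation formats (US federal
# reporters, US Reports, England & Wales neutral citations). Real
# citation formats are far more varied; this is a first-pass flag,
# not a parser.
CITATION_PATTERNS = [
    r"\b\d+\s+F\.\s?(?:2d|3d|4th)\s+\d+\b",                   # e.g. 925 F.3d 1339
    r"\b\d+\s+U\.S\.\s+\d+\b",                                # e.g. 410 U.S. 113
    r"\[\d{4}\]\s+(?:UKSC|EWCA|EWHC)\s+(?:Civ|Crim)?\s*\d+",  # e.g. [2023] EWCA Civ 1416
]

def flag_citations(draft: str) -> list[str]:
    """Return citation-shaped strings so each can be checked
    against a primary source before filing."""
    found: list[str] = []
    for pattern in CITATION_PATTERNS:
        found.extend(re.findall(pattern, draft))
    return found

draft_text = "... as the court held in Smith v. Jones, 925 F.3d 1339 ..."
for citation in flag_citations(draft_text):
    print(f"VERIFY against primary source: {citation}")
```

A script like this does not verify anything itself; it only guarantees that no citation slips through unexamined.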
Confidentiality breaches
No disciplinary case has yet turned on confidential data being exposed through an AI tool, but several firms have had close calls. The typical scenario: a junior lawyer pastes client documents into free-tier ChatGPT, which at the time was covered by terms allowing OpenAI to use inputs for training. The data was not actually misused, but the firm had no contractual protection if it had been.
OpenAI's API and Enterprise tiers do not use customer inputs for training by default, but the free and Plus tiers still require an explicit opt-out to prevent training use. This remains the single biggest compliance risk for firms using ChatGPT casually.
Failure to disclose
Several US state bars now recommend or require disclosure of AI use to clients. Firms that use AI extensively without informing clients risk a transparency issue even if the work product is competent. Best practice globally is to include AI use in engagement terms.
Billing irregularities
If a task that previously took three hours now takes 20 minutes with AI assistance, how do you bill for it? Billing the original three hours for 20 minutes of AI-assisted work raises conduct issues around charging for work not performed. Firms need clear billing policies that reflect the efficiency gains from AI without under-valuing the professional judgment involved in reviewing and approving AI outputs.
A practical compliance framework
Based on our work with firms across 30 production AI deployments, here is what actually works.
Step 1: Choose the right tier
Use Enterprise or API tiers only. For ChatGPT, this means ChatGPT Enterprise or the OpenAI API with a signed data processing agreement. For Claude, use the API or Team tier. For any tool, read the data processing terms before signing up.
Never use free or consumer tiers for client work. Full stop.
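For firms routing work through the API, the integration itself is small. Here is a minimal sketch using OpenAI's official Python client (v1+), assuming an API key issued under your data processing agreement and read from the environment; the model name and prompts are illustrative.

```python
# Assumes the official `openai` Python package and an API key issued
# under your firm's data processing agreement, read from the
# OPENAI_API_KEY environment variable. The model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # use whichever model your agreement covers
    messages=[
        {
            "role": "system",
            "content": "You produce first drafts for review by a qualified solicitor.",
        },
        {
            "role": "user",
            "content": "Draft a confidentiality clause for a consultancy agreement.",
        },
    ],
)

# The output is a draft, not work product, until a lawyer reviews it.
print(response.choices[0].message.content)
```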
Step 2: Write a firm policy
Your AI usage policy should cover:
- Approved tools: Which AI tools are permitted and on which tiers
- Prohibited inputs: What data categories must not be entered (e.g., client names, case numbers, privileged communications without anonymisation); see the redaction sketch after this list
- Verification requirements: All AI outputs must be verified before use in client work
- Record-keeping: How AI-assisted work is logged and identifiable
- Client disclosure: How and when clients are informed about AI use
- Billing: How AI-assisted work is billed
- Training: What training staff must complete before using AI tools
- Incident reporting: What to do if data is inadvertently entered into an unapproved tool
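To make the prohibited-inputs rule enforceable rather than aspirational, some firms sit a redaction step between the drafter and the AI tool. A minimal sketch follows; the client names and the case-reference format are illustrative assumptions, since in practice they would come from your matter management system and your jurisdiction's actual reference formats.

```python
import re

# Both values are illustrative assumptions: in practice the names would
# come from the firm's matter management system, and the case-reference
# pattern would match your jurisdiction's actual formats.
KNOWN_CLIENT_NAMES = ["Acme Holdings Ltd", "Jane Doe"]
CASE_REF_PATTERN = re.compile(r"\b[A-Z]{2}-\d{4}-\d{4,6}\b")

def redact(text: str) -> str:
    """Swap client names and case references for neutral placeholders
    before the text leaves the firm's systems."""
    for i, name in enumerate(KNOWN_CLIENT_NAMES, start=1):
        text = text.replace(name, f"[CLIENT_{i}]")
    return CASE_REF_PATTERN.sub("[CASE_REF]", text)

prompt = "Summarise the dispute between Acme Holdings Ltd and its supplier in HQ-2025-10443."
print(redact(prompt))
# Summarise the dispute between [CLIENT_1] and its supplier in [CASE_REF].
```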
Step 3: Train your people
A policy that nobody reads is useless. Run practical training sessions showing staff what they can and cannot do. Use real examples. Show them what a ChatGPT confidentiality breach looks like. Show them what a fabricated citation looks like. Make it concrete.
Step 4: Monitor and audit
Periodically audit AI usage. Check which tools are being used, what data is being input, and whether verification steps are being followed. Enterprise AI tools provide usage logs that support this.
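What an audit looks like in practice depends on your vendor's export format. As a sketch, assuming a CSV export with user, tool, and prompt columns (the column names and tool identifiers are assumptions, not any vendor's actual schema):

```python
import csv
import re

# Assumed export schema: user, tool, prompt columns in a CSV. Adjust to
# whatever your vendor's admin console actually exports.
APPROVED_TOOLS = {"chatgpt-enterprise", "openai-api"}
CASE_REF_PATTERN = re.compile(r"\b[A-Z]{2}-\d{4}-\d{4,6}\b")  # same illustrative format as above

def audit(log_path: str) -> list[dict]:
    """Flag rows that used an unapproved tool or appear to contain an
    unredacted case reference, for human follow-up."""
    flagged = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["tool"] not in APPROVED_TOOLS:
                flagged.append({**row, "reason": "unapproved tool"})
            elif CASE_REF_PATTERN.search(row["prompt"]):
                flagged.append({**row, "reason": "possible unredacted case reference"})
    return flagged

for entry in audit("ai_usage_export.csv"):
    print(entry["user"], entry["reason"])
```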
Step 5: Review and update
AI tools change rapidly. OpenAI updates its terms. New tools appear. Regulators issue new guidance. Review your policy quarterly and update it when the landscape shifts.
Common misconceptions
“We cannot use AI until the regulator explicitly permits it.” No regulator has banned AI use. Waiting for explicit permission that already implicitly exists means falling behind competitors who are building capability now.
“Enterprise ChatGPT is completely safe.” Safer than free tier, certainly. But no tool is completely safe. Enterprise agreements reduce risk. They do not eliminate it. You still need verification workflows and sensible data handling.
“AI-generated work is not real legal work.” AI-generated first drafts reviewed and approved by a qualified lawyer are legal work product. The professional judgment is in the review, not the generation.
“We need to build our own AI to be compliant.” Custom-built AI can be more compliant than off-the-shelf tools, but it is not the only compliant option. Properly contracted SaaS tools meet regulatory requirements for most use cases.
What we see across our clients
Across 30 production AI systems, every one of our law firm clients has a written AI usage policy and uses enterprise-tier tools with data processing agreements. None has faced a regulatory complaint related to AI use.
The firms that handle this best treat AI governance the same way they treat any other compliance obligation: document it, train on it, audit it, and update it. The firms that struggle are the ones trying to use AI informally without governance, usually because they started with individual lawyers experimenting on free-tier tools and never formalised the approach.
The regulatory environment is not hostile to AI. It is hostile to carelessness. Firms that take compliance seriously have nothing to fear from using ChatGPT or any other AI tool professionally.
Can a solicitor use ChatGPT without breaching SRA rules?
Yes, provided they use an enterprise tier with a data processing agreement, do not input client-identifiable data without appropriate safeguards, verify all outputs, and maintain a record of AI-assisted work. The SRA has not banned any specific AI tool.
Has any lawyer been disciplined for using ChatGPT?
In the US, lawyers have been sanctioned for filing AI-generated briefs containing fabricated case citations without verification. The issue was failure to verify outputs, not the use of AI itself. No UK solicitor has been disciplined solely for using ChatGPT as of early 2026.
Does OpenAI's enterprise agreement protect legal privilege?
OpenAI's Enterprise and API agreements include contractual commitments not to access, use, or train on customer data. This provides a contractual basis for privilege protection, though it has not been tested in litigation. Belt-and-braces practice is to avoid inputting privileged material where possible.
Do I need to tell clients I am using ChatGPT?
The SRA does not explicitly require disclosure, but transparency is a core SRA principle. Best practice is to include AI use in your terms of engagement. In the US, some state bars recommend disclosure. New York's guidelines suggest informing clients about AI use in their matters.
Can accountants use ChatGPT under ICAEW or ACCA rules?
Yes, subject to similar confidentiality and competence obligations. ICAEW's Code of Ethics requires members to maintain confidentiality and professional competence. Using ChatGPT for research, drafting, or analysis is permitted provided client data is protected and outputs are verified.
What is the difference between free ChatGPT and ChatGPT Enterprise for compliance?
Free and Plus tiers allow OpenAI to use your inputs for model training unless you opt out. Enterprise and API tiers contractually prohibit training on your data, offer zero data retention options, and include SOC 2 compliance. For professional use, Enterprise or API is the minimum acceptable tier.
Should my firm have a ChatGPT usage policy?
Absolutely. Every professional services firm using AI should have a written policy covering approved tools, prohibited inputs, verification requirements, client disclosure, and record-keeping. The SRA expects firms to have appropriate governance around technology use.
What about using ChatGPT alternatives like Claude or Gemini?
The same principles apply regardless of the AI provider. Evaluate each tool's data handling policies, contractual protections, and compliance certifications. Anthropic's Claude and Google's Gemini both offer enterprise tiers with comparable data protection commitments to OpenAI.