Is AI safe to use in a law firm?

Yes, AI is safe to use in a law firm when deployed with proper governance, data protection, and human oversight. The SRA in the UK and state bar associations in the US permit AI use provided firms maintain responsibility for outputs, protect client data, and understand the technology’s limitations. The greater professional risk in 2026 is avoiding AI entirely, because competitors who adopt it responsibly gain efficiency advantages that are difficult to match.

Short answer: Yes, with proper governance. SRA and US state bars permit AI use with safeguards. The real risk is falling behind firms that adopt responsibly while you deliberate.

Why this question matters now

Safety concerns are the primary barrier to AI adoption in law firms. Partners worry about data breaches, regulatory sanctions, negligence claims, and reputational damage. These concerns are legitimate but often based on misunderstandings about how modern AI deployments work.

The regulatory landscape has clarified significantly since 2024. The SRA’s Technology and Innovation guidance, multiple US state bar ethics opinions, and evolving case law have established a workable framework. Firms no longer have to guess what regulators expect. The question has shifted from “is AI allowed?” to “how do we use it responsibly?”

Meanwhile, 62% of UK law firms are already using AI in some capacity. The firms still deliberating are not avoiding risk by waiting. They are accumulating a different kind of risk: competitive disadvantage, higher operating costs, and declining client satisfaction relative to AI-enabled competitors.

The safety question deserves a thorough answer. Not a dismissive “it is fine” or a fearful “it is too risky.” The honest answer involves understanding specific risks and specific mitigations.

UK regulatory framework: SRA guidance

The Solicitors Regulation Authority has not issued a blanket prohibition on AI. Its approach is principles-based rather than prescriptive. The key obligations for firms using AI are:

Competence (paragraph 3.2 of the SRA Code of Conduct and the SRA Competence Statement). Firms must understand how their AI tools work at a level sufficient to supervise outputs. This does not mean partners need to understand neural network architecture. It means understanding what the tool does, what its limitations are, and when it might produce unreliable results.

Client data protection (paragraph 6.3 of the SRA Code of Conduct and UK GDPR). Client data sent to AI systems must be protected to the same standard as any other data processing. This requires data processing agreements with AI providers, understanding where data is stored and processed, and ensuring client data is not used to train AI models without explicit consent.

Responsibility for outcomes. The SRA has been unambiguous: the firm and the individual solicitor remain responsible for work product, regardless of whether AI assisted in producing it. A solicitor who files an AI-generated document with factual errors faces the same regulatory consequences as one who made the errors manually.

Transparency. While the SRA has not mandated client disclosure of AI use in all circumstances, best practice is to inform clients when AI plays a significant role in matter handling. Several firms now include AI use provisions in their engagement letters.

US regulatory framework: state bar ethics

The American Bar Association’s Model Rules provide the foundation, with Rule 1.1 (Competence) including a duty to understand relevant technology. State implementation varies:

California (Formal Opinion 2024-1) permits AI use with requirements for competence, supervision, and client communication about AI use in billing and matter handling.

New York (NYSBA Ethics Opinion 1058) addresses confidentiality specifically, requiring firms to assess AI tools for data protection before use and to avoid tools that may use client data for model training.

Florida (Ethics Opinion 24-1) requires disclosure when AI substantially contributes to work product and prohibits billing for AI-generated work at the same rate as attorney work without client consent.

Texas (Ethics Opinion 690) focuses on supervision requirements, holding that attorneys must review AI outputs with the same diligence as reviewing work from a junior associate.

The trend across states is clear: AI is permitted with appropriate safeguards, supervision, and transparency. No state bar has prohibited AI use outright.

Data protection and privilege

Legal professional privilege is the most serious technical concern. If privileged client communications are sent to an AI system without proper safeguards, privilege could in principle be waived where the data is accessible to third parties or used for model training.

The mitigation is straightforward but requires deliberate implementation:

Use enterprise AI agreements, not consumer tools. Enterprise agreements with OpenAI, Anthropic, Google, and Microsoft include contractual commitments that client data is not used for training, data is encrypted in transit and at rest, and access controls prevent unauthorised disclosure.

Understand data residency. UK firms should prefer AI providers offering UK or EU data processing. Where data crosses borders, ensure an appropriate transfer mechanism is in place (a UK adequacy decision where one exists, otherwise the UK International Data Transfer Agreement or Addendum). US firms should assess state-specific privacy requirements.
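
As a concrete illustration, here is a minimal Python sketch of pinning AI processing to a UK region, assuming a hypothetical Azure OpenAI resource deployed in UK South; the endpoint, deployment name, and key handling are placeholders rather than a recommendation of any particular provider:

```python
import os

from openai import AzureOpenAI  # pip install openai

# Hypothetical resource created in the UK South region. Azure OpenAI
# resources are region-bound, so a standard deployment behind this
# endpoint keeps processing in the UK.
UK_ENDPOINT = "https://myfirm-uksouth.openai.azure.com/"

client = AzureOpenAI(
    azure_endpoint=UK_ENDPOINT,
    api_key=os.environ["AZURE_OPENAI_API_KEY"],  # never hard-code keys
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="gpt-4o",  # the deployment name configured on the resource
    messages=[{"role": "user", "content": "Summarise the attached clause."}],
)
print(response.choices[0].message.content)
```

Because the resource is bound to the region where it was created, pinning the endpoint pins the processing location for standard deployments; check the deployment type, as some (such as global deployments) can route requests elsewhere, and other providers offer their own regional controls.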

Implement data classification. Not all firm data carries the same sensitivity. A data classification framework (e.g., public, internal, confidential, privileged) allows firms to apply appropriate AI tools to appropriate data categories. Routine correspondence might safely use cloud AI; privileged strategy documents might require on-premises processing.
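
To make the routing idea concrete, here is a short illustrative sketch, not a production policy engine; the tier names mirror the classification above, and the policy mappings are hypothetical examples each firm would set for itself:

```python
from enum import Enum


class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    PRIVILEGED = 4


# Illustrative policy: each tier maps to the most permissive AI
# processing it allows. A real firm sets, and periodically reviews,
# its own mapping.
ROUTING_POLICY = {
    Classification.PUBLIC: "cloud_ai",             # any approved cloud tool
    Classification.INTERNAL: "cloud_ai",           # enterprise cloud API only
    Classification.CONFIDENTIAL: "enterprise_ai",  # DPA plus UK/EU residency
    Classification.PRIVILEGED: "on_premises",      # never leaves firm infrastructure
}


def route(doc_class: Classification) -> str:
    """Return the processing tier permitted for a document."""
    return ROUTING_POLICY[doc_class]


assert route(Classification.PRIVILEGED) == "on_premises"
```

A real policy engine would sit inside the firm's document management system, but the principle is the same: the classification decides the tool, not the individual fee earner.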

Practical risk mitigation framework

Tier 1: Immediate (Week 1). Draft and circulate an AI acceptable use policy. Establish an approved tool list. Block access to consumer AI tools on firm networks. This costs nothing and addresses the most common risk vector: individual staff using unapproved tools with client data.

Tier 2: Foundation (Month 1). Conduct a Data Protection Impact Assessment for each AI tool. Negotiate enterprise agreements with approved providers. Implement human review requirements for all client-facing AI outputs. Begin staff training.

Tier 3: Maturity (Quarter 1). Establish ongoing monitoring of AI output quality. Implement audit trails for AI-assisted work. Update Professional Indemnity Insurance disclosure. Conduct periodic compliance reviews. Create an incident response plan for AI-related errors.

This framework is proportionate. A 10-person firm does not need the same infrastructure as a 500-person firm. The principles are the same; the implementation scales with firm size.
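
As an illustration of the Tier 2 human-review gate and the Tier 3 audit trail, here is a minimal Python sketch; the file name, data model, and event names are hypothetical:

```python
import json
from dataclasses import dataclass
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit.jsonl"  # hypothetical append-only audit file


@dataclass
class Draft:
    matter_id: str
    content: str
    model: str
    approved: bool = False
    reviewer: str = ""


def log_event(event: str, draft: Draft) -> None:
    """Append an audit record for each AI-assisted step (Tier 3)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "matter_id": draft.matter_id,
        "model": draft.model,
        "reviewer": draft.reviewer or None,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")


def approve(draft: Draft, reviewer: str) -> None:
    """Record human sign-off (Tier 2)."""
    draft.approved = True
    draft.reviewer = reviewer
    log_event("approved", draft)


def release_to_client(draft: Draft) -> str:
    """Refuse to release any AI output a human has not approved."""
    if not draft.approved:
        raise PermissionError("AI output requires human review before release")
    log_event("released", draft)
    return draft.content
```

The point of the gate is architectural: the release code path simply cannot emit unreviewed AI output, which is far easier to audit than a policy that relies on staff remembering to check.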

What we have seen at Formulaic

Every system we build includes compliance-by-design. When we deployed the intake system for Calder & Reid, data residency was a day-one requirement, not an afterthought. Client data is processed within the UK, enterprise API agreements are in place, and every AI output passes through a human review step before reaching a client.

Across 30 production systems shipped for 6 clients, we have had zero data incidents and zero regulatory issues. This is not because AI is inherently safe. It is because responsible deployment treats safety as an architectural requirement rather than a bolt-on. The firms that get into trouble are the ones where individual staff adopt consumer AI tools without governance. A structured approach eliminates this risk category entirely.

The most important safety measure is also the simplest: never deploy AI without a human review step for client-facing outputs. This single principle prevents the vast majority of quality and compliance risks while preserving the efficiency gains that make AI worthwhile.

FAQ — RELATED QUESTIONS
Does the SRA allow law firms to use AI?

Yes. The SRA has not prohibited AI use. Its 2025 Technology and Innovation guidance states that firms can use AI but must understand how it works, ensure client data protection, maintain competence in overseeing AI outputs, and take responsibility for any AI-assisted decisions.

Can AI breach legal professional privilege?

It can if client data is sent to AI services without proper data processing agreements. Using consumer AI tools like free ChatGPT with client data risks privilege waiver. Enterprise AI deployments with appropriate contracts and data residency controls mitigate this risk.

What do US state bar associations say about AI in law firms?

Most state bars permit AI use with appropriate safeguards. California, New York, Florida, and Texas have issued ethics opinions. Common requirements include competence in understanding AI limitations, supervision of AI outputs, client disclosure where appropriate, and data protection.

Is it safe to use ChatGPT in a law firm?

Consumer ChatGPT, no. ChatGPT Enterprise, or the OpenAI API under an enterprise agreement with a data processing agreement, potentially yes. The distinction is data handling. Consumer tools may use your input for training. Enterprise agreements prohibit this and provide audit trails.

What happens if AI makes a mistake in a legal document?

The firm is responsible. AI does not shift liability. This is consistent across UK and US jurisdictions. The solicitor or attorney who sends a document to a client or court is responsible for its accuracy, regardless of whether AI assisted in drafting it.

How should firms handle AI and UK GDPR?

Treat AI processing like any other data processing activity. Conduct a Data Protection Impact Assessment, ensure lawful basis for processing, maintain records of processing activities, and ensure data subjects' rights can be exercised. Use AI providers with UK or EU data residency where possible.

Do insurers cover AI-related professional negligence claims?

Most Professional Indemnity Insurance policies do not specifically exclude AI-related claims. However, insurers are increasingly asking about AI use in renewal questionnaires. Firms should disclose AI use proactively and ensure their risk management framework covers AI-specific scenarios.

What AI safety measures should a law firm implement?

At minimum: an AI acceptable use policy, approved tool list, data classification framework, human review requirements for all client-facing outputs, regular staff training, incident reporting procedures, and periodic review of AI system outputs for accuracy and bias.

Andy Lackie

Founder, Formulaic. 12+ years building growth systems for professional services firms. Shipped 30 production AI systems across 6 clients.


Want personalised recommendations?

Take the AI Opportunity Scorecard for a benchmarked readiness score and three prioritised use cases specific to your firm. 3 minutes. Free.