How should professional services firms handle AI and data privacy?

Professional services firms should handle AI and data privacy the same way they handle any data processing: with enterprise-grade contracts, documented lawful bases, appropriate security measures, and clear records of what data is processed, why, and where. Use enterprise-tier AI tools with data processing agreements. Never process client data through free or consumer-tier tools. Conduct data protection impact assessments for AI systems handling sensitive client information. The regulatory framework is not new. The technology is new. Your existing data protection obligations apply in full.

Short answer: Enterprise-tier tools, data processing agreements, DPIAs for sensitive data, records of processing, and never use free-tier AI for client work. Same framework, new technology.

The regulatory landscape

UK GDPR and the Data Protection Act 2018

UK GDPR applies to any processing of personal data, regardless of whether that processing involves AI. If your AI system reads, analyses, categorises, or generates content based on personal data, you are processing personal data and all GDPR obligations apply.

Key obligations for AI systems:

Lawful basis. You need a lawful basis for processing personal data through AI. For most professional services firms, this is legitimate interests (processing necessary for your business purposes where the individual’s rights do not override) or contract performance (processing necessary to deliver the service the client has engaged you for). Consent is rarely the right basis because it can be withdrawn, creating operational complexity.

Purpose limitation. Data collected for one purpose cannot be repurposed for another without further lawful basis. Client data collected for legal representation cannot be used to train your own AI models without explicit consent or a separate lawful basis.

Data minimisation. Process only the personal data necessary for the task. If your AI system only needs a transaction amount and date, do not feed it the client’s full name and address. Design AI inputs to be as minimal as possible.
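
As a sketch of minimisation in practice, build the AI input from a per-task allow-list rather than serialising the whole client record (the field names and allow-list here are illustrative, not a standard):

```python
# Data minimisation sketch: send the AI only the fields the task needs.
# Field names and the allow-list are illustrative.
client_record = {
    "name": "John Smith",
    "address": "1 Example Street, London",
    "transaction_amount": 12500.00,
    "transaction_date": "2025-04-03",
}

ALLOWED_FIELDS = {"transaction_amount", "transaction_date"}  # per-task allow-list

ai_input = {k: v for k, v in client_record.items() if k in ALLOWED_FIELDS}
# The name and address never leave your environment.
```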

Accuracy. AI outputs involving personal data must be accurate. If an AI system generates a client report with inaccurate personal data, that is a GDPR compliance issue. Verification of AI outputs is not just good practice. It is a legal obligation.

Storage limitation. Do not retain personal data in AI systems longer than necessary. Implement data retention policies that cover AI processing logs, cached inputs, and generated outputs.

Security. Appropriate technical and organisational measures to protect personal data processed by AI. This means encryption, access controls, audit trails, and security testing.

US privacy laws

The US landscape is a patchwork of state laws and sector-specific regulations:

California Consumer Privacy Act (CCPA/CPRA) gives California residents rights over their personal information, including the right to know what information is collected and how it is used. AI systems processing California residents' data must comply.


Colorado Privacy Act, Connecticut Data Privacy Act, Virginia Consumer Data Protection Act and similar state laws impose comparable obligations with varying specifics.

Attorney-client privilege adds a layer beyond general privacy law. Privileged communications processed through AI systems must maintain their privileged status. This requires contractual protections with AI providers ensuring they cannot access or disclose the content.

Accountant-client privilege exists in some US states and under federal tax law (IRC Section 7525). The same protective measures apply.

HIPAA applies if your professional services work involves protected health information (common in personal injury, medical malpractice, and healthcare advisory work).

Sector-specific requirements

SRA (UK solicitors): Principle 7 requires acting in the best interests of each client, and the Code of Conduct (paragraph 6.3) requires keeping client affairs confidential. The SRA expects firms to conduct due diligence on AI providers and protect client data.

ICAEW/ACCA (UK accountants): Codes of ethics require confidentiality of client information. The obligation extends to third-party processors, including AI providers.

FCA (UK financial services): Firms regulated by the FCA have additional obligations around data security, outsourcing, and operational resilience that apply to AI systems.

ABA Model Rules (US lawyers): Rules 1.6 (confidentiality) and 1.1 (competence) require lawyers to understand the technology they use and protect client information from unauthorised disclosure.

Practical implementation

Step 1: Map your AI data flows

Before addressing privacy, you need to know what data flows through your AI systems. Document:

  • What personal data enters the AI system (inputs)
  • What the AI system does with that data (processing)
  • What outputs the AI system generates
  • Where data is stored during and after processing
  • Who has access to the data at each stage
  • How long data is retained

This mapping is not just good practice. It is required under UK GDPR (Article 30, records of processing activities) and supports DPIA requirements.
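
One way to keep this mapping consistent across systems is a structured record per AI system. A minimal sketch follows; the field names are ours, not prescribed by Article 30:

```python
from dataclasses import dataclass

@dataclass
class AIProcessingRecord:
    """Article 30-style record of processing for one AI system (illustrative)."""
    system_name: str
    inputs: list[str]     # personal data categories entering the system
    purpose: str          # what the system does with the data, and why
    outputs: list[str]    # what the system generates
    storage: list[str]    # where data sits during and after processing
    access: list[str]     # who can access the data at each stage
    retention_days: int   # how long data is retained

record = AIProcessingRecord(
    system_name="contract-summariser",
    inputs=["client names", "contract terms"],
    purpose="Summarise contracts for fee-earner review",
    outputs=["draft summaries"],
    storage=["UK-region API (zero retention)", "document management system"],
    access=["matter team only"],
    retention_days=30,
)
```

One record per system, reviewed whenever the system changes, doubles as the input to your DPIA.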

Step 2: Choose enterprise-grade AI tools

The minimum requirements for any AI tool processing client data:

Data processing agreement. A legally binding agreement specifying how the provider handles your data, what security measures they implement, and what happens in a breach. Standard DPAs from OpenAI Enterprise, Anthropic, Azure, and Google Cloud meet UK GDPR requirements.

No training on your data. Contractual commitment that your inputs are not used to train or improve the provider’s models. Enterprise tiers of major providers include this. Free and consumer tiers typically do not.

Data residency. The ability to specify where your data is processed and stored. For UK firms, UK data residency eliminates cross-border transfer concerns. For US firms, domestic processing avoids international transfer complications.

Zero data retention option. The ability to ensure the provider does not retain your inputs or outputs after processing. This minimises the risk window.

SOC 2 Type II certification. Independent verification of the provider’s security controls. This is standard for enterprise AI providers.

Encryption. Data encrypted in transit (TLS 1.2+) and at rest (AES-256 or equivalent). Non-negotiable.

Step 3: Conduct data protection impact assessments

A DPIA is required when processing is likely to result in high risk to individuals. For professional services, this includes:

  • AI systems processing legal case data (often includes special category data)
  • AI systems processing financial data at scale
  • AI systems making or supporting decisions that affect individuals
  • Any novel use of AI technology with personal data

The DPIA should document:

  • The nature, scope, context, and purpose of the processing
  • The necessity and proportionality of the processing
  • The risks to individuals and how they are mitigated
  • The measures taken to address those risks

This is not a one-time exercise. Review and update DPIAs when the AI system changes, when new data categories are processed, or when the risk profile changes.

Step 4: Implement technical safeguards

Pseudonymisation. Where possible, replace identifying information with tokens before sending data to AI systems. The AI processes “Client_1247” instead of “John Smith.” Re-identification happens locally after processing. This reduces the impact of any breach.
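
A minimal sketch of the token swap (a real deployment would use a vetted redaction or NER tool rather than a hand-built dictionary):

```python
def pseudonymise(text: str, mapping: dict[str, str]) -> str:
    """Replace identifiers with tokens before text leaves your environment."""
    for real, token in mapping.items():
        text = text.replace(real, token)
    return text

def reidentify(text: str, mapping: dict[str, str]) -> str:
    """Reverse the substitution locally, after AI processing."""
    for real, token in mapping.items():
        text = text.replace(token, real)
    return text

mapping = {"John Smith": "Client_1247"}
outbound = pseudonymise("Prepare a report for John Smith.", mapping)
# outbound: "Prepare a report for Client_1247." -- this is all the AI sees
restored = reidentify(outbound, mapping)  # re-identification happens locally
```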

Access controls. Limit who can access AI systems and what data they can process through them. Not every staff member needs access to every client’s data through AI. Implement role-based access consistent with your existing data access policies.
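
A role-based gate can mirror the data access policies you already have. A sketch, with hypothetical role and matter names:

```python
# Role-to-matter mapping, mirroring existing data access policies (hypothetical).
ROLE_MATTERS = {
    "matter_team_a": {"matter-101", "matter-102"},
    "matter_team_b": {"matter-201"},
}

def may_process_via_ai(role: str, matter_id: str) -> bool:
    """Gate every AI request: only roles assigned to a matter may send its data."""
    return matter_id in ROLE_MATTERS.get(role, set())
```

Checking this at the point where data is sent, rather than at login, keeps the control close to the risk.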

Audit trails. Log what data is sent to AI systems, by whom, when, and for what purpose. This supports compliance monitoring, breach investigation, and data subject access requests.
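
Each AI request can emit one structured, append-only log entry. A sketch of the entry shape (the fields are illustrative):

```python
import datetime
import json

def audit_entry(user: str, ai_system: str, purpose: str,
                data_categories: list[str]) -> str:
    """One JSON line per AI request: who sent what, where, when, and why."""
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "ai_system": ai_system,
        "purpose": purpose,
        "data_categories": data_categories,
    })

line = audit_entry("jbloggs", "contract-summariser",
                   "draft client report", ["name", "financial details"])
# In production, append the line to tamper-evident storage, not stdout.
```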

Data retention automation. Automatically purge AI processing logs and cached data according to your retention policy. Do not rely on manual deletion.
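
Scheduled purging can be as simple as filtering by a cutoff date. A sketch, where the 30-day window is an example value rather than a recommendation:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # example value -- set per your retention schedule

def purge_expired(entries: list[dict], now: datetime) -> list[dict]:
    """Keep only entries still inside the retention window; run on a schedule."""
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [e for e in entries if e["created"] >= cutoff]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
entries = [
    {"id": "log-1", "created": datetime(2025, 5, 20, tzinfo=timezone.utc)},
    {"id": "log-2", "created": datetime(2025, 2, 1, tzinfo=timezone.utc)},
]
kept = purge_expired(entries, now)  # log-2 is past the window and is dropped
```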

Step 5: Train your team

Data privacy training for AI should cover:

  • Which tools are approved for which data types
  • How to pseudonymise data before AI processing
  • What to do if data is accidentally sent to an unapproved tool
  • How to respond to client questions about AI use
  • When to escalate privacy concerns

Make training practical. Show real examples of compliant and non-compliant AI use. Test understanding. Refresh annually.

Step 6: Plan for data subject rights

Under UK GDPR and US state laws, individuals have rights over their data. Your AI systems need to support:

Right of access. You must be able to identify and provide copies of personal data processed through AI systems when requested.

Right to erasure. You must be able to delete personal data from AI systems and processing logs when requested (subject to legal retention requirements).

Right to object. Individuals can object to processing based on legitimate interests. If a client objects to their data being processed through AI, you need a process for excluding their data.

Right to explanation. Under certain circumstances, individuals have the right to meaningful information about the logic of automated decision-making. If your AI system makes or supports decisions affecting individuals, you should be able to explain how.
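
These rights are far easier to honour if processing records are indexed by data subject from the start. A minimal sketch (the record shape and tokens are hypothetical):

```python
def find_subject_records(records: list[dict], subject: str) -> list[dict]:
    """Right of access: locate every AI processing record for one individual."""
    return [r for r in records if subject in r.get("subjects", [])]

def erase_subject(records: list[dict], subject: str) -> list[dict]:
    """Right to erasure: drop a subject's records (check legal holds first)."""
    return [r for r in records if subject not in r.get("subjects", [])]

records = [
    {"run": "run-1", "subjects": ["Client_1247"]},
    {"run": "run-2", "subjects": ["Client_0042"]},
]
access_copy = find_subject_records(records, "Client_1247")  # matches run-1
remaining = erase_subject(records, "Client_1247")           # run-2 remains
```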

Common mistakes

Treating AI as different from other data processing. The same rules apply. If you would not send client data to an unvetted third party for manual processing, do not send it to an unvetted AI tool for automated processing.

Relying on the AI provider’s general terms. General terms of service are not data processing agreements. Enterprise contracts with specific DPA provisions are required for processing personal data.

Ignoring API call data. Even if your AI provider does not retain your data, the API call itself transmits data over the internet. Ensure encryption in transit and understand the provider’s network architecture.

Assuming anonymisation is easy. Removing names and addresses is not sufficient anonymisation if the remaining data can be used to identify individuals (common with detailed case or financial information). Use pseudonymisation rather than claiming anonymisation you cannot verify.

Not updating policies. Your privacy policy, data protection policy, and client terms should reference AI processing. If clients are not informed that their data may be processed through AI systems, that is a transparency failure.

What we implement at Formulaic

Every AI system we build includes privacy by design: data minimisation in system inputs, pseudonymisation where appropriate, enterprise-grade hosting with UK data residency, comprehensive audit trails, and documented data flows. We provide DPIA templates specific to each system and include privacy compliance in our standard delivery documentation.

Across 30 production systems, we have maintained zero data privacy incidents. Not because privacy is easy, but because we build it into the architecture from day one rather than treating it as an afterthought.

The firms that handle AI privacy best treat it as a feature, not a constraint. Clients trust firms that can articulate how their data is protected. That trust is a competitive advantage.

FAQ — RELATED QUESTIONS
Does UK GDPR apply to AI systems used by law firms?

Yes. If your AI system processes personal data (names, addresses, financial details, case information), UK GDPR applies in full. You need a lawful basis for processing, appropriate security measures, and compliance with data subject rights. The fact that processing is automated does not reduce your obligations.

Do I need a data protection impact assessment for AI?

If your AI processing involves systematic evaluation of personal aspects, large-scale processing of special category data, or new technology applied to sensitive data, yes. Most professional services AI systems that handle client data will trigger the DPIA requirement.

Can AI providers use my client data to train their models?

Free and consumer tiers of most AI tools allow training use unless you opt out. Enterprise and API tiers contractually prohibit training on your data. Always use enterprise tiers and verify the contractual position. Read the data processing agreement, not just the marketing page.

What US privacy laws apply to AI in professional services?

The California Consumer Privacy Act and similar state laws in Colorado, Connecticut, Virginia, and others impose obligations on handling personal information through AI. Additionally, attorney-client privilege, accountant-client privilege, and professional conduct rules add sector-specific obligations.

How should I handle data subject access requests involving AI?

You must be able to identify what personal data your AI systems hold and process. This means maintaining records of data flows through AI systems, the ability to extract or delete a specific individual's data, and transparent processing records. Design this capability into your AI systems from the start.

Is anonymisation a viable approach for AI data privacy?

Pseudonymisation (replacing identifiers with tokens) reduces risk and can simplify compliance. True anonymisation (irreversible removal of all identifying information) eliminates GDPR obligations but is difficult to achieve reliably. Most firms use pseudonymisation for AI processing and re-identify outputs when needed.

What happens if there is a data breach involving an AI system?

The same breach notification rules apply as for any data processing. Under UK GDPR, notify the ICO within 72 hours if the breach poses a risk to individuals. Notify affected individuals without undue delay if the risk is high. Your incident response plan should include AI systems specifically.

Do cross-border data transfer rules affect AI use?

Yes. If your AI provider processes data outside the UK or EEA, you need appropriate transfer safeguards: adequacy decisions, standard contractual clauses, or binding corporate rules. Most major AI providers offer UK and EU data residency options that avoid transfer issues entirely.

Andy Lackie

Founder, Formulaic. 12+ years building growth systems for professional services firms. Shipped 30 production AI systems across 6 clients.


Want personalised recommendations?

Take the AI Opportunity Scorecard for a benchmarked readiness score and three prioritised use cases specific to your firm. 3 minutes. Free.