1 November 2025

The Confidentiality Dilemma: AI's Impossible Choice

Tim Baker
Director, AlchemAI Consulting Ltd
A converted sceptic with 40 years of scar tissue

AlchemAI Consulting Ltd | August 2026


Introduction

Here is the fundamental dilemma of Artificial Intelligence in any business setting: you cannot use the most powerful AI tools without putting your data into them, and you cannot put your data into them without risking its confidentiality. This is not a bug; it is a feature of how Cloud AI works. For any business that handles sensitive information — which means almost every growing company dealing with clients, contracts, commercial negotiations, or staff — this creates an uncomfortable choice between capability and confidentiality.

This is not a theoretical risk. In April 2023, employees at Samsung, one of the world's most sophisticated technology companies, accidentally leaked confidential source code and internal meeting notes by pasting them into ChatGPT. [1] They were not malicious; they were simply using the tool as designed. If it can happen at Samsung, it can happen to your business.

The Horns of the Dilemma

The dilemma has two parts, and both are equally sharp.

On the one hand, the capability is undeniable. Cloud AI models like GPT-4o and Claude 3 are astonishingly powerful. They can summarise a lengthy contract in seconds, draft a complex client proposal, or analyse a dataset for patterns that would take a human analyst days to find. The potential productivity gains are enormous. To ignore these tools is to accept a permanent competitive disadvantage as your competitors quietly adopt them.

On the other hand, the risk is structural. When you use a cloud AI tool, your data is sent to a third party — typically a US company like OpenAI, Google, or Microsoft. At that moment, you lose control. Your data may be stored on servers in the US, used to train future AI models, or accessed by US law enforcement under the CLOUD Act. [2] For a UK business, this is not just a security risk; it is a potential conflict with your legal and professional obligations under UK GDPR.

The Legal Dimension: UK GDPR and the CLOUD Act

The UK GDPR is clear: you cannot transfer personal data to a third country without adequate safeguards. The US is not automatically considered an "adequate" jurisdiction. While legal mechanisms like Standard Contractual Clauses exist, they are complex and do not override the fundamental problem of the US CLOUD Act, which allows US authorities to compel US companies to hand over data stored anywhere in the world. [3]

For a growing business, the practical implication is straightforward: if you paste a client's personal details, a commercially sensitive contract, or confidential financial information into a public AI tool, you may be in breach of your UK GDPR obligations — and you may not even know it.

The Professional Dimension: Confidentiality and Trust

Beyond the legal risk, there is a deeper issue of trust. Your clients share information with you because they trust you to protect it. That trust is the foundation of your commercial relationships.

In February 2026, a US federal court held that using a public AI platform for case analysis waived legal professional privilege. [4] The court reasoned that because the AI provider's terms of service allowed it to use data for training, there was no "reasonable expectation of confidentiality." While that ruling applies to lawyers, the principle is broader: any professional who sends client data to a third-party AI provider is, at best, in a grey area, and at worst, in clear breach of their duty of confidentiality.

Can the Risk Be Mitigated?

There are several ways to approach this, but none are perfect.

Enterprise Accounts
How it works: Use business-focused versions such as Microsoft Copilot for M365 or OpenAI Enterprise, which have stronger data protection terms.
The problem: Data is not used for training, but it still leaves your premises and remains subject to the CLOUD Act.

Data Anonymisation
How it works: Remove all personal identifiers from data before sending it to the cloud.
The problem: True anonymisation is extremely difficult; AI can often re-identify individuals from seemingly anonymous data.

On-Premise AI
How it works: Run a smaller AI model on your own hardware, so the data never leaves your control.
The problem: The only complete solution for confidentiality, but on-premise models are less capable than their cloud counterparts.
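To see why anonymisation is harder than it looks, consider a minimal sketch of the kind of pattern-based redaction a business might bolt on before sending text to a cloud tool. The patterns and function below are illustrative assumptions, not a reliable anonymisation scheme; the point is what slips through.

```python
import re

# Naive pattern-based redaction (illustrative only, not a complete scheme).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane Doe at jane.doe@client.co.uk or 07700 900123 re the merger."
print(redact(prompt))
# The email and phone number are caught, but the name "Jane Doe" and the
# word "merger" survive untouched: context alone can still identify the
# client, which is exactly the re-identification risk noted above.
```

Even with far more sophisticated tooling, names, job titles, deal terms, and writing style all leak identity, which is why the table above treats anonymisation as a partial mitigation at best.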

Conclusion: A Dilemma with No Easy Answer

There is no magic bullet here. The confidentiality dilemma is real, and it requires a conscious, risk-based decision for every single use case. The answer is not to ban AI, nor is it to blindly adopt it. The answer is to understand the trade-offs and build a clear policy around them.

For low-risk tasks involving public or non-sensitive information, cloud AI is the clear winner. For high-risk tasks involving confidential client data, commercially sensitive information, or personal data, on-premise AI is the only responsible choice. The challenge for every growing business is to draw that line clearly, communicate it to every member of the team, and accept that for now, the most powerful tools are also the most dangerous.
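Drawing that line can be made concrete in policy. A minimal sketch, assuming a business has already classified its data into sensitivity tiers (the tier names and backend labels below are hypothetical):

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1        # published reports, marketing copy
    INTERNAL = 2      # non-sensitive working documents
    CONFIDENTIAL = 3  # client data, contracts, personal data

def choose_backend(level: Sensitivity) -> str:
    """Route a task to a cloud or on-premise AI tier by data sensitivity."""
    if level is Sensitivity.PUBLIC:
        return "cloud"             # capability wins; nothing confidential at stake
    if level is Sensitivity.INTERNAL:
        return "cloud-enterprise"  # stronger terms, but data still leaves premises
    return "on-premise"            # confidential data never leaves your control

print(choose_backend(Sensitivity.CONFIDENTIAL))
```

The hard part is not the routing logic but the classification step that feeds it: every member of the team has to know which tier a given document falls into before they paste it anywhere.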

This is not a comfortable position, but it is an honest one. The first step in solving a dilemma is to admit that it exists.

References

[1] Samsung Employees Leaked Confidential Data to ChatGPT - Gizmodo

[2] CLOUD Act vs. GDPR: The Conflict About Data Access - Exoscale

[3] Prevent CLOUD Act Risks: Secure European Data - Kiteworks

[4] AI Privilege Waivers: SDNY Rules Against Privilege Protection for Consumer AI Outputs - Gibson Dunn
