Confidentiality in the Age of AI

Why Your NDAs Probably Require an Upgrade

Confidentiality agreements have always been the quiet backbone of collaboration. They enable companies to share sensitive information without fear that it will be leaked, misused, or fall into the hands of a competitor. For years, these agreements assumed that “sharing” meant passing along a document, an email, or perhaps a presentation. That assumption is now outdated.

The rapid adoption of generative AI tools has introduced a new category of risk that most standard nondisclosure agreements (NDAs) were never designed to address. When people paste strategic plans, technical specifications, or roadmap details into an AI model for help summarising or improving them, they may inadvertently disclose protected information to a system outside their control.

💡
If the AI tool retains those inputs or uses them for training, the information is no longer fully private.

This is not a theoretical concern. In workshops and brainstorming sessions, it is increasingly common for someone to open a public AI tool like ChatGPT to capture notes, rewrite a summary, or suggest new ideas. If the AI tool retains those inputs or uses them for training, the information is no longer fully private. The result is a legal gap that many organisations are only just beginning to notice.

Why Traditional NDAs Fall Short

The key weakness is that older NDAs rarely contemplate disclosure to an automated system. They focus on human-to-human sharing, such as handing over a file, sending an email, or giving a presentation. They do not explicitly cover the act of typing or pasting confidential material into a third-party system that is hosted, controlled, and trained by an external provider.

💡
If the NDA does not prohibit that specific scenario, enforcing confidentiality becomes significantly more challenging.

Most generative AI platforms are not designed as secure, isolated environments for each user. Depending on the provider and settings, user inputs may be stored, logged, or used for further training purposes. That creates a risk that the original confidential information could be accessible to others or indirectly surface in another context. If the NDA does not prohibit that specific scenario, enforcing confidentiality becomes significantly more challenging.

The New Breed of AI-Specific NDA Clauses

Recognising this gap, legal teams are beginning to insert AI-specific language into NDAs and master service agreements. The most common elements include:

An explicit ban on using public AI/ML tools without consent

Wording typically prohibits uploading, processing, or disclosing any confidential information to a publicly available AI model unless the other party has given written approval.

Assurances about data handling

Even when an AI tool is approved, parties may require written proof that the tool will not retain, share, or train on the provided data.

Extending obligations to subcontractors and partners

Anyone who handles the information, directly or indirectly, must be bound by the same restrictions. This prevents a subcontractor from bypassing the rules.

Allowing use only with “commercially reasonable assurances”

Some contracts permit the use of AI if the provider can demonstrate technical safeguards, such as data isolation, encryption, and strict retention limits.

This type of language is already appearing in contracts for industries that deal heavily in trade secrets, intellectual property, or regulated data.

Real-World Adoption and Guidance

The shift is not confined to private companies. In government and research settings, AI clauses are becoming formal policy. The U.S. National Institutes of Health now forbids peer reviewers from using generative AI to draft or summarise review comments unless specifically authorised. The concern remains the same: once sensitive material leaves the controlled environment, confidentiality cannot be guaranteed.

💡
The duty of confidentiality applies just as strongly to a typed prompt as to a printed contract.

Professional bodies are also weighing in. Legal ethics committees have reminded lawyers that they must never feed client-confidential material into a public AI tool that retains or trains on the input without informed consent. The duty of confidentiality applies just as strongly to a typed prompt as to a printed contract.

Designing NDAs for the AI Era

Updating confidentiality agreements for the age of AI is not complicated, but it does require precision. Broad or vague language is less effective than clauses that clearly define the boundaries.

Add explicit AI-use restrictions

Include language that forbids sharing confidential information with public AI systems unless written consent is granted. Define what constitutes a “public” system and distinguish it from private, on-premise, or vendor-hosted tools that require strict safeguards.
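To make that distinction easier to apply day to day, here is a minimal Python sketch of how an organisation might record its own tool classifications internally. The tool names, categories, and fields are hypothetical assumptions for illustration; the sketch sits alongside the contractual language, it does not replace it.

```python
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    category: str                 # "public", "vendor_hosted", or "on_premise"
    written_consent: bool         # counterparty approval on file?
    safeguards_documented: bool   # e.g. no retention, no training, isolation

def may_receive_confidential_data(tool: AITool) -> bool:
    """Mirror the clause: public systems need written consent on file;
    private or vendor-hosted tools still need documented safeguards."""
    if tool.category == "public":
        return tool.written_consent and tool.safeguards_documented
    return tool.safeguards_documented

# A public chatbot with no approval or assurances is rejected outright.
print(may_receive_confidential_data(
    AITool("public-chatbot", "public", written_consent=False, safeguards_documented=False)
))  # False

# A hypothetical on-premise engine with documented safeguards passes the check.
print(may_receive_confidential_data(
    AITool("internal-mt-engine", "on_premise", written_consent=True, safeguards_documented=True)
))  # True
```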

Demand explicit assurances from vendors

If an AI tool is approved, require technical documentation confirming that it does not retain or train on the data, that inputs are encrypted in transit and at rest, and that each customer’s environment is logically separated from others.

Bind all downstream parties

Your NDA should make clear that subcontractors, sub-processors, and affiliated partners are subject to the same AI restrictions. This closes the loophole where an indirect partner might use a risky tool.

Educate and enforce

Even the best NDA is only effective if people understand and follow it. Provide training to employees and partners on what counts as confidential, how to handle AI tools, and when to seek permission before using them.

Align with broader legal obligations

Clauses must coexist with other legal requirements, such as whistleblower protections, trade secret laws, and privacy regulations like the GDPR. Overly strict wording that appears to gag lawful reporting could be challenged in court.

Risks Beyond Contracts

While contracts are the most visible safeguard, they are not the only protection needed. The human factor plays a major role. In many cases, the decision to paste sensitive content into an AI tool is made on the spot, without time to consider the legal or security implications.

This makes internal policy essential. A well-written AI usage policy can outline the dos and don’ts, provide examples of safe and unsafe behavior, and direct staff toward approved tools and workflows. Combined with updated NDAs, such policies create both a contractual obligation and a cultural expectation.
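As one small illustration of how such a policy can be backed by a lightweight technical check, the Python sketch below flags text that carries obvious confidentiality markers before someone pastes it into an external tool. The marker list is an assumption for illustration and would need to reflect each organisation’s own labels and client code names.

```python
import re

# Patterns that suggest the text is confidential; this list is illustrative
# and would be maintained per client and project in practice.
CONFIDENTIALITY_MARKERS = [
    r"\bconfidential\b",
    r"\binternal use only\b",
    r"\bdo not distribute\b",
    r"\bnda\b",
]

def flag_before_external_use(text: str) -> list[str]:
    """Return the markers found in the text. An empty list means no obvious
    red flags were spotted, not that the text is safe to share."""
    return [pattern for pattern in CONFIDENTIALITY_MARKERS
            if re.search(pattern, text, flags=re.IGNORECASE)]

draft = "CONFIDENTIAL - internal use only: Q3 roadmap summary for client review."
print(flag_before_external_use(draft))  # prints the two markers matched by the draft
```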

Implications for LSPs and Similar Businesses

Language service providers (LSPs) work with client content that is often highly sensitive, including unreleased product materials, legal drafts, medical documentation, and marketing campaigns. Any accidental exposure could have severe reputational and legal consequences.

Updating NDAs to include AI-specific clauses is not simply a defensive move. It also signals to clients that the provider understands the technology, recognises the risks, and is proactively managing them. This can become a competitive advantage in an industry where trust is a core asset.

💡
Ensuring that the same AI usage rules bind every link in the chain is essential.

For LSPs, the stakes are especially high because confidentiality breaches can occur at multiple points in the workflow, including among translators, editors, subcontracted vendors, and even automated QA tools. Ensuring that the same AI usage rules bind every link in the chain is essential.

A Practical Path Forward

If your organisation has not yet reviewed its NDAs for AI-related risks, here is a straightforward approach:

  • Audit your current agreements to see if they mention AI, machine learning, or automated systems (a rough scripted version of this first pass is sketched after this list).
  • Identify your high-risk scenarios, such as live workshops, vendor collaborations, or client review cycles.
  • Draft precise language that addresses those scenarios directly, drawing on examples now in common use.
  • Integrate those clauses into both new agreements and renewals of existing contracts.
  • Educate your teams and partners to make sure the changes are understood and followed.
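For the audit step, a rough Python script along the lines below, assuming your agreements are available as plain-text exports in a single folder, can give a quick first pass. The folder path and search terms are illustrative assumptions, and a hit or a miss is a prompt for legal review, not a conclusion.

```python
import re
from pathlib import Path

# Illustrative search terms; extend with the vocabulary your own contracts use.
AI_TERMS = re.compile(
    r"artificial intelligence|machine learning|generative ai|\bAI\b|automated system",
    re.IGNORECASE,
)

def audit_agreements(folder: str) -> dict[str, bool]:
    """Map each agreement file to whether it mentions AI-related terms at all."""
    results = {}
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        results[path.name] = bool(AI_TERMS.search(text))
    return results

if __name__ == "__main__":
    for name, mentions_ai in audit_agreements("./nda_exports").items():
        status = "mentions AI" if mentions_ai else "no AI language found"
        print(f"{name}: {status}")
```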

The Environment Has Changed

The role of NDAs is to create a clear, enforceable framework for keeping shared information private. That mission has not changed, but the environment has. Generative AI tools make it easier than ever to process and refine information in real time, but they also create new routes for data to escape controlled channels.

By adding AI-specific clauses, demanding clear assurances from technology vendors, and extending protections throughout the supply chain, companies can keep pace with this shift. Just as importantly, by educating their people and partners, they can prevent the sort of casual, well-intentioned disclosures that undermine confidentiality.

The companies that adapt now can not only reduce their legal and security risks but also position themselves as trustworthy partners in a landscape where both innovation and caution are essential.

Disclaimer: This article is for general informational purposes only and does not constitute legal advice. Reading it does not create an attorney-client or other professional relationship. Laws and contractual requirements vary by jurisdiction, industry, and specific circumstances. Always seek advice from a qualified legal professional in your jurisdiction before taking action or making decisions.


About the Author

Simon Hodgkins,
CMO • President • Editor-in-Chief
