What SaaS and Commercial Agreements Are Missing in the AI Era
Most companies are already using AI in some form — drafting content, generating code, analyzing data, or supporting internal workflows.
What hasn't caught up is the paper that governs it.
The majority of SaaS agreements, licensing arrangements, and vendor contracts in circulation today were written for a pre-AI workflow. They assume human-created deliverables, clear authorship, and clean boundaries around data use. Those assumptions no longer hold. And in many cases, the issue isn't whether AI is allowed — it's whether its use is already creating risk under agreements that say nothing about it.
A Common Scenario
A company hires a vendor to produce deliverables such as documentation, code, marketing materials, or internal tools. The vendor uses an AI tool to accelerate the work. No one discusses it.
Later, the company learns that portions of the deliverables may not be protectable IP, that confidential information may have been input into a third-party system, and that the vendor's AI usage may not align with the agreement's data restrictions. None of it was addressed in the contract.
That's not an edge case anymore.
The Disclosure Question Isn't What It Sounds Like
In most commercial contexts, there is no standalone legal obligation to disclose that AI was used to produce deliverables. But that framing misses the more important question:
Are you already breaching your existing agreement by using AI?
The risk in most contracts isn't the absence of a disclosure obligation. It's that provisions already on the page (confidentiality clauses, data security requirements, data handling restrictions) may have been triggered the moment AI tools entered the workflow. If a vendor inputs client data into a commercial AI platform, that may constitute disclosure to a third party, violate "no external use" restrictions, or conflict with data processing obligations already in the agreement.
Silence is no longer neutral. New agreements should expressly permit or restrict AI use, define acceptable tools and guardrails, and address data handling, including whether AI platforms may retain or use inputs for training. If the agreement says nothing, both parties are relying on assumptions that may no longer reflect how work is actually being done.
Ownership of AI-Generated Work Product
This is where the legal and contractual issues intersect, and where the stakes are highest for customers.
The U.S. Copyright Office's position is clear: purely AI-generated content is not protectable, because copyright requires human authorship. Federal courts have reinforced that principle, including in Thaler v. Perlmutter (a 2023 decision rejecting copyright registration for an image generated autonomously by an AI system without meaningful human creative input).
Most real-world outputs, however, aren't purely AI-generated. They sit in a gray zone involving human prompting, editorial selection, and refinement. How much human contribution is enough to support a copyright claim remains genuinely unsettled, and that uncertainty is unlikely to be resolved in the near term. What can be said is this: the more substantive and documented the human creative contribution across drafting, selection, and editing, the stronger the position. Passive reliance on AI output with minimal human shaping is the weakest posture.
Why the Contract Matters More Than the Doctrine Right Now
Because the law is still developing, contracts are doing most of the heavy lifting. And most contracts weren't drafted with any of this in mind.
This is not a theoretical exercise. It's a drafting problem.
Standard "Work Product" definitions assume deliverables are protectable, assignable, and clearly attributable to a human author. AI complicates all three. Work-for-hire and assignment clauses present a related issue: if the underlying output isn't protectable IP, the assignment clause may be transferring nothing of enforceable value — leaving the customer with no meaningful IP rights in what they paid for. Typical IP representations and warranties, which presuppose originality, non-infringement, and clear ownership, are increasingly difficult to stand behind when AI tools are part of the production workflow.
SaaS and platform agreements present a further wrinkle. A vendor's standard terms may provide that the vendor owns all improvements or enhancements to the platform. Whether that captures AI-assisted outputs depends entirely on how the agreement is drafted, and in most cases, it's simply unclear.
What Careful Drafting Looks Like
The practical response is to bring contracts into alignment with how work is actually being done.
For vendors and service providers, that means defining whether and how AI tools may be used, aligning that usage with existing confidentiality obligations, clarifying what constitutes "Work Product" in an AI-assisted workflow, and being precise about what IP rights are actually being transferred. It also means reviewing the terms of any AI platforms you rely on (for example, some commercial tools retain rights to use inputs in ways that may conflict with your client obligations).
For customers, it means deciding whether AI use is permitted, restricted, or subject to disclosure, and making that decision explicit in the agreement rather than leaving it to implication. Ownership assumptions in deliverables should be revisited, AI use should be addressed in IP representations and warranties, and the risk of unprotectability should be allocated deliberately rather than left to chance.
A Note on Drafting
The right language will vary depending on whether you are the vendor or the customer, the nature of the deliverables, and the sensitivity of the data involved. As an illustration, a basic AI use provision in a commercial services agreement might read:
AI Tools. Service Provider may use artificial intelligence tools in performing the Services, subject to the following conditions: (a) Service Provider shall not input any Confidential Information of Customer into any AI platform without Customer's prior written consent; (b) Service Provider shall ensure that any AI tools used do not retain inputs or outputs for platform training purposes, or shall disclose any such retention to Customer in advance; and (c) to the extent any deliverable is generated in whole or in part through the use of AI tools, Service Provider shall identify the AI-assisted portions and shall not represent such deliverables as independently protectable intellectual property without a reasonable basis grounded in human authorship.
This is illustrative language intended to show the structure of the issue only and is not a model clause for any particular transaction. The appropriate provisions will depend on context and should be tailored accordingly.
The Takeaway
AI isn't the issue. Unexamined assumptions in your contracts are.
If your agreements were drafted even 12 months ago, there's a good chance they rely on expectations about authorship, ownership, and data use that no longer reflect how work is actually being done. That gap is where the risk sits, and where careful drafting can materially change the outcome.
If your agreements haven't been reviewed with these issues in mind, now is a reasonable time to do that.
Cruxterra Law Group advises clients on commercial contracts, SaaS and technology licensing, and M&A transactions. This post is general information only and does not constitute legal advice.