The hidden IP risks in your AI software contracts - a deep dive into the T&Cs
Generative AI has rapidly become a standard tool for enhancing business productivity. However, while adoption is widespread, understanding of the legal Terms and Conditions (T&Cs) governing these tools often lags behind.
To assist organisations in making informed procurement and governance decisions, we conducted an in-depth analysis of the T&Cs for the most commonly used off-the-shelf AI tools, including ChatGPT, Claude, Gemini, Meta AI, Microsoft Copilot, Amazon Q, and Adobe Firefly.
While functionality is often similar across platforms, the legal protections, particularly those relating to Intellectual Property (IP), vary significantly. For business leaders, relying on assumptions rather than the fine print can expose the organisation to material legal risk.
Here are key themes emerging from the contracts of major AI providers.
1. The "Free Tier" Liability Gap
A clear distinction exists between free and paid versions of AI tools.
Most major providers have announced they will indemnify customers against third-party IP infringement claims (e.g., if the AI generates content that inadvertently copies a protected work). However, our review confirms that these indemnities are almost exclusively reserved for paid subscriptions.
Strategic impact: If your staff are using free versions of tools like ChatGPT, Microsoft Copilot or Google Gemini for business work, your organisation likely has zero protection from the provider if that content infringes on a third party's IP. You are effectively self-insuring against copyright lawsuits.
2. The limits of indemnity (even when you pay)
For enterprise users paying for subscriptions, IP indemnities are available, but they are not absolute. The "devil is in the detail" regarding exclusions.
In many T&Cs, the provider's obligation to cover your legal costs is voided if:
You customised or "fine-tuned" the model.
You integrated the AI tool into another application.
You failed to implement the provider’s specified safety filters.
The infringement concerns a trademark rather than copyright (trademark claims are expressly excluded in three of the tools reviewed).
Furthermore, some providers impose strict liability caps, meaning that even if they accept responsibility for a breach, the financial compensation may be capped at a low amount that does not cover the full damage to your business.
3. Ownership of output is not guaranteed
A common assumption is that if your employee generates content using AI, your company owns it. The legal reality is more complex.
While some T&Cs acknowledge user ownership, others lack clear assignment provisions. More importantly, under current Australian law, copyright might not even subsist in AI-generated works because they lack a human author.
Strategic impact: Businesses should be cautious about using AI to generate core intellectual property (such as logos or proprietary code) where exclusive ownership is critical to the company's valuation or competitive advantage.
4. Your data and "Reverse Indemnities"
The flow of rights is not one-way. By using these tools, users often grant broad, permissive licences to the AI providers to use the input data.
Additionally, many T&Cs require the user to provide IP warranties and indemnities to the provider. This means if your employee inputs confidential, privileged, or third-party data into the AI, and that action leads to a legal issue for the AI provider, your company could be liable to indemnify the tech giant.
5. Jurisdiction
Finally, enforcement of rights remains a practical hurdle. The T&Cs for many of these products are governed by the laws of the United States or Ireland. Dispute resolution clauses often require proceedings to take place in those jurisdictions.
For an Australian business, the cost and complexity of litigating in a foreign court may render any theoretical rights or protections unenforceable in practice.
Key takeaway
The productivity gains from Generative AI are real and tangible, but they require a sophisticated approach to risk management. The "click-through" agreement is no longer a formality; it is a significant allocation of corporate risk.
Action items to take immediately
Audit your AI tools: Review the T&Cs of every AI tool currently in use at your business. Create an "allowlist" for safer, paid enterprise tools and a "denylist" for high-risk or free tools.
Update governance policies: Educate staff on the specific risks of inputting proprietary or confidential data into these models.
Establish a response protocol: Ensure you have a process for handling IP claims that complies with your provider's indemnity conditions (e.g., notification windows).
Review downstream contracts: Consider whether your customer contracts need to be updated to reflect the IP risks associated with your use of AI tools.
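The allowlist/denylist audit described above can also be enforced programmatically, for example as a check in an internal procurement or IT-approval workflow. The following is a minimal sketch only; the tool names and risk tiers are entirely hypothetical and would need to be populated from your own T&C review.

```python
# Hypothetical sketch of an AI-tool allowlist/denylist check.
# Tool identifiers below are illustrative placeholders, not real review findings.

APPROVED_TOOLS = {
    # Example: paid enterprise tiers where the provider offers an IP indemnity
    "copilot-enterprise",
    "chatgpt-enterprise",
}

BLOCKED_TOOLS = {
    # Example: free tiers with no indemnity coverage for business use
    "chatgpt-free",
    "gemini-free",
}

def check_tool(tool_name: str) -> str:
    """Return a governance decision for a requested AI tool."""
    if tool_name in APPROVED_TOOLS:
        return "allowed"
    if tool_name in BLOCKED_TOOLS:
        return "blocked"
    # Unknown tools are routed to legal/procurement for T&C review
    return "needs-review"

print(check_tool("copilot-enterprise"))  # allowed
print(check_tool("new-ai-tool"))         # needs-review
```

The key design point is the default: any tool not yet reviewed falls into "needs-review" rather than being silently permitted, mirroring the cautious posture the audit recommends.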