How AI buyers can enforce usage restrictions

Enterprises are increasingly looking to adopt artificial intelligence (AI) solutions to improve existing workflows, generate new business ideas, and maintain their competitive edge. Before investing in such technology, however, they need to consider the internal and external measures available to control how these systems are used.

A growing number of corporate AI policies recognize the need for controls to mitigate potential risks that may arise throughout the life cycle of an AI solution. Those policies must also be applied externally, in the context of supplier relationships. Examples include a supplier’s unauthorized use of customer data, or the use of customer data to create custom solutions that are subsequently shared with the supplier’s larger customer base, which can be particularly problematic where the AI system is intended to provide a competitive advantage, as well as controls on model or implementation errors that could result in biased outcomes.

Another key consideration is the risk of regulatory non-compliance, particularly around data privacy and, increasingly, AI-specific regulation.

However, customers can only activate their AI policies and apply contractual controls once they have determined how AI is being used by their suppliers. Given the enormous increase in interest in generative AI and its potential benefits for businesses, many providers are happy to showcase how AI is incorporated into their service offerings. But where the engagement is not solely AI-related, it is necessary to define the term “AI” and incorporate it into a notification mechanism. A sensible starting point for the customer is to consider the risks it is seeking to mitigate through contractual safeguards and, in turn, the controls it wants to place on the supplier’s use of AI to address those risks.

Controls may take the form of appropriate-use restrictions, a right to object to the use of AI or the way it will be used, or a requirement to obtain consent before a new or updated AI solution is used. Generally speaking, a customer looking to impose controls on the use of AI has three options:

putting restrictions on the use of AI as a general concept;

placing restrictions on the supplier’s current AI-specific service offering; or

putting in place controls that reflect compliance with existing laws.

When choosing between these approaches, customers should take the wider commercial deal into account.

The first approach is unlikely to work well where the customer knows it is procuring AI services. If a supplier’s solution uses machine learning technology as part of its standard functionality, as is increasingly common, the supplier will almost certainly resist obligations that seek to restrict the use of AI altogether. The second approach, which can be more tailored and use case-specific, is typically preferred by suppliers.

As for the third approach, we are already seeing contractual positions based on the EU AI Act, which is currently in the final stages of legislative adoption.

The EU AI Act regulates AI using a risk-based methodology, with four distinct risk categories, and imposes significant obligations where AI systems pose a high level of risk or above. We are already seeing contract drafting that reflects these requirements.

For instance, if the customer is engaging the supplier for an AI solution that is classified as “limited risk” under the EU AI Act, a limited set of related controls imposed on the supplier’s current service offering may be deemed appropriate. In that situation, customers should also seek to include in the contract a definition of “prohibited AI” linked to the EU AI Act’s “high risk” and “unacceptable risk” categories, to capture uses that would not otherwise be covered by those controls.

The EU AI Act is the only law that has informed drafting approaches to date, but as other jurisdictions settle their regulatory positions and the global regulatory landscape continues to develop, we expect contracting approaches to evolve accordingly.

AI customers should also make sure that their contracts with suppliers cover other critical matters beyond the obvious question of “is the use of AI permitted?” Where these matters are not addressed directly, contractual provisions such as enhanced supplier management may serve as compensating controls, mitigating some, but not all, of the associated risks.

Testing and monitoring
The next crucial concern for a customer that approves AI use by its supplier will be how to ensure that the AI system is operating as intended. In a traditional software purchase, customers would expect to complete comprehensive acceptance testing before rolling the program out across their entire business. Testing an AI system is more difficult, however, particularly if the aim is to identify as many potential instances of bias, error, and non-compliance as possible.

It may be nearly impossible to complete “full” testing of a complex AI system before it is put into use. Instead, customers may seek to reduce this risk by testing new AI systems in a pilot program, for instance by using the solution in a separate business unit or with a separate data set, and then evaluating the results before deciding whether to proceed with a full-scale rollout.
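By way of illustration only, the sketch below shows what a customer-side pilot evaluation of this kind might look like in Python: the system’s outputs are compared against a labelled hold-out data set and checked against error and disparity thresholds before a wider rollout is approved. The record structure, the predict() callable, and the thresholds are hypothetical assumptions, not terms drawn from any particular contract or supplier offering.

    # Illustrative sketch: evaluating a pilot AI deployment against a labelled
    # data set before approving a full rollout. The record structure, predict()
    # callable, and thresholds are hypothetical placeholders, not a supplier API.
    def evaluate_pilot(records, predict, max_error_rate=0.05, max_disparity=0.10):
        """Return (approve, report) for a pilot run of an AI system."""
        if not records:
            raise ValueError("pilot data set is empty")

        totals, errors = {}, {}
        for rec in records:
            group = rec["group"]                      # e.g. business unit or customer segment
            totals[group] = totals.get(group, 0) + 1
            if predict(rec["features"]) != rec["label"]:
                errors[group] = errors.get(group, 0) + 1

        overall_error = sum(errors.values()) / sum(totals.values())
        group_rates = {g: errors.get(g, 0) / n for g, n in totals.items()}
        disparity = max(group_rates.values()) - min(group_rates.values())

        approve = overall_error <= max_error_rate and disparity <= max_disparity
        return approve, {
            "overall_error_rate": overall_error,
            "group_error_rates": group_rates,
            "disparity": disparity,
            "approved_for_rollout": approve,
        }

In practice, the acceptance thresholds used in a pilot of this kind would be set to mirror whatever performance and non-discrimination commitments the parties agree in the contract.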

Although the contract can serve as a control measure, it is no substitute for thorough testing and ongoing monitoring throughout the AI system’s lifecycle. Industry standards in this area are emerging quickly, and suppliers and customers alike must take responsibility for ensuring that AI models are operating as intended.

Data and resources
Effective use of AI frequently requires a solid data strategy to safeguard the customer’s most important data assets. To decide which data the supplier should have access to, and under what circumstances, it is critical for a customer to understand the kinds of business and personal data that it owns or licenses from third parties. Any restrictions, whether third-party or otherwise, should be reflected in the contractual requirements governing the supplier’s use of data.

Both customers and suppliers are concerned about ownership and control of data, with suppliers increasingly pushing back on limitations on their use of outputs. Suppliers frequently seek a broad license to use customer data, signals, derivative data, and feedback in order to enhance their systems or create new data assets. That license can be used not only to benefit the customer but also to enhance the AI system the supplier sells to other clients. There is frequently a shared benefit, provided that the insights and improved learnings generated are appropriately anonymized or aggregated.
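As a minimal sketch of one common approach, assuming a simple suppression threshold rather than any specific contractual standard, the example below aggregates usage feedback into group-level counts and drops any group smaller than a minimum size, so that the learnings shared back with the supplier cannot readily be traced to individual customers.

    # Minimal sketch: aggregating usage feedback before it is shared with a
    # supplier, suppressing any group smaller than a minimum size so that
    # individual customers cannot be singled out. Field names and the
    # threshold are assumptions, not terms from any particular contract.
    from collections import Counter

    def aggregate_feedback(records, group_key="feature_used", min_group_size=10):
        """Return aggregate counts per group, dropping groups below the threshold."""
        counts = Counter(rec[group_key] for rec in records)
        return {
            group: count
            for group, count in counts.items()
            if count >= min_group_size   # suppress small, potentially identifying groups
        }

A data-sharing clause might also need to address re-identification risk, retention, and onward use, which simple aggregation alone does not resolve.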

Granting suppliers these rights can affect IP ownership and, where the supplier’s dataset contains personal data, data protection compliance, and customers should consider it carefully. A company will generally have collected customer data from data subjects for business-related purposes; it may not have anticipated that the data would later be used, particularly by a third party, for ancillary purposes such as training AI systems, and privacy policies would need to account for this. The provenance of training datasets also raises questions for suppliers, who will want assurances that this kind of use was anticipated and that they can use these datasets lawfully without the risk of legal repercussions.

Liability
When contracting for AI systems, both the customer and the supplier are usually concerned about liability for generated output, but liability clauses by themselves do not proactively manage operational risk. One of the factors with the greatest influence on liability is how easily things can go wrong with an AI solution.

While expressly allocating liability, establishing meaningful liability limits, and incorporating warranties and indemnities in the contract offer significant protection, customers and suppliers should also ensure there are contractual controls governing the management of operational risk. Useful tools here include circuit breakers that can halt the use of an AI system if it exhibits bias or error, and the ability to roll back to a previous iteration of the AI solution that showed no signs of corruption.
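The sketch below, using hypothetical class and parameter names rather than any specific supplier’s tooling, illustrates the circuit-breaker idea: reviewed outcomes are tracked over a rolling window, and once the observed error rate breaches an agreed threshold, new requests are routed to a previously approved version of the model.

    # Illustrative sketch of a circuit breaker around an AI model: once the
    # observed error rate over a rolling window breaches an agreed threshold,
    # requests are routed to a previously approved fallback version. All names
    # and thresholds are hypothetical placeholders.
    class ModelCircuitBreaker:
        def __init__(self, current_model, fallback_model, error_threshold=0.05, window=500):
            self.current_model = current_model      # latest supplier-provided version
            self.fallback_model = fallback_model    # earlier version with no known issues
            self.error_threshold = error_threshold  # contractually agreed tolerance
            self.window = window                    # number of recent outcomes to consider
            self.outcomes = []                      # True = error observed, False = acceptable
            self.tripped = False

        def record_outcome(self, was_error):
            """Record whether a reviewed output turned out to be wrong or biased."""
            self.outcomes = (self.outcomes + [was_error])[-self.window:]
            if sum(self.outcomes) / len(self.outcomes) > self.error_threshold:
                self.tripped = True                 # breaker opens: stop using the current model

        def predict(self, features):
            """Route the request to the fallback model once the breaker has tripped."""
            model = self.fallback_model if self.tripped else self.current_model
            return model(features)

In contractual terms, the trigger conditions, who is entitled to trip the breaker, and how service levels apply while the fallback version is in use would all typically need to be agreed between the parties.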
