Can AI technology be implemented compliantly?

Many organisations are in a hurry to implement AI-related features in their products and business tools, as the technology has become increasingly popular. In May, Samsung banned the internal use of ChatGPT following a data leak, but later announced that it was developing its own AI tools.

Why is AI so popular?

Some AI applications have been around for a while, such as facial recognition, security monitoring of networks, language translation and speech recognition. Traditionally, input datasets had to be labelled by humans before being used to train the model through supervised learning, reinforced by human review of the outputs. This was time-consuming, expensive and dependent on the quality of the labelling and review.

However, recent advancements in neural networks, particularly the introduction of the transformer deep learning architecture in 2017, enabled the creation of foundation models such as the generative pre-trained transformer (GPT). As a result, major providers such as OpenAI, IBM and Google are offering (or preparing to offer) foundation models which can be adapted to a wide range of downstream tasks: following instructions, summarising documents and generating novel, human-like content. In addition, a number of resellers will help customise these models to their clients’ needs.

What about compliance?

Whether AI can be implemented compliantly depends on the governance around it. A number of risks should be addressed throughout the AI lifecycle, including the model’s creation, continuous improvement, operation and use by end users. We discuss some of these below:

  • Input data. You will need to establish a lawful basis under the UK GDPR for processing input data. It is widely reported that much of the data used to create some foundation models was scraped from the internet without a licence (and potentially unlawfully). While that historic scraping may not directly affect your organisation, if you use further training or benchmark data to fine-tune a foundation model, you must ensure that data is processed in compliance with the UK GDPR.
  • Data rights. Under the UK GDPR, individuals have rights over their personal data, including the right to object to its inclusion in a training dataset.
  • Automated decisions. The UK GDPR restricts solely automated decisions that have a legal or similarly significant effect on individuals. Depending on how the AI is deployed, meaningful human intervention must be ensured (see the review-gate sketch after this list).
  • Accuracy risk. AI is good at recognising and reproducing patterns, but its outputs are not always accurate, so it should not be used to make wholesale decisions or relied upon without a qualified human supervisor.
  • Security. Any AI system should be deployed on secure infrastructure without sharing input data with the provider or, where data sharing cannot be avoided, with secure methods (such as encryption) used to safeguard the data (see the encryption sketch after this list).
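
To make the "human intervention" point concrete, below is a minimal sketch of a review gate that routes significant or low-confidence AI decisions to a human reviewer. Everything in it (the model interface, the significant_effect flag, the 0.9 threshold and the queue) is hypothetical and for illustration only; it is not any particular provider's API.

    # A minimal, illustrative human-in-the-loop gate for automated decisions.
    # All names here (predict, significant_effect, review_queue) are
    # hypothetical; this sketches the pattern, not a real provider's API.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        outcome: str        # e.g. "approve" or "decline"
        confidence: float   # the model's confidence in that outcome
        automated: bool     # True only if no human was involved

    def decide(application, model, review_queue, threshold=0.9):
        """Route significant or low-confidence decisions to a human reviewer."""
        outcome, confidence = model.predict(application)
        if application.significant_effect or confidence < threshold:
            # Decisions with legal or similarly significant effects need
            # meaningful human involvement, not a rubber stamp.
            review_queue.enqueue(application, suggested_outcome=outcome)
            return Decision("pending_human_review", confidence, automated=False)
        return Decision(outcome, confidence, automated=True)

The design point is that the human reviewer sees the model's suggestion but makes (and records) the final decision, which is what distinguishes meaningful intervention from rubber-stamping.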
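
On the security point, the sketch below uses the Python "cryptography" package's Fernet recipe (symmetric, AES-based encryption) to illustrate encrypting records at rest before they leave a controlled environment. Key management is deliberately simplified here, and encryption at rest does not replace TLS for data in transit.

    # A minimal sketch of encrypting input data at rest with the
    # "cryptography" package (pip install cryptography). In production the
    # key would live in a secrets manager or HSM, never alongside the data.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()
    fernet = Fernet(key)

    record = b"customer query containing personal data"
    ciphertext = fernet.encrypt(record)   # safe to store or pass around

    # Decrypt only inside your secure environment, with access to the key.
    assert fernet.decrypt(ciphertext) == record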

Remember to…

If you are looking to use AI, do not accept an AI provider’s terms on the basis that they are “industry standard”: no such “industry standard” contractual terms currently exist. You should seek appropriate commitments on compliance with the law, non-infringement, IP rights in outputs, product liability, transparency, issue reporting, continued technical support and other matters.

In addition, organisations exploring AI should adopt an AI governance programme which allows them to address the risks arising from their particular use cases and circumstances.
