Is AI development in consumers' best interests?
The Artificial Intelligence (“AI”) industry has come under scrutiny over the pace at which it is developing technology that mimics human behaviour.
What is AI?
AI is technology that enables a computer to think or act in a more ‘human’ way, taking in information from its surroundings and deciding its response based on what it learns or senses. An example is the software behind ChatGPT, a chatbot that can write essays and hold conversations in a ‘human’ way. Some have warned that tools such as ChatGPT could end up displacing hundreds of millions of jobs.
Geoffrey Hinton, a cognitive psychologist and computer scientist best known for his work on artificial neural networks, warned about the growing dangers of developments in the field when he recently quit his job at Google. In an interview in early May 2023, he suggested that AI might soon surpass the information capacity of the human brain. Chatbots, he explained, can learn independently and share knowledge, so whenever one copy acquires new information, it is automatically disseminated to the entire group. This allows AI chatbots to accumulate knowledge far beyond the capacity of any individual.
In March 2023, key figures in AI called for the development of powerful AI systems to be halted for at least six months amid concerns about the threats they pose. Twitter chief Elon Musk and Apple co-founder Steve Wozniak were among those to sign an open letter warning of the risks. Some say the race to develop AI systems is out of control.
The Competition and Markets Authority’s review
Foundation models, which include large language models and generative AI, have emerged over the past five years and have the potential to transform what people and businesses do. To ensure that innovation in AI continues in a way that benefits consumers, businesses and the UK economy, the government has asked regulators, such as the Competition and Markets Authority (“the CMA”), to consider how the innovative development and deployment of AI can be supported against five overarching principles:
- safety, security and robustness
- appropriate transparency and explainability
- fairness
- accountability and governance
- contestability and redress
In light of this, the CMA is opening an initial review of competition and consumer protection considerations in the development and use of AI foundation models. The review seeks to understand how foundation models are developing and to assess the conditions and principles that will best guide their development and use in the future.
This initial review will:
- examine how the competitive markets for foundation models and their use could evolve;
- explore what opportunities and risks these scenarios could bring for competition and consumer protection; and
- produce guiding principles to support competition and protect consumers as AI foundation models develop.
The review will also look at whether AI provides an unfair advantage to companies that are able to afford the technology.
Sarah Cardell, Chief Executive of the CMA, said:
“It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information. Our goal is to help this new, rapidly scaling technology develop in ways that ensure open, competitive markets and effective consumer protection”.
The CMA said it would consult experts and business leaders to come up with a set of “guiding principles” to protect consumers as AI develops, and it is seeking views and evidence from stakeholders. The CMA welcomes submissions by 2 June 2023 and encourages interested parties to respond and be proactive in identifying relevant evidence.
Following evidence gathering and analysis, the CMA will publish a report setting out its findings in September 2023.