The UK and US governments have both recently intervened in the race to develop ever more powerful artificial intelligence (AI) technology. In the UK, the Competition and Markets Authority (CMA) has launched a review of the sector, focusing on the underlying systems – or foundation models – behind AI tools. The review will examine how the markets for foundation models could evolve and what opportunities and risks they present for consumers and competition, and will formulate “guiding principles” to support competition and protect consumers. The CMA aims to publish its findings in September.
Meanwhile, the US government has also announced measures to address the risks in AI development, as Vice President Kamala Harris met with chief executives at the forefront of the industry’s rapid advances. In a statement, the White House said firms developing the technology had a “fundamental responsibility to make sure their products are safe before they are deployed or made public”. The administration also said it would invest $140m (£111m) in seven new national AI research institutes, to pursue artificial intelligence advances that are “ethical, trustworthy, responsible, and serve the public good”.
The interventions by both governments come as regulators face mounting pressure to act on the rapid emergence of AI-powered language generators such as ChatGPT, which have raised concerns about the potential spread of misinformation, a rise in fraud, and the impact on the jobs market. Elon Musk was among nearly 30,000 signatories to a letter published in March 2023 urging a pause in significant AI projects.
The CMA review and the US government’s measures have been met with both praise and criticism. Sarah Cardell, the CMA’s chief executive, said AI had the potential to “transform” the way businesses competed, but that consumers must be protected. Robert Weissman, the president of the consumer rights non-profit Public Citizen, praised the White House’s announcement as a “useful step” but said more aggressive action was needed, including a moratorium on the deployment of new generative AI technologies, the term for tools such as ChatGPT and Stable Diffusion.
A succession of scientists and business leaders have issued warnings about the speed at which AI could disrupt established industries. On Monday (1 May 2023), Geoffrey Hinton, the “godfather of AI”, quit Google to speak more freely about the technology’s dangers, while the UK government’s outgoing chief scientific adviser, Sir Patrick Vallance, urged ministers to “get ahead” of the profound social and economic changes that AI could trigger, saying the impact on jobs could be as big as that of the Industrial Revolution.
The leading players in AI are Microsoft, the ChatGPT developer OpenAI – in which Microsoft is an investor – and Google’s parent, Alphabet, which owns the world-leading UK-based AI business DeepMind. Prominent startups include Anthropic and Stability AI, the British company behind Stable Diffusion. Several of these developers – including OpenAI, Google, Microsoft, and Stability AI – have agreed to have their systems publicly evaluated at this year’s Defcon 31 cybersecurity conference.
Finally, the EU was told on Thursday that it must protect grassroots AI research or risk handing control of the technology’s development to US firms. An open letter coordinated by the German research group Laion – Large-scale AI Open Network – warned the European parliament that one-size-fits-all rules risked eliminating open research and development, and could “entrench large firms” and “hamper efforts to improve transparency, reduce competition, limit academic freedom, and drive investment in AI overseas”. The letter urged the parliament not to eliminate open-source R&D, which, it argued, would leave the European scientific community and economy critically dependent on a handful of foreign and proprietary firms for essential AI infrastructure.