For the first time, the UK government has acknowledged the “existential” risk posed by artificial intelligence (AI). The prime minister, Rishi Sunak, along with Chloe Smith, the secretary of state for science, innovation, and technology, held a meeting with the heads of prominent AI research groups to address concerns regarding safety and regulation.
During the meeting, the chief executives of Google DeepMind, OpenAI, and Anthropic discussed how the development of AI technology could be moderated to mitigate potentially catastrophic risks. In a joint statement, the participants highlighted their conversations on safety measures, voluntary actions labs are considering to manage risks, and the potential for international collaboration on AI safety and regulation.
Emphasizing the need to keep pace with the rapid advancements in AI, the prime minister and CEOs explored various risks associated with the technology, ranging from disinformation and national security concerns to existential threats. They agreed on the importance of the UK government working closely with the labs to ensure that their approach aligns with global innovations in AI.
This meeting marked a significant shift in Rishi Sunak’s stance, as he acknowledged the potential “existential” threat posed by the development of “superintelligent” AI without appropriate safeguards — a contrast with the UK government’s generally upbeat approach to AI development. Sunak is scheduled to meet with Sundar Pichai, the CEO of Google, to further refine the government’s approach to regulating the AI industry. Pichai himself has said that AI is too important not to be regulated, emphasizing the need for effective regulation.
Sam Altman, CEO of OpenAI, added to the discussion by calling for the establishment of an international body, similar to the International Atomic Energy Agency, to regulate the development of AI and control its pace. Altman argued that if “superintelligence” were ever achieved, AI would need to be taken as seriously as nuclear material and regulated accordingly.
The UK’s approach to AI regulation has faced criticism for being too lenient. Stuart Russell, a professor of computer science at the University of California, Berkeley, voiced concerns over the UK’s reliance on existing regulators rather than formulating comprehensive regulations that address a wide range of impacts, including effects on the labour market and existential risks.
The acknowledgement of the “existential” risk of AI by the UK government represents a significant step towards recognizing the potential dangers associated with unchecked AI development. It highlights the necessity of establishing robust safety measures and effective regulations to safeguard against potential risks and ensure responsible AI advancement.