
A group of technology companies has announced a new fund to finance safety research for artificial intelligence (AI). The AI Safety Fund is backed by Microsoft, Google, Anthropic, and OpenAI, which have already invested $10 million in the project. The fund follows a related initiative launched in July, the Frontier Model Forum.
The forum was launched in July as “an industry body focused on ensuring safe and responsible development of frontier AI models,” and its founders said it would support safety research and work with policymakers to ensure such research was put into practice.
The AI Safety Fund was launched as the UK hosted the first global summit on AI safety. The British government’s Department for Science, Innovation & Technology said the forum would recognize the rapid pace of advancement in AI technology and bring stakeholders together to “understand emerging risks and opportunities to ensure they are properly addressed.”
Prime Minister Rishi Sunak said he wants his country to lead the way, and he hopes the conference will achieve critical objectives, including identifying the risks of AI and preventing the loss of human control over its advancement.
In a recent report, “Managing AI Risks in an Era of Rapid Progress,” Geoffrey Hinton, known as the “Godfather of AI,” said the technology could one day “take over” and replace humanity as the most intelligent force in the world. Hinton acknowledged the benefits of AI, particularly in medicine and drug development, but warned that thousands of job roles would disappear.
One notable concern raised by Hinton is the spread of so-called “fake news,” which could influence political decisions or cause public alarm. He also suggested that artificial intelligence could one day write code without human input, with effects that may be neither predictable nor manageable. “One of the ways these systems might escape control is by writing their own computer code to modify themselves. That’s something we seriously need to worry about,” he said.