Sam Altman, CEO of the AI firm behind ChatGPT, testified before Congress on Tuesday about the need for government engagement to mitigate the risks posed by increasingly capable AI systems.
Altman recommended establishing a U.S. or international agency that would license the most advanced AI systems and could revoke those licenses to ensure compliance with safety standards.
His San Francisco-based firm shot to prominence after introducing ChatGPT at the end of last year. The free service provides chatbot responses that seem strikingly human.
Educators’ initial worry that students would use ChatGPT to cheat on homework has snowballed into a broader fear that the newest generation of “generative AI” tools could mislead people, spread falsehoods, violate copyright protections, and displace jobs.
U.S. agencies have promised to crack down on harmful AI products that violate existing civil rights and consumer protection laws. There is no indication that Congress will craft sweeping new AI rules as European lawmakers are doing, but the concerns Altman and other tech CEOs raised at the White House earlier this month were taken seriously.
The hearing opened with a recording that sounded like Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology, and the law. The recording was in fact an AI-generated voice clone trained to mimic the senator’s speech, reading opening remarks written by ChatGPT.
Blumenthal said the result was impressive, but he wondered what the ramifications would be if such a clone had instead delivered an endorsement of Ukraine surrendering to Russia.
Concerned that future AI systems could disrupt the job market, Blumenthal said AI companies should be required to test their systems and disclose known risks before releasing them. Altman largely agreed, though he offered a more optimistic outlook on the future of work.
Pressed to name his greatest worry about artificial intelligence, Altman mostly avoided specifics, saying only, ominously, that if the technology goes wrong, it can go quite wrong.
He advocated a new regulatory agency that would impose safeguards to keep AI models from going rogue, alluding to fears that powerful AI systems could one day manipulate humans into ceding control.