The United States should require companies to obtain a government license if they want to develop powerful artificial intelligence systems, the head of one of the country’s top artificial intelligence companies said at a Senate committee hearing on Tuesday.
In his first appearance before Congress, Sam Altman, CEO of OpenAI, the company that developed ChatGPT, said that the US "may consider a combination of licensing and testing requirements for the development and release of AI models above a threshold of capabilities".
Altman, along with Christina Montgomery, IBM's chief privacy and trust officer, and Gary Marcus, emeritus professor of psychology and neural science at New York University, testified at a hearing held by a US Senate Judiciary subcommittee to discuss AI oversight.
The rapid advancement of AI systems like ChatGPT has prompted many top technologists and academics to call on industry to halt some development, and to call for government intervention if it doesn’t.
But there is little consensus on what such regulations would look like, and Congress and federal agencies have struggled to figure out their role. On the eve of his testimony, Altman gave a presentation to about 60 lawmakers that impressed many of them, and he stayed nearly two hours afterward answering questions.
Unlike previous hearings with tech leaders like Facebook co-founder Mark Zuckerberg or TikTok CEO Shou Zi Chew, where members of Congress were sometimes openly hostile, the senators were mostly sympathetic to Altman and Montgomery.
Sen. Richard Blumenthal, D-Conn., suggested that artificial intelligence models could be required to reveal the information they were trained on.
"Should we consider independent testing laboratories to provide scorecards, or the equivalent of nutrition labels? Packaging that tells people whether or not the contents can be trusted, what the ingredients are?" Blumenthal said.
Altman responded positively to the idea.
After Sen. Josh Hawley, R-Mo., suggested that people could use AI as a source of trusted information, Altman said again that he believes the government has a role to play in regulating technology.
"I think some regulation would be quite wise on this issue," Altman said. "People need to know if they're talking to an AI, if the content they're looking at could be generated or not."
But Altman also said he believes the public will learn to adapt to an AI-assisted deluge of false information and media.
"When Photoshop came on the scene a long time ago, for a while people were pretty fooled by photoshopped images and then quickly understood that images were photoshopped," he said. "This will be like that, but on steroids."
Marcus said the AI industry is not far from exploiting the vast amounts of personal data companies hold about people to better target their tastes, even if OpenAI and IBM do not do so directly.
"Hypertargeting is definitely coming," Marcus said. "We will definitely see it. The technology is partway to being able to do that, and it will certainly get there."
Some industry figures have called for the government to leave AI regulation entirely to the companies behind the technology. Eric Schmidt, a former Google chief executive, said on "Meet the Press" on Sunday that "there's no way a non-industry person could understand what's possible."
Marcus called for a cabinet-level agency within the United States to oversee AI.
"The amount of risk is great. The amount of information to keep up with is a lot," he said.