
OpenAI CEO Urges AI Regulation in Congressional Debut

OpenAI's CEO, Sam Altman, made his debut appearance before the US Congress, advocating for regulation of the rapidly evolving field of artificial intelligence (AI). Testifying before the Senate Judiciary Committee, he argued that regulatory measures are needed to harness the potential of AI while limiting its downsides.

Altman argued that government intervention will be crucial in managing the risks posed by increasingly advanced AI models. He proposed licensing and testing requirements for AI development, a set of safety standards, and a distinct test that models must pass before they go into operation. He further suggested that independent auditors be allowed to scrutinize models before launch, and he criticized existing regulatory frameworks such as Section 230 as inadequate for governing these new technologies.

Both Altman and Gary Marcus, a professor emeritus of psychology and neural science at New York University who also testified at the hearing, underscored the need for a new regulatory agency focusing exclusively on AI, given its complexity and rapid pace of development.

The Senate hearing was largely constructive, with many lawmakers welcoming Altman's call for regulation and his acknowledgment of the possible pitfalls of generative AI. Marcus, despite his skepticism toward the technology, praised Altman's sincerity.

The session occurred amid warnings from renowned AI experts and ethicists, including former Google researchers Dr. Timnit Gebru and Meredith Whittaker, about the hurried adoption of and hype surrounding generative AI. Whittaker, now president of the secure messaging app Signal, recently criticized the portrayal of the technology as a magical solution for social good.

Senators Josh Hawley and Richard Blumenthal indicated that the hearing was just the beginning of a longer legislative effort to understand and manage the technology.

While acknowledging the potential of AI to tackle major challenges like climate change and cancer, Altman cautioned that current AI systems are not yet capable of delivering on those promises. He nonetheless expressed confidence that the benefits of the tools deployed so far outweigh the risks, pointing to OpenAI's commitment to thorough testing and safety measures before releasing any new system.

Altman conceded that AI technology will significantly affect the job market, but remained optimistic that new and better jobs will be created. He characterized AI as a tool rather than a creature, one suited to performing tasks rather than replacing entire jobs.

Altman expressed concerns about the effects of large language model services on elections and misinformation, especially with primaries approaching. He assured the committee that OpenAI is taking measures to prevent misuse of its technology, such as the generation of misinformation.

However, Altman did not offer a definitive solution for compensating content creators whose work appears in AI-generated output, saying the company is in talks with artists and other stakeholders to work out an appropriate economic model. He also expressed a willingness to help local news organizations, whose content is often used to train these models.

The potential danger of a few powerful players dominating the industry, and thereby reinforcing existing power dynamics, was raised only briefly and went largely unexplored during the hearing. Whittaker has previously warned about the risk of a handful of companies controlling the creation and distribution of AI technologies in ways that serve their own economic interests.