On Friday, United States President Joe Biden announced that leading AI companies, including OpenAI, Alphabet and Meta Platforms, have voluntarily committed to implementing safety measures to make artificial intelligence technology safer. These commitments represent a hopeful step in the ongoing quest for AI safety, but, as Biden emphasised, “We have a lot more work to do together.”
The President’s announcement was made at a White House event designed to address the mounting concerns about the potential misuse of artificial intelligence for disruptive purposes. “We must be clear-eyed and vigilant about the threats from emerging technologies” to U.S. democracy, Biden stated.
Notably, this collective move, also involving Anthropic, Inflection, Amazon.com (AMZN.O) and Microsoft (MSFT.O), an OpenAI partner, is viewed as a victory for the Biden administration’s efforts to regulate the booming AI technology sector. The companies pledged to rigorously test systems prior to release, share information about reducing risks, and increase investment in cybersecurity.
Industry players welcomed the move. “We welcome the president’s leadership in bringing the tech industry together to hammer out concrete steps that will help make AI safer, more secure, and more beneficial for the public,” Microsoft said in a blog post on Friday.
The meteoric rise of generative AI this year, which uses existing data to create new content such as the human-like prose produced by ChatGPT, has prompted lawmakers worldwide to contemplate how to alleviate potential threats to national security and economic stability that may arise from this emerging technology.
Despite these efforts, the U.S. has yet to match the EU in terms of concrete AI regulation. In June, EU lawmakers approved a draft of rules requiring systems like ChatGPT to disclose AI-generated content and help distinguish deep-fake images from real ones, as well as guarantee safeguards against illegal content.
U.S. Senate Majority Leader Chuck Schumer called for “comprehensive legislation” to ensure safeguards on artificial intelligence in June. Congress is currently contemplating a bill that would require political ads to disclose any AI-created imagery or content.
Biden, having hosted executives from the seven companies at the White House on Friday, disclosed his ongoing work on an executive order and bipartisan legislation on AI technology. He anticipated a future marked by rapid tech transformation, noting, “We’ll see more technology change in the next 10 years, or even in the next few years, than we’ve seen in the last 50 years.”
As part of these commitments, the seven companies pledged to develop a system for “watermarking” all forms of AI-generated content, from text and images to audio and video. This technical watermarking is intended to help users identify when AI has been used, potentially flagging deep-fake imagery or audio that could be misused to falsify events or manipulate the perception of individuals, such as politicians.
The mechanics of how this watermark will be detected when information is shared remain uncertain. However, the companies have committed to prioritising user privacy and to ensuring their technology is free of bias and does not discriminate against vulnerable groups. Other commitments encompass leveraging AI to address scientific challenges such as medical research and climate change mitigation.
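None of the companies’ actual watermarking schemes have been published, so any concrete example is necessarily speculative. As a purely illustrative sketch of the general idea, a provider holding a secret key could hide a keyed tag inside generated text, letting a matching detector flag it later; the key name, tag format and use of zero-width characters below are all assumptions for the sake of the demonstration, not anyone’s real scheme:

```python
import hmac
import hashlib

# Illustrative only: a keyed HMAC tag is hidden in text as zero-width
# characters, so a detector holding the same key can flag the text as
# AI-generated. Real provider schemes are not public and likely differ.
ZERO = "\u200b"  # zero-width space encodes bit 0
ONE = "\u200c"   # zero-width non-joiner encodes bit 1

def embed_watermark(text: str, key: bytes) -> str:
    """Append an invisible, key-derived tag to the text."""
    tag = hmac.new(key, b"ai-generated", hashlib.sha256).digest()[:4]
    bits = "".join(f"{byte:08b}" for byte in tag)
    return text + "".join(ONE if b == "1" else ZERO for b in bits)

def detect_watermark(text: str, key: bytes) -> bool:
    """Check whether the text ends with the expected invisible tag."""
    expected = hmac.new(key, b"ai-generated", hashlib.sha256).digest()[:4]
    bits = "".join(f"{byte:08b}" for byte in expected)
    suffix = "".join(ONE if b == "1" else ZERO for b in bits)
    return text.endswith(suffix)

key = b"shared-secret"  # hypothetical key shared between provider and detector
marked = embed_watermark("A paragraph of model output.", key)
print(detect_watermark(marked, key))                          # True
print(detect_watermark("Ordinary human-written text.", key))  # False
```

A scheme like this also hints at the detection problem the article raises: the invisible suffix survives copy-and-paste but not retyping or reformatting, which is one reason the mechanics of detection after sharing remain an open question.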
This collective move towards safer AI practices points towards a future of technology that is not just intelligent but also responsible.