Top Tech Firms Sign White House Pledge To Mitigate AI Risks
Seven leading AI companies, including Google, Amazon, Microsoft and Meta, vowed to put in place new voluntary safeguards designed to minimize abuse and security risks.
July 21, 2023
The White House on Friday announced that seven of the most influential companies building artificial intelligence have agreed to a voluntary pledge to mitigate the risks of the emerging technology, escalating the Biden administration's involvement in the growing debate over AI regulation.
The companies - which include Google, Amazon, Microsoft, Meta and ChatGPT-maker OpenAI - vowed to allow independent security experts to test their systems before they are released to the public and committed to sharing data about the safety of their systems with the government and academics.
The firms also pledged to develop systems to alert the public when an image, video or text is created by artificial intelligence, a method known as "watermarking."
In addition to the tech giants, several newer businesses at the forefront of AI development signed the pledge, including Anthropic and Inflection. (Amazon founder Jeff Bezos owns The Washington Post. Interim CEO Patty Stonesifer sits on Amazon's board).
Several of the signers have already publicly committed to actions similar to those in the White House pledge. Before OpenAI rolled out its GPT-4 system widely, it brought in a team of outside professionals to run adversarial exercises, a process known as "red-teaming." Google has already said in a blog post that it is developing watermarking technology, which companies and policymakers have touted as a way to address concerns that AI could supercharge misinformation.
A senior White House official, who spoke on the condition of anonymity to discuss the pledge, said this would lead to higher standards across the industry.
"This is going to be pushing the envelope on what companies are doing and raising the standards for safety, security and trust of AI," the person said.
The White House signaled that this was just the beginning of its work on artificial intelligence. The administration is also developing an executive order focused on AI, and is supporting efforts in Congress to develop bipartisan legislation regulating the technology.
The White House official shared few specific details about the executive order or a timeline for when it would be released. The person said that the administration was reviewing the role of AI across government agencies and said it was a "high priority" for Biden.
Despite broad concerns about the growing power and influence of the tech sector, Congress has not passed comprehensive regulation of Silicon Valley, and the Biden administration has attempted to use voluntary pledges as a stopgap measure. Nearly two years ago, the Biden administration sought public commitments from major tech companies to improve their cybersecurity practices at a similar White House summit.
Consumer advocates welcomed the pledge, but warned that tech companies have a checkered history of keeping their safety and security commitments.
"History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations," said Jim Steyer, the founder and CEO of the advocacy group Common Sense Media in a statement.
The White House announcement follows President Biden and Vice President Harris's recent flurry of AI meetings with top tech executives, researchers, consumer advocates and civil liberties groups.
The White House official said the administration was coordinating with Congress "quite a bit" on AI.
"These commitments do not change the need for legislative action," the person said. The official said that includes privacy legislation, which the administration says is necessary as AI develops.
Yet a bevy of different proposals to regulate AI is circulating in Congress, and key bipartisan measures are likely months away. Senate Majority Leader Charles E. Schumer (D-N.Y.) has formed a bipartisan group to work on AI legislation, which has spent the summer seeking briefings with top AI experts.
Meanwhile, government agencies are evaluating ways that they can use existing laws to regulate artificial intelligence. The Federal Trade Commission has opened an extensive probe into ChatGPT-maker OpenAI, sending the company a demand for documents about the data security practices of its product and instances in which it has made false statements.
--Cat Zakrzewski, The Washington Post