Microsoft, Amazon, IBM Pledge to Publish AI Safety Measures for Models
Several leading technology companies pledge to publicly disclose the risks posed by their AI models at the AI Safety Summit in Seoul.
May 29, 2024
This article originally appeared in AI Business.
Leading technology companies including Microsoft, Amazon, and IBM have pledged to publish the safety measures they’re taking when developing foundation models.
During the AI Safety Summit in Seoul, Korea, 16 companies agreed to publish safety frameworks detailing how they measure AI risks as they build AI models.
The companies have all agreed not to develop or deploy an AI model if the risks it poses cannot be controlled or mitigated.
The pledge applies to foundation, or “frontier,” models – AI models that can be applied to a broad range of applications, usually multimodal systems capable of handling images, text and other inputs.
Meta, Samsung, Claude developer Anthropic and Elon Musk’s startup xAI are among the signatories.
ChatGPT maker OpenAI, Dubai-based Technology Innovation Institute and Korean internet provider Naver also signed onto the Frontier AI Safety Commitments.
Zhipu AI, the startup building China’s answer to ChatGPT, was also among the companies that signed the Commitments, which were developed by the UK and Korean governments.
“We are confident that the Frontier AI Safety Commitments will establish itself as a best practice in the global AI industry ecosystem and we hope that companies will continue dialogues with governments, academia and civil society and build cooperative networks with the AI Safety Institute in the future,” said Lee Jong Ho, Korea’s minister of science and information and communication technology.
Each company that has agreed to the commitments will publicly outline the level of risk its foundation models pose and what it plans to do to ensure they’re safe for deployment.
The signatories will have to publish their findings ahead of the next AI safety summit, taking place in France in early 2025.
“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” said U.K. Prime Minister Rishi Sunak.
The commitments are designed to build on the Bletchley Declaration, signed at the inaugural AI Safety Summit last November, which classifies and categorizes AI risks.
The commitment from tech companies is a welcome one, according to Beatriz Sanz Saiz, EY’s global consulting data and AI leader.
“Providing transparency and accountability is essential in the development and implementation of trustworthy AI,” Saiz said. “While AI has vast potential for businesses and individuals alike, this potential can only be harnessed through a conscientious and ethical approach to its development.”
“Companies that use AI should prioritize ethical considerations and responsible data practices in order to build customer trust,” said Sachin Agrawal, Zoho UK’s managing director. “Adopting the right AI procedures could mean going further than current privacy regulations and considering what is most ethical to balance the benefits of AI without compromising customer data and to ensure any practices are fully transparent.”