Meta, IBM Working With Startup to Test AI Model Safety

HydroX AI is working with big-name tech firms and the AI Alliance to evaluate generative AI models in industries such as healthcare and finance.

Ben Wodecki

July 16, 2024

2 Min Read
Image: Alamy

This article originally appeared in AI Business.

HydroX AI, a startup developing tools to secure AI models and services, is teaming with Meta and IBM to evaluate generative AI models deployed in high-risk industries.

Founded in 2023, the San Jose, California-based company built an evaluation platform that lets businesses test their language models to determine their safety and security.

HydroX will work with Meta and IBM to evaluate language models across sectors including health care, financial services and legal.

The trio will work to create benchmark tests and toolsets to help business developers ensure their language models are safe and compliant before being used in industry-specific deployments.

“Each domain presents unique challenges and requirements, including the need for precision/safety, adherence to strict regulatory standards, and ethical considerations,” said Victor Bian, HydroX’s chief of staff.

“Evaluating large language models within these contexts ensures they are safe, effective, and ethical for domain-specific applications, ultimately fostering trust and facilitating broader adoption in industries where errors can have significant consequences.”

Benchmarks and related tools are designed to evaluate the performance of a language model, providing model owners with an assessment of their model's outputs on specific tasks.

HydroX claims there are not enough tests and tools available to let model owners ensure their systems are safe for use in high-risk industries.

The startup is now working with two major tech companies that have experience working on AI safety.

AI Focus

Meta previously built Purple Llama, a suite of tools designed to ensure its Llama line of AI models is deployed securely. IBM, meanwhile, was among the tech companies that pledged at the recent AI Safety Summit in Korea to publish the safety measures they have taken when developing foundation models.

Meta and IBM were founding members of the AI Alliance, an industry group looking to foster responsible open AI research. HydroX has also joined and will contribute its evaluation resources while working alongside other member organizations.

“Through our work and conversations with the rest of the industry, we recognize that addressing AI safety and security concerns is a complex challenge while collaboration is key in unlocking the true power of this transformative technology,” Bian said. “It is a proud moment for all of us at HydroX AI and we are hyper-energized for what is to come.”

Other members of the AI Alliance include AMD, Intel, Hugging Face and universities including Cornell and Yale.

About the Author

Ben Wodecki

AI Business

Ben Wodecki is assistant editor at AI Business, a publication dedicated to the latest trends in artificial intelligence.
