MLPerf Is Changing the AI Hardware Performance Conversation. Here's How

What’s improving machine learning? Is it competition, or is it something else?

Scott Fulton III, Contributor

August 1, 2019

Nvidia's DGX-2 supercomputer on display at GTC 2018 (Photo: Yevgeniy Sverdlik)

What should “performance” mean in the context of machine learning (ML)? With more supercomputers and cloud computing clusters supporting snap-judgment decisions every minute, artificial intelligence (AI) engineers are finding out it’s just as important to improve the way deep learning algorithms deliver their results as it is to improve the performance of the processors and accelerators that produce them.

Since May 2018, researchers from Baidu, Google, Harvard, Stanford, and UC Berkeley have been developing the MLPerf benchmark. It measures the time consumed in training a machine learning model until its inferences (the estimates or predictions it produces) reach a fixed quality target of 75.9 percent accuracy. ML system builders and architects have been invited to use MLPerf to gauge the accuracy and performance of their systems, and perhaps to do a little bragging about them along the way.
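The training-side metric described above can be sketched in a few lines: keep training until the model clears the quality target, and report the wall-clock time that took. This is a minimal illustration, not the MLPerf harness itself; `train_one_epoch` and `evaluate` are hypothetical stand-ins for a real training loop and validation pass.

```python
import time

# Assumed quality target, matching the 75.9 percent accuracy figure cited above.
TARGET_ACCURACY = 0.759

def time_to_train(train_one_epoch, evaluate, max_epochs=100):
    """Run training epochs until evaluate() meets the target accuracy.

    Returns elapsed wall-clock seconds, or None if the target is never
    reached within the epoch budget.
    """
    start = time.monotonic()
    for _ in range(max_epochs):
        train_one_epoch()                      # one pass over the training data
        if evaluate() >= TARGET_ACCURACY:      # validation-set quality check
            return time.monotonic() - start    # seconds to reach the target
    return None
```

The key design point is that the clock stops at a quality threshold rather than after a fixed number of steps, so faster hardware only wins if it actually reaches the same accuracy sooner.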

However, in June 2019 MLPerf was expanded, incorporating a new testing category into what's suddenly become a suite of performance analysis tools. With the addition of an inference benchmark, MLPerf 0.5 can also clock how long trained models take, post-training, to apply what they have learned and reach conclusions.
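The inference-side measurement is simpler in spirit: time how long a trained model takes to answer each query. A minimal sketch, assuming a hypothetical `predict` function standing in for a deployed model:

```python
import time

def measure_latency(predict, queries):
    """Return per-query latencies in seconds for a trained model's predict()."""
    latencies = []
    for q in queries:
        start = time.monotonic()
        predict(q)                              # one post-training inference
        latencies.append(time.monotonic() - start)
    return latencies
```

Real inference benchmarks report summary statistics over many such measurements, such as throughput and tail latency, rather than raw per-query times.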

Time-to-Solution

Not every person involved in the operation of an ML system will be a data scientist, says Ramesh Radhakrishnan, technology strategist at Dell EMC and a contributor to the MLPerf development group. To anyone without an understanding of what's going on under the hood, the mechanism of ML can be opaque. A simple measurement, such as the total number of reasonably correct predictions, can go a long way toward giving everyone involved in the ML management process a basic competency.
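The "simple measurement" Radhakrishnan describes is essentially an accuracy count: the fraction of a model's predictions that match the known correct answers. A minimal sketch:

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the correct labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)
```

For example, a model that gets three of four test predictions right scores 0.75, a figure anyone on the team can interpret without knowing how the model works internally.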


About the Author

Scott Fulton III

Contributor

Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.
