Intel Steps Up Its Challenge to Nvidia’s AI Chip Dominance, with Facebook’s Help
Plans to release processors for AI training and inference; the inference market, though still emerging, is expected to become bigger than the one for training.
Intel is putting the finishing touches on a new family of artificial intelligence processors, including one developed with Facebook’s assistance. Analysts say the collaboration will help Intel make further inroads into AI and better compete against Nvidia and other rivals in the increasingly crowded AI chip market.
Last week at the CES conference in Las Vegas, Intel executives announced that the company’s Nervana Neural Network Processor for Inference (NNP-I) will ship in the second half of 2019 and reiterated that Facebook has been sharing its technical insights with Intel engineers to help develop the chip.
“We expect NNP-I… to deliver industry leading TOPS-per-watt or performance-per-watt on real production inference workloads,” Navin Shenoy, executive VP and general manager of Intel’s Data Center Group, said onstage at CES. “It will be important for many new inference applications like image recognition, image classification; and that will be important in new domains like social media.”
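For readers unfamiliar with the metric Shenoy cites, TOPS-per-watt simply divides a chip’s throughput in tera-operations per second by its power draw. A quick illustration of the arithmetic (the figures below are hypothetical, not Intel specs):

```python
# Hypothetical figures, purely to illustrate the TOPS-per-watt metric;
# Intel has not published NNP-I numbers in this context.
peak_tops = 50.0          # throughput: tera-operations per second
board_power_watts = 10.0  # power draw under load

tops_per_watt = peak_tops / board_power_watts
print(f"{tops_per_watt:.1f} TOPS/W")  # prints 5.0 TOPS/W
```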
The company also announced that it expects to release a second AI chip, the Neural Network Processor for Training code-named “Spring Crest,” later this year.
Intel’s two new processors address the two general categories of machine learning workloads: training and inference. Deep learning models must first be trained on large datasets; inference then puts the trained model to work, drawing conclusions or making predictions from new data.
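To make the distinction concrete, here is a minimal, illustrative NumPy sketch (a toy linear model, not Intel’s software stack): training fits the parameters with gradient descent, and inference then applies the frozen parameters to new data.

```python
import numpy as np

# --- Training: fit the model's parameters on a dataset ---
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                      # training inputs
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=1000)   # noisy targets

w = np.zeros(3)                                     # model parameters
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)           # mean-squared-error gradient
    w -= 0.1 * grad                                 # gradient descent step

# --- Inference: apply the frozen model to unseen data ---
x_new = np.array([[1.0, 2.0, -1.0]])
print(x_new @ w)                                    # prediction; no gradients, no updates
```

In production, the training loop runs once or periodically on large hardware, while the inference step may run millions of times a day, which is why inference silicon is tuned for throughput and power efficiency rather than gradient computation.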
Bigger Market Expected for Inference Chips than for Training Chips
Analysts say the new Nervana AI processors position Intel to go after a bigger share of the market for accelerator chips, or co-processors, used for deep learning, AI, machine learning, and data analytics workloads. According to IDC, the market for these accelerators, such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), many-core processors, and application-specific integrated circuits (ASICs), is expected to grow from $15 billion in 2019 to $25.5 billion in 2022.
Nvidia’s GPUs have dominated the training market, so it’s important for Intel to have a product in the category to try to capture market share, said Peter Rutten, research director with IDC’s Infrastructure Systems, Platforms and Technologies Group. But the market for inference chips is still emerging, so it’s an opportune time for Intel to release NNP-I, he said.
The inference options available today include Nvidia GPUs and FPGA offerings from Xilinx and Intel, in addition to numerous parts developed by startups. Cloud providers have also been designing their own AI chips, including for inference – Amazon Web Services announced its custom Inferentia chip in November. Google is already on the third generation of its custom Tensor Processing Unit chips, which handle both training and inference.
“Inferencing will be a bigger market than training because there are only so many models you can train before you have a library of models,” Rutten said. “Intel really wants to own the inferencing market, which is why they are launching this chip.”
In 2016, Intel purchased Nervana Systems, a small AI software company that had plans to build neural network processors. According to Intel, its forthcoming Nervana Neural Network Processors are purpose-built for deep learning and feature a new memory architecture; high-speed on- and off-chip interconnects that enable massive bi-directional data transfer; and a new numerical format called Flexpoint, which Intel says results in a huge increase in parallelism.
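Intel’s published descriptions of Flexpoint characterize it as a block format in which every element of a tensor shares a single exponent, letting the hardware run cheap integer arithmetic on the mantissas. The NumPy sketch below illustrates just that shared-exponent idea; the helper functions and the exponent-selection rule are simplified assumptions, not Intel’s implementation.

```python
import numpy as np

def flexpoint_encode(x, mantissa_bits=16):
    """Quantize a tensor to integer mantissas that share one exponent.

    Illustrative only: real Flexpoint hardware manages the shared
    exponent predictively across training steps, not per call.
    """
    max_mag = float(np.max(np.abs(x)))
    if max_mag == 0.0:
        return np.zeros(x.shape, dtype=np.int32), 0
    # Pick a shared exponent so the largest value fits the signed
    # mantissa range (rounding edge cases ignored in this sketch).
    exp = int(np.floor(np.log2(max_mag))) - (mantissa_bits - 2)
    mantissas = np.round(x / 2.0 ** exp).astype(np.int32)
    return mantissas, exp

def flexpoint_decode(mantissas, exp):
    """Reconstruct floating-point values from mantissas + shared exponent."""
    return mantissas.astype(np.float64) * 2.0 ** exp

x = np.array([0.75, -2.5, 0.01, 3.2])
m, e = flexpoint_encode(x)
print("shared exponent:", e)
print("max round-trip error:", np.max(np.abs(x - flexpoint_decode(m, e))))
```

Because every element shares the exponent, the multiply-accumulate units can operate on plain integers, which, per Intel’s description, is where the density and parallelism gains come from.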
Rutten said Intel has two advantages in the AI processor market: close relationships with server vendors and the opportunity to design both of its AI chips from the ground up. In contrast, Nvidia has had to continually rework its GPUs, whose architecture was originally designed for graphics processing, he said.
That said, Nvidia continues to innovate aggressively and has so far held onto its position as market share leader in accelerated computing.
Why the Facebook Partnership Is a Big Deal
Intel first said in October 2017 that Facebook was sharing its technical expertise to help develop Nervana.
Intel and Facebook reiterated that partnership at a CES press conference. “Facebook is pleased to be partnering with Intel on a new generation of power-optimized highly tuned AI inference chip that will be a leap in inference workload acceleration,” a Facebook statement read.
The partnership is important for Intel as it prepares to bring NNP-I to market, because it enables the chipmaker to optimize the processor for a hyperscale data center environment, Rutten said.
“You get the knowledge and expertise of the hyperscaler,” he said. “When you put your processors in their servers and start to optimize that and start learning from it and applying models to it, that’s a huge deal because the scale Facebook offers you is unmatched. It all adds enormously to the credibility of what you want to ultimately put in the market.”
While Facebook has not publicly stated that it is purchasing Intel’s NNP-I processor, having Facebook as a major customer would be a huge win for Intel.
“Ultimately, if Facebook decides to buy your processor, they will buy it at scale and that is instantly huge,” Rutten said.