Intel Unveils Powerful New AI Chips

By Amit Chowdhry • Nov 14, 2019
  • Intel has unveiled its next wave of AI with updates on new products designed for accelerating AI system development and deployment

Intel recently unveiled the next wave of artificial intelligence (AI) with updates on new products designed to accelerate AI system development and deployment from cloud to edge. Specifically, Intel demonstrated its Intel Nervana Neural Network Processors (NNP) for training (NNP-T1000) and inference (NNP-I1000) — which are Intel’s first purpose-built ASICs for complex deep learning, offering incredible scale and efficiency for cloud and data center customers. Intel also revealed its next-generation Intel Movidius Vision Processing Unit (VPU) for edge media, computer vision, and inference applications.

“With this next phase of AI, we’re reaching a breaking point in terms of computational hardware and memory,” said Intel corporate vice president and general manager of the Intel Artificial Intelligence Products Group Naveen Rao. “Purpose-built hardware like Intel Nervana NNPs and Movidius VPUs are necessary to continue the incredible progress in AI. Using more advanced forms of system-level AI will help us move from the conversion of data into information toward the transformation of information into knowledge.”

Intel pointed out that these products further strengthen its portfolio of AI solutions, which is expected to generate more than $3.5 billion in revenue in 2019. Intel’s AI portfolio enables customers to develop and deploy AI models at any scale, from massive clouds to tiny edge devices and everything in between.

The new Intel Nervana NNPs are part of a systems-level AI approach offering a full software stack developed with open components and deep learning framework integration for maximum use. These NNPs are already in production and are being delivered to customers.

Intel’s Nervana NNP-T strikes a balance between computing, communication, and memory, allowing near-linear and energy-efficient scaling from small clusters up to the largest pod supercomputers. And the Intel Nervana NNP-I is power- and budget-efficient, making it ideal for running intense multimodal inference at real-world scale using flexible form factors. Both of these products were developed for the AI processing needs of leading-edge AI customers like Baidu and Facebook.

“We are excited to be working with Intel to deploy faster and more efficient inference compute with the Intel Nervana Neural Network Processor for inference and to extend support for our state-of-the-art deep learning compiler, Glow, to the NNP-I,” added Misha Smelyanskiy, director of AI System Co-Design at Facebook.

Intel’s next-generation Intel Movidius VPU — which is scheduled to be available in the first half of 2020 — incorporates unique and highly efficient architectural advances that are expected to deliver leading performance: more than 10 times the inference performance of the previous generation and up to 6 times the power efficiency of competitor processors.

Intel also announced its new Intel DevCloud for the Edge, which, together with the Intel Distribution of OpenVINO toolkit, addresses a key pain point for developers: it lets them try, prototype, and test AI solutions on a broad range of Intel processors before buying hardware.