Taipei, Monday, Dec 16, 2019, 08:22

News

Intel Demonstrates Nervana NNP for Cloud, Movidius Myriad VPU for Edge

Published: Nov 13, 2019

Intel demonstrated its Intel Nervana Neural Network Processors (NNP) for training (NNP-T1000) and inference (NNP-I1000) — Intel's first purpose-built ASICs for complex deep learning with scale and efficiency for cloud and data center customers. Intel also revealed its next-generation Intel Movidius Myriad Vision Processing Unit (VPU) for edge media, computer vision and inference applications.


These products further strengthen Intel's portfolio of AI solutions, which is expected to generate more than $3.5 billion in revenue in 2019. The broadest and deepest in the industry, Intel's AI portfolio helps customers develop and deploy AI models at any scale, from massive clouds to tiny edge devices and everything in between.

Now in production and being delivered to customers, the new Intel Nervana NNPs are part of a systems-level approach to AI, offering a full software stack built on open components and deep learning framework integration for maximum utilization.

The Intel Nervana NNP-T strikes the right balance between computing, communication and memory, allowing near-linear, energy-efficient scaling from small clusters up to the largest pod supercomputers. The Intel Nervana NNP-I is power- and budget-efficient and ideal for running intense, multimodal inference at real-world scale using flexible form factors. Both products were developed for the AI processing needs of leading-edge AI customers like Baidu and Facebook.

"We are excited to be working with Intel to deploy faster and more efficient inference compute with the Intel Nervana Neural Network Processor for inference and to extend support for our state-of-the-art deep learning compiler, Glow, to the NNP-I," said Misha Smelyanskiy, director, AI System Co-Design at Facebook.

Additionally, Intel's next-generation Intel Movidius VPU, scheduled to be available in the first half of 2020, incorporates unique, highly efficient architectural advances that are expected to deliver leading performance: more than 10 times the inference performance of the previous generation, with up to six times the power efficiency of competing processors.

Intel also announced its new Intel DevCloud for the Edge, which along with the Intel Distribution of OpenVINO toolkit, addresses a key pain point for developers — allowing them to try, prototype and test AI solutions on a broad range of Intel processors before they buy hardware.
