
Toshiba Develops High-Speed Algorithm and Hardware Architecture for Deep Learning Processor

Published: Nov 06,2018

Toshiba Memory Corporation today announced the development of a high-speed, high-energy-efficiency algorithm and hardware architecture for deep learning processing with minimal degradation of recognition accuracy. The new deep learning processor, implemented on an FPGA, achieves four times the energy efficiency of conventional processors. The advance was announced at the IEEE Asian Solid-State Circuits Conference 2018 (A-SSCC 2018) in Taiwan on November 6.


Deep learning calculations generally require a large number of multiply-accumulate (MAC) operations, which results in long calculation times and high energy consumption. Techniques that reduce the number of bits used to represent parameters (the bit precision) have been proposed to cut the total amount of calculation; some of these algorithms reduce the bit precision down to one or two bits, but such aggressive reduction degrades recognition accuracy.
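The trade-off described above can be illustrated with a small sketch (not from the announcement): uniformly quantizing weights to fewer bits shrinks their representation, but the quantization error grows as the bit precision drops.

```python
import numpy as np

def quantize(w, bits):
    """Uniformly quantize values in [-1, 1] to the given bit precision."""
    levels = 2 ** bits - 1
    return np.round((w + 1) / 2 * levels) / levels * 2 - 1

rng = np.random.default_rng(0)
weights = rng.uniform(-1, 1, 10_000)

for bits in (8, 4, 2, 1):
    err = np.mean(np.abs(weights - quantize(weights, bits)))
    print(f"{bits}-bit quantization, mean abs error: {err:.4f}")
```

The printed error roughly quadruples each time two bits are removed, which is why pushing all parameters to one or two bits, as in the algorithms mentioned above, hurts recognition accuracy.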

Toshiba Memory developed a new algorithm that reduces MAC operations by optimizing the bit precision of the MAC operations for individual filters in each layer of a neural network. With the new algorithm, the number of MAC operations can be reduced with little degradation of recognition accuracy.
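The announcement does not describe how the per-filter precision is chosen, but the idea can be sketched with a hypothetical criterion: give each filter the smallest bit width whose quantization error stays within an error budget, so only sensitive filters keep high precision.

```python
import numpy as np

def quantize(w, bits):
    """Uniformly quantize values in [-1, 1] to the given bit precision."""
    levels = 2 ** bits - 1
    return np.round((w + 1) / 2 * levels) / levels * 2 - 1

def pick_bits_per_filter(filters, err_budget, candidates=(1, 2, 4, 8)):
    """Hypothetical selection rule: for each filter, choose the smallest
    candidate bit width whose mean quantization error fits the budget."""
    chosen = []
    for f in filters:
        for bits in candidates:
            if np.mean(np.abs(f - quantize(f, bits))) <= err_budget:
                chosen.append(bits)
                break
        else:  # no candidate fits; fall back to the widest precision
            chosen.append(max(candidates))
    return chosen

rng = np.random.default_rng(1)
layer = [rng.uniform(-1, 1, 64) for _ in range(4)]  # toy "filters"
print(pick_bits_per_filter(layer, err_budget=0.05))
```

A looser error budget lets more filters drop to narrow bit widths, trading accuracy for fewer effective bit operations; Toshiba Memory's actual optimization criterion is not disclosed in the article.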

Furthermore, Toshiba Memory developed a new hardware architecture, called the bit-parallel method, which is suited to MAC operations of differing bit precisions. The method decomposes each operand, whatever its bit precision, into individual bits and executes the resulting 1-bit operations on numerous MAC units in parallel. This significantly improves the utilization efficiency of the MAC units in the processor compared with conventional MAC architectures, which execute the operations serially.
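The decomposition itself can be verified in software (a sketch with unsigned integer operands, not Toshiba Memory's hardware design): a multi-bit multiply equals the sum of 1-bit AND partial products, each shifted by its two bit positions, and those 1-bit products are independent, so hardware could compute them on many 1-bit units in parallel.

```python
def bit_parallel_mac(xs, ws, x_bits=4, w_bits=4):
    """Compute sum(x * w) by decomposing each unsigned operand into
    single bits: every partial product is a 1-bit AND, weighted by a
    shift of (i + j). In hardware these 1-bit products are independent
    and could execute in parallel; here they run in a loop."""
    acc = 0
    for x, w in zip(xs, ws):
        for i in range(x_bits):
            for j in range(w_bits):
                acc += (((x >> i) & 1) & ((w >> j) & 1)) << (i + j)
    return acc

xs, ws = [3, 5, 7], [2, 4, 6]
assert bit_parallel_mac(xs, ws) == sum(x * w for x, w in zip(xs, ws))
```

Because the inner loops simply skip bit positions beyond an operand's width, the same 1-bit units can serve operands of any mix of bit precisions, which is the utilization advantage the bit-parallel method targets.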

Toshiba Memory implemented ResNet50, a deep neural network, on an FPGA using the per-filter bit precisions and the bit-parallel MAC architecture. For image recognition on the ImageNet dataset, the technique reduced both the operation time and the energy consumed in recognizing image data to 25% of those of the conventional method, with little degradation of recognition accuracy.

