Since the early days of the digital computer age, the processor has been separated from the memory. Operations on large amounts of data require a similarly large number of data elements to be retrieved from memory. This limitation, known as the von Neumann bottleneck, can overshadow the actual computing time, especially in neural networks, which depend on large vector-matrix multiplications.
Performed at full digital precision, these computations consume a significant amount of energy. Neural networks, however, can still achieve accurate results when the vector-matrix multiplications are carried out at lower precision on analog technology.
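As a rough illustration of that claim (a sketch, not imec's implementation: the matrix sizes and 6-bit precision here are assumptions), a vector-matrix multiplication with uniformly quantized operands stays close to the full-precision result:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))  # weight matrix, full precision
x = rng.standard_normal(256)         # input activation vector

def quantize(a, bits=6):
    """Uniformly quantize array `a` to the given bit-width (illustrative)."""
    levels = 2 ** bits - 1
    scale = np.abs(a).max() / (levels / 2)
    return np.round(a / scale) * scale

y_ref = x @ W                          # full-precision reference
y_lp = quantize(x) @ quantize(W)       # low-precision emulation

# Relative error of the low-precision result stays small
rel_err = np.linalg.norm(y_ref - y_lp) / np.linalg.norm(y_ref)
```

Analog in-memory computing pushes this idea further, letting device physics perform the reduced-precision multiply-accumulate directly inside the memory array.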
To address this challenge, imec and the partners in its industrial affiliation machine-learning program, including GF, developed a new architecture that eliminates the von Neumann bottleneck by performing analog computation in SRAM cells. The resulting Analog Inference Accelerator (AnIA), built on GF’s 22FDX semiconductor platform, delivers exceptional energy efficiency.
Characterization tests demonstrate power efficiency peaking at 2,900 tera operations per second per watt (TOPS/W). Pattern recognition in tiny sensors and low-power edge devices, which is typically powered by machine learning in data centers, can now be performed locally on this power-efficient accelerator.
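To put the quoted peak figure in perspective, this back-of-the-envelope conversion (my arithmetic, not from the announcement) expresses 2,900 TOPS/W as energy per operation:

```python
# 1 W = 1 J/s, so TOPS/W is equivalent to tera-operations per joule.
tops_per_watt = 2_900
ops_per_joule = tops_per_watt * 1e12

# Energy per single operation, converted to femtojoules (1 J = 1e15 fJ).
energy_per_op_fj = 1e15 / ops_per_joule
# roughly a third of a femtojoule per operation
```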
“The successful tape-out of AnIA marks an important step forward toward validation of Analog in-Memory Computing (AiMC),” said Diederik Verkest, program director for machine learning at imec. “The reference implementation not only shows that analog in-memory calculations are possible in practice, but also that they achieve an energy efficiency ten to a hundred times better than digital accelerators. In imec’s machine learning program, we tune existing and emerging memory devices to optimize them for analog in-memory computation. These promising results encourage us to further develop this technology, with the ambition to evolve towards 10,000 TOPS/W.”
“GlobalFoundries collaborated closely with imec to implement the new AnIA chip using our low-power, high-performance 22FDX platform,” said Hiren Majmudar, vice president of product management for computing and wired infrastructure at GF. “This test chip is a critical step forward in demonstrating to the industry how 22FDX can significantly reduce the power consumption of energy-intensive AI and machine learning applications.”
Looking ahead, GF will offer AiMC as a feature that can be implemented on the 22FDX platform for differentiated solutions in the AI market. GF’s 22FDX employs 22nm FD-SOI technology to deliver outstanding performance at extremely low power, with the ability to operate at 0.5-volt ultralow power and with 1 picoamp per micron of ultralow standby leakage. 22FDX with the new AiMC feature is in development at GF’s state-of-the-art 300mm production line at Fab 1 in Dresden, Germany.