A Novel DRAM-Based Process-in-Memory Architecture and its Implementation for CNNs

Authors: Sudarshan, Chirag; Soliman, Taha; Parra, Cecilia De la; Weis, Christian; Ecco, Leonardo; Jung, Matthias; Wehn, Norbert; Guntoro, Andre


Association for Computing Machinery (ACM):
26th Asia and South Pacific Design Automation Conference, ASPDAC 2021. Proceedings. Tokyo, Japan, January 2021
New York: ACM, 2021
ISBN: 978-1-4503-7999-1
Asia and South Pacific Design Automation Conference (ASPDAC) <26, 2021, Online>
European Commission EC
H2020-ECSEL; 826655; TEMPO
Technology and hardware for neuromorphic computing
European Commission EC
H2020-FET Proactive; 732631; OPRECOMP
Open transPREcision COMPuting
Fraunhofer IESE
Convolutional Neural Networks; DRAM; Processing-in-Memory

Processing-in-Memory (PIM) is an emerging approach to bridge the memory-computation gap. One of the key challenges for PIM architectures targeting neural network inference is the deployment of traditional, area-intensive arithmetic multipliers in memory technology, especially for DRAM-based PIM architectures. Hence, existing DRAM PIM architectures are either confined to binary networks or exploit the analog properties of the sub-array bitlines to perform bulk bit-wise logic operations. The former reduces prediction accuracy, i.e. quality-of-results, while the latter increases overall latency and power consumption. In this paper, we present a novel DRAM-based PIM architecture and implementation for multi-bit-precision CNN inference. The proposed implementation relies on shifter-based approximate multiplications specially designed to fit into commodity DRAM architectures and their technology. The main goal of this work is to propose an architecture that is fully compatible with commodity DRAM architecture while maintaining a similar thermal design power (i.e. < 1 W). Our evaluation shows that the proposed DRAM-based PIM has a small area overhead of 6.6% when compared with an 8 Gb commodity DRAM. Moreover, the architecture delivers a peak performance of 8.192 TOPS per memory channel while maintaining a very high energy efficiency. Finally, our evaluation also shows that the use of approximate multipliers results in a negligible drop in prediction accuracy (i.e. < 2%) in comparison with conventional CNN inference that relies on traditional arithmetic multipliers.
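To give a rough intuition for shifter-based approximate multiplication, the sketch below shows one generic variant: one operand is rounded to the nearest power of two so the multiplication collapses into a single bit-shift, which is far cheaper to realize in DRAM logic than a full array multiplier. This is an illustrative simplification, not the paper's actual circuit; the function name `approx_mul` and the rounding scheme are assumptions for demonstration only.

```python
import math

def approx_mul(x: int, w: int) -> int:
    """Approximate x * w by rounding |w| to the nearest power of two,
    turning the multiply into a left shift (illustrative sketch)."""
    if w == 0 or x == 0:
        return 0
    sign = -1 if (w < 0) != (x < 0) else 1
    # Nearest power-of-two exponent for the weight magnitude.
    shift = max(0, round(math.log2(abs(w))))
    return sign * (abs(x) << shift)

# Exact when the weight is already a power of two:
#   approx_mul(10, 4)  -> 40
# Approximate otherwise (3 rounds up to 4):
#   approx_mul(10, 3)  -> 40 instead of the exact 30
```

In a real design the rounding would be applied offline to the CNN weights, which is one reason the reported accuracy drop stays small (< 2%).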