2020
Conference Paper
Title

Quantization Considerations of Dense Layers in Convolutional Neural Networks for Resistive Crossbar Implementation

Abstract
The accuracy and power consumption of resistive crossbar circuits used for neuromorphic computing are limited by the process variation of the resistance-switching (memristive) devices and by the power overhead of the mixed-signal circuits, such as analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). Reducing the signal and weight resolution can improve robustness against process variation, relax the requirements for the mixed-signal devices, and simplify the implementation of crossbar circuits. This work aims to establish a methodology for achieving low-resolution dense layers in CNNs in terms of network architecture selection and quantization method. To this end, it studies the impact of the dense-layer configuration on the required resolution for its inputs and weights in a small convolutional neural network (CNN). The analysis shows that careful selection of the network architecture for the dense layer can significantly reduce the required resolution of its input signals and weights. The work reviews criteria for appropriate architecture selection and the quantization methods of binary and ternary neural networks (BNN and TNN) to reduce the weight resolution of CNN dense layers. Furthermore, it presents a method to reduce the input resolution of the dense layer down to one bit by analyzing the distribution of the input values. A small CNN for inference with one-bit quantization of input signals and weights can be realized with only 0.68% accuracy degradation on the MNIST dataset.
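
The quantization steps outlined in the abstract can be illustrated with a short sketch. The snippet below is a minimal NumPy illustration, not the paper's implementation: it binarizes dense-layer weights (BNN-style), offers a ternary alternative (TNN-style), and binarizes the layer inputs with a threshold derived from the observed input distribution. The scaling factors, the delta ratio, and the mean-based threshold are assumptions made for illustration only.

```python
# Sketch of BNN/TNN-style weight quantization and one-bit input quantization
# for a dense layer. All scaling/threshold choices here are assumptions.
import numpy as np

def binarize_weights(w):
    """Map full-precision weights to {-1, +1}, scaled by the mean magnitude."""
    alpha = np.mean(np.abs(w))                      # per-layer scale (assumption)
    return alpha * np.sign(np.where(w == 0, 1.0, w))

def ternarize_weights(w, delta_ratio=0.7):
    """Map weights to {-1, 0, +1} using a magnitude threshold (assumption)."""
    delta = delta_ratio * np.mean(np.abs(w))
    t = np.zeros_like(w)
    t[w > delta] = 1.0
    t[w < -delta] = -1.0
    alpha = np.mean(np.abs(w[t != 0])) if np.any(t != 0) else 1.0
    return alpha * t

def binarize_inputs(x, threshold):
    """One-bit input quantization against a distribution-derived threshold."""
    return np.where(x >= threshold, 1.0, 0.0)

# Toy usage: y = x_q @ w_q with quantized inputs and weights.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(128, 10))            # full-precision dense weights
x = rng.normal(loc=0.5, scale=0.2, size=(1, 128))    # activations from the conv stack
threshold = x.mean()                                 # distribution-based threshold (assumption)
y = binarize_inputs(x, threshold) @ binarize_weights(w)
print(y.shape)
```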
Author(s)
Lei, Zhang
Borggreve, D.
Vanselow, F.
Brederlow, R.
Mainwork
9th International Conference on Modern Circuits and Systems Technologies, MOCAST 2020  
Project(s)
TEMPO
Funder
European Commission EC  
Bundesministerium für Bildung und Forschung BMBF (Germany)
Conference
International Conference on Modern Circuits and Systems Technologies (MOCAST) 2020  
Open Access
File(s)
840.33 KB
Rights
Use according to copyright law
DOI
10.24406/publica-r-409027
10.1109/MOCAST49295.2020.9200280
Language
English
Institute(s)
Fraunhofer-Einrichtung für Mikrosysteme und Festkörper-Technologien EMFT
Keyword(s)
  • convolutional neural network
  • Neuromorphic Computing Hardware
  • approximate computing
  • Neural Network Quantization
  • Resistive Crossbar
  • Memristive Devices