Fraunhofer-Gesellschaft
2020
Conference paper
Title

Quantization Considerations of Dense Layers in Convolutional Neural Networks for Resistive Crossbar Implementation

Abstract
The accuracy and power consumption of resistive crossbar circuits used for neuromorphic computing are limited by the process variation of the resistance-switching (memristive) devices and by the power overhead of the mixed-signal circuits, such as analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). Reducing the signal and weight resolution can improve robustness against process variation, relax the requirements on mixed-signal devices, and simplify the implementation of crossbar circuits. This work aims to establish a methodology for achieving low-resolution dense layers in CNNs, covering both network architecture selection and the quantization method. To this end, it studies the impact of the dense layer configuration on the required resolution for its inputs and weights in a small convolutional neural network (CNN). The analysis shows that carefully selecting the network architecture for the dense layer can significantly reduce the required resolution of its input signals and weights. This work reviews criteria for appropriate architecture selection and the quantization methods of binary and ternary neural networks (BNNs and TNNs) to reduce the weight resolution of CNN dense layers. Furthermore, it presents a method to reduce the input resolution of the dense layer down to one bit by analyzing the distribution of the input values. A small CNN for inference with one-bit quantization of input signals and weights can be realized with only 0.68% accuracy degradation on the MNIST dataset.
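As an illustrative sketch only (not the paper's exact procedure), BNN-style one-bit quantization of a dense layer can be expressed by mapping each input value and each weight to ±1 via the sign function, so the matrix product reduces to sums of ±1 terms. The function names and shapes below are assumptions for illustration:

```python
import numpy as np

def binarize(x):
    """One-bit (sign) quantization: map each value to -1.0 or +1.0."""
    return np.where(x >= 0, 1.0, -1.0)

def dense_binary(x, w_real):
    """Dense layer with one-bit inputs and one-bit weights (BNN-style).

    x      : real-valued inputs, shape (batch, in_features)
    w_real : real-valued weights, shape (in_features, out_features);
             kept at full precision and binarized on the forward pass.
    """
    xb = binarize(x)            # inputs quantized to +/-1
    wb = binarize(w_real)       # weights quantized to +/-1
    return xb @ wb              # each output is a sum of +/-1 products

# Toy usage: hypothetical sizes, random data.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w = rng.normal(size=(8, 3))
out = dense_binary(x, w)
```

Because every product in the accumulation is ±1, each output lies in the range [-in_features, +in_features], which is what relaxes the ADC/DAC resolution requirements in a crossbar realization.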
Author(s)
Zhang, L.
Borggreve, D.
Vanselow, F.
Brederlow, R.
Published in
9th International Conference on Modern Circuits and Systems Technologies, MOCAST 2020
Project(s)
TEMPO
Funder
European Commission EC
Bundesministerium für Bildung und Forschung BMBF (Deutschland)
Conference
International Conference on Modern Circuits and Systems Technologies (MOCAST) 2020
DOI
10.1109/MOCAST49295.2020.9200280
File(s)
N-605687.pdf (840.33 KB)
Language
English
Institute
Fraunhofer EMFT
Tags
  • convolutional neural ...
  • Neuromorphic Computin...
  • approximate computing...
  • Neural Network Quanti...
  • Resistive Crossbar
  • Memristive Devices