1992
Conference Paper
Title
Shape, position and size invariant visual pattern recognition based on principles of neocognitron and perceptron
Abstract
A supervised learning feedforward neural net, which combines the advantages of the Neocognitron and the Perceptron, is introduced. The net topology is constrained to local connections between layers. A weight-sharing technique similar to that of Fukushima's Neocognitron network model is used. Thus objects deformed in shape, shifted in position or varying in size can be recognized without any preprocessing or normalisation. In contrast to the Neocognitron, layers containing simple cells (S-cells), complex cells (C-cells) and inhibitory cells (V-cells) are substituted by layers built up from McCulloch-Pitts neurons with a sigmoidal nonlinearity. Additionally, in order to train each layer independently, the supervised learning algorithm of the Neocognitron is replaced by the Least-Mean-Square learning rule commonly used for Perceptrons. In a first application, the network has been successfully trained to recognize handwritten numerals of different size and position within a 24 by 24 pixel image.
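The two ingredients the abstract names — shared-weight local connections (which make a pooled feature response independent of pattern position) and sigmoidal McCulloch-Pitts units trained with the Least-Mean-Square (delta) rule — can be illustrated with a minimal sketch. This is not the paper's code: the image sizes, the toy OR task, and all function names are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_valid(img, kernel):
    """Shared-weight local connections: the same kernel is applied at
    every position of the image (the weight-sharing idea the paper
    borrows from the Neocognitron)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def embed(pattern, pos, size=12):
    """Place a small pattern at a given position in an empty image."""
    img = np.zeros((size, size))
    i, j = pos
    img[i:i+pattern.shape[0], j:j+pattern.shape[1]] = pattern
    return img

# --- shift invariance of a pooled shared-weight feature ---
kernel = rng.normal(size=(3, 3))
pattern = np.eye(3)
# same pattern at two interior positions: the max-pooled response of the
# shared-weight feature map is identical, i.e. position-invariant
r1 = sigmoid(conv2d_valid(embed(pattern, (2, 2)), kernel)).max()
r2 = sigmoid(conv2d_valid(embed(pattern, (6, 5)), kernel)).max()
print(np.isclose(r1, r2))  # True

# --- LMS (delta rule) training of one sigmoidal McCulloch-Pitts unit ---
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 1.0, 1.0, 1.0])       # toy target: logical OR
w = rng.normal(scale=0.1, size=2)
b = 0.0
lr = 1.0
for _ in range(2000):
    for x, target in zip(X, t):
        y = sigmoid(x @ w + b)
        delta = (target - y) * y * (1 - y)  # LMS error times sigmoid slope
        w += lr * delta * x
        b += lr * delta

preds = (sigmoid(X @ w + b) > 0.5).astype(float)
print(preds.tolist())  # [0.0, 1.0, 1.0, 1.0]
```

In the full model described above, such shared-weight sigmoidal layers are stacked, and each layer is trained independently with the same LMS rule instead of the Neocognitron's original learning procedure.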