Learning and Generalization in Layered Neural Networks: The Contiguity Problem

01 January 1988


Learning in layered neural networks is formulated as an iterative reduction of the intrinsic entropy of the chosen network architecture. The residual entropy of the trained network determines its generalization ability. Numerical simulations for the contiguity problem in networks with constrained architectures illustrate these concepts: the intrinsic entropy is reduced by restricting the receptive fields of the intermediate units, and the generalization ability of the trained networks improves systematically.
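The following is a minimal sketch, not the authors' code, of the kind of constrained architecture the abstract describes: a layered network for the contiguity task whose intermediate (hidden) units are restricted to local receptive fields via a weight mask. The input size, field width, labeling rule (classify by the number of contiguous blocks of 1s), and training details are all illustrative assumptions.

```python
# Sketch of a locally connected layered network for the contiguity problem.
# All sizes, the labeling rule, and the training schedule are assumptions
# made for illustration; they are not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

N_IN = 10                    # number of input bits (assumed)
FIELD = 3                    # receptive-field width of each hidden unit (assumed)
N_HID = N_IN - FIELD + 1     # one hidden unit per contiguous input window

def contiguity_label(x):
    """Count blocks of contiguous 1s; label 1 if there are >= 2 blocks (assumed rule)."""
    blocks = np.sum((x[1:] == 1) & (x[:-1] == 0)) + (x[0] == 1)
    return 1.0 if blocks >= 2 else 0.0

# Mask that zeroes every input->hidden weight outside a unit's local window,
# implementing the restricted receptive fields mentioned in the abstract.
mask = np.zeros((N_HID, N_IN))
for h in range(N_HID):
    mask[h, h:h + FIELD] = 1.0

W1 = rng.normal(scale=0.5, size=(N_HID, N_IN)) * mask   # input -> hidden
b1 = np.zeros(N_HID)
W2 = rng.normal(scale=0.5, size=N_HID)                   # hidden -> output
b2 = 0.0

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(W2 @ h + b2)
    return h, y

# Plain stochastic gradient descent on cross-entropy loss (illustrative choice).
LR = 0.5
for step in range(20000):
    x = rng.integers(0, 2, N_IN).astype(float)
    t = contiguity_label(x)
    h, y = forward(x)
    d_out = y - t                        # gradient at the output pre-activation
    d_hid = d_out * W2 * h * (1 - h)     # backpropagated hidden-layer gradient
    W2 -= LR * d_out * h
    b2 -= LR * d_out
    W1 -= LR * np.outer(d_hid, x) * mask  # mask keeps the receptive-field constraint
    b1 -= LR * d_hid

# Rough generalization check on fresh random inputs.
test = rng.integers(0, 2, (1000, N_IN)).astype(float)
acc = np.mean([(forward(x)[1] > 0.5) == contiguity_label(x) for x in test])
print(f"accuracy on held-out inputs: {acc:.2f}")
```

The mask plays the role of the architectural constraint: by cutting the number of free parameters, it reduces the space of functions the network can represent, which is the abstract's route to lower intrinsic entropy and better generalization.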