Perspectives from the informational complexity of learning
01 January 2000
We discuss two seemingly disparate problems of learning from examples within the framework of statistical learning theory. The first involves learning real-valued functions with neural networks; its analysis has two interesting aspects: (1) it shows how the generalization ability of a learner is bounded both by finite data and by limited representational capacity, and (2) it shifts attention away from asymptotics toward learning with finite resources. The perspective this yields is then brought to bear on the second problem, learning natural language grammars, to articulate some issues that computational linguistics needs to address.
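The trade-off in point (1) is commonly formalized in statistical learning theory as a decomposition of the generalization error into an approximation term (reflecting the limited capacity of the hypothesis class) and an estimation term (reflecting the finite sample). A sketch of this standard decomposition, with notation assumed rather than taken from the abstract ($f^{*}$ the target function, $\mathcal{H}$ the hypothesis class, $\hat{f}_{n}$ the hypothesis estimated from $n$ examples):

```latex
\[
\underbrace{\bigl\| f^{*} - \hat{f}_{n} \bigr\|}_{\text{generalization error}}
\;\le\;
\underbrace{\inf_{f \in \mathcal{H}} \bigl\| f^{*} - f \bigr\|}_{\substack{\text{approximation error}\\ \text{(limited representational capacity)}}}
\;+\;
\underbrace{\Bigl\| \operatorname*{arg\,min}_{f \in \mathcal{H}} \| f^{*} - f \| \;-\; \hat{f}_{n} \Bigr\|}_{\substack{\text{estimation error}\\ \text{(finite data)}}} .
\]
```

The first term shrinks only by enlarging $\mathcal{H}$ (e.g., more hidden units), while the second grows with the capacity of $\mathcal{H}$ for fixed $n$; balancing the two is what makes finite-resource learning, rather than asymptotics, the natural object of study.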