Improved Bounds on the Sample Complexity of Learning

01 May 2001


Proceedings of the Eleventh Annual ACM-SIAM Symposium on Discrete Algorithms

We present two improved bounds on the sample complexity of learning. First, we give a new general upper bound on the number of examples required to estimate all of the expectations of a set of random variables uniformly well. The quality of the estimates is measured using a variant of the relative error proposed by Haussler and Pollard, and we show that our bound is within a constant factor of the best possible. This upper bound implies improved bounds on the sample complexity of learning according to Haussler's decision-theoretic model. Second, we prove a lower bound on the sample complexity of learning according to the prediction model; this lower bound is optimal to within a factor of 1 + o(1).
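For context, the relative-error measure from Haussler's decision-theoretic framework is commonly written as the d_nu distance below; the exact variant used in the paper may differ slightly, so this is only an illustrative sketch:

\[
  d_\nu(r, s) \;=\; \frac{|r - s|}{\nu + r + s}, \qquad r, s \ge 0,\ \nu > 0.
\]

Roughly, estimating the expectation \(\mathbf{E}[X]\) of a random variable \(X\) to accuracy \(\alpha\) under this metric means the empirical mean \(\hat{E}[X]\) over the sample satisfies \(d_\nu(\mathbf{E}[X], \hat{E}[X]) \le \alpha\); requiring this simultaneously for every variable in the class is the uniform guarantee the abstract refers to.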