Per-sample Prediction Intervals for Extreme Learning Machines
01 May 2019
Prediction Intervals in supervised Machine Learning bound the region where the true outputs of new samples may fall. They are necessary for separating reliable predictions of a trained model from near-random guesses, for minimizing the rate of False Positives, and for other problem-specific tasks in applied Machine Learning. Stochastic projection functions in many real problems do not satisfy the homoscedasticity assumption on the noise in a dataset, and the input-independent noise variance computed from the Mean Squared Error performs poorly in such cases. The paper proposes a weighted Jackknife estimator of the output weights variance, and a methodology to compute input-specific Prediction Intervals from that variance. The key features of the proposed methodology are robustness to heteroscedastic noise, compatibility with the standard formulation of ELM, short runtime, and feasibility for large datasets.
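The abstract does not spell out the weighted Jackknife formulas, so the following is only a minimal sketch of the general idea on a standard ELM: a plain (unweighted) leave-one-out jackknife over the least-squares output weights, with a per-input prediction variance and a normal-quantile interval. All names, the toy heteroscedastic dataset, and the unweighted variance formula are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heteroscedastic data (assumption): noise variance grows with |x|
n, n_hidden = 200, 30
x = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(x[:, 0]) + rng.normal(0.0, 0.05 + 0.2 * np.abs(x[:, 0]))

# Standard ELM: fixed random hidden layer, least-squares output weights
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(x @ W + b)                       # hidden-layer projection
beta = np.linalg.lstsq(H, y, rcond=None)[0]  # output weights

# New inputs at which to compute per-sample Prediction Intervals
x_new = np.linspace(-3, 3, 5).reshape(-1, 1)
H_new = np.tanh(x_new @ W + b)
center = H_new @ beta

# Plain leave-one-out jackknife (unweighted sketch): refit the output
# weights with each training sample removed, and estimate the variance
# of the prediction at each new input from the spread of the refits.
preds = np.empty((n, len(x_new)))
for i in range(n):
    mask = np.arange(n) != i
    beta_i = np.linalg.lstsq(H[mask], y[mask], rcond=None)[0]
    preds[i] = H_new @ beta_i

var = (n - 1) / n * np.sum((preds - preds.mean(axis=0)) ** 2, axis=0)
z = 1.96                                     # ~95% normal quantile
lower = center - z * np.sqrt(var)
upper = center + z * np.sqrt(var)
```

Note that the naive loop above refits the least-squares problem n times; the short runtime and large-dataset feasibility claimed in the abstract presumably rely on a more efficient formulation of the leave-one-out refits, which is not reproduced here.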