
Efficient Algorithms for Mining Outliers from Large Data Sets

01 June 2000

An outlier in a set of data is an observation or point that is considerably dissimilar to or inconsistent with the remainder of the data. In this paper, we propose a novel formulation for distance-based outliers that is based on the distance of a point from its kth nearest neighbor. We rank each point on the basis of its distance to its kth nearest neighbor and declare the top n points in this ranking to be outliers. In addition to developing relatively straightforward solutions to finding such outliers based on the classical nested-loop join and index join algorithms, we develop a highly efficient partition-based algorithm for mining outliers. This algorithm first partitions the input data set into disjoint subsets, and then prunes entire partitions as soon as it can be determined that they cannot contain outliers. Since people are usually interested in only a small number of outliers, our algorithm is able to determine very quickly that a significant number of the input points cannot be outliers, which results in substantial savings in computation. We present the results of an extensive experimental study on real-life and synthetic data sets. The results from a real-life NBA database highlight several expected and unexpected aspects of the database. The results from a study on synthetic data sets demonstrate that our algorithm scales well with respect to both data set size and data set dimensionality.
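
To make the outlier definition concrete, the following is a minimal, illustrative sketch of the ranking described above: compute each point's distance to its kth nearest neighbor and report the n points with the largest such distances. This is a brute-force O(N^2) illustration of the definition only, not the paper's nested-loop, index-based, or partition-based algorithms; the function name, parameters, and the NumPy-based implementation are assumptions for the example.

```python
import numpy as np

def top_n_outliers(points, k, n):
    """Rank points by distance to their kth nearest neighbor and return
    the indices of the top-n points (the declared outliers) plus scores.

    Brute-force illustration of the definition; the paper's algorithms
    avoid computing all pairwise distances.
    """
    points = np.asarray(points, dtype=float)
    # Pairwise Euclidean distances between all points.
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Exclude each point's zero distance to itself.
    np.fill_diagonal(dists, np.inf)
    # Distance from each point to its kth nearest neighbor.
    kth_nn_dist = np.sort(dists, axis=1)[:, k - 1]
    # The n points with the largest kth-NN distances are the outliers.
    return np.argsort(-kth_nn_dist)[:n], kth_nn_dist

# Example usage: a tight cluster plus one far-away point.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)), [[10.0, 10.0]]])
    outlier_idx, scores = top_n_outliers(data, k=5, n=1)
    print(outlier_idx)  # expected to contain index 100, the injected point
```

In the partition-based approach described in the abstract, this full ranking is never materialized for most points: partitions whose points provably cannot reach the top n are pruned before their kth-nearest-neighbor distances are computed exactly.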