Optimum edge recognition using matched filters for machine vision.

01 January 1987


The problem of optimum recognition of edges in a visual scene is examined from a matched filter perspective. Optimum edge filters are derived, and it is shown that the common belief that good detection and good localization are in opposition is incorrect; rather, detection and localization are coupled. More importantly, the performance of the linear matched filter for edge detection cannot be exceeded by any form of non-linear filter. The detection filter is matched to the form of the gradient of the edge normal. Optimum localization consists of simply differentiating the output of the detection filter and looking for zeros. In one dimension, the Marr-Hildreth edge operator approximates the optimum. In two dimensions, however, this is not so, since optimum 2-D operators are shown to be radially asymmetric. Finally, a noise analysis of machine vision cameras is included, which indicates that the form of the matched filters is a function of the scene illumination.
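The detect-then-localize scheme described above can be sketched in 1-D: convolve the scan line with an edge-detection filter, then differentiate the detector output and look for zeros, keeping the zero with the strongest detection response. This is a minimal NumPy illustration, not the paper's derivation; the first-derivative-of-Gaussian filter, the σ value, and the synthetic noisy step edge are all assumptions chosen for the example.

```python
import numpy as np

def deriv_gaussian(sigma, radius):
    # First derivative of a Gaussian: a common smooth stand-in for a
    # matched step-edge detection filter in additive white noise.
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    return -x * g / sigma**2

# Synthetic 1-D scan line: a unit step edge at index 100, plus noise.
rng = np.random.default_rng(0)
n = 200
clean = np.where(np.arange(n) >= 100, 1.0, 0.0)
noisy = clean + 0.1 * rng.standard_normal(n)

# Detection: convolve the scan line with the detection filter.
h = deriv_gaussian(sigma=3.0, radius=9)
response = np.convolve(noisy, h, mode="same")

# Localization: differentiate the detector output and find its zeros
# (extrema of the response); keep the strongest one as the edge.
d = np.diff(response)
zeros = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0]
edge = zeros[np.argmax(np.abs(response[zeros]))]
print(edge)  # close to the true edge position, 100
```

Noise perturbs the derivative everywhere, producing many spurious zeros; ranking the zeros by detection-filter response magnitude is what couples localization back to detection.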