Fast Recursive Estimation Using the Lattice Structure
01 January 1982
Gradient algorithms are widely used in adaptive tapped delay-line filters, such as equalizers, to derive a set of tap coefficients that gives the desired output with a minimum mean square error (mmse). It is widely recognized [1] that when the input samples presented to the adaptive system are highly correlated, convergence to the optimum filter coefficients is slow. An important contribution to solving this problem of slow convergence was made by Godard [2], who obtained an adaptive algorithm that minimizes the total mse at every instant of time. Consequently, the Godard algorithm has the fastest possible rate of convergence in an mmse sense and is usually referred to as the optimal mean-square adaptive estimator. This algorithm has the structure of a Kalman filter, and its complexity is on the order of N² multiplications and additions per iteration, where N is the number of filter coefficients being adjusted. Fast convergence results when successive corrections to the coefficient vector are adaptively decorrelated. Based on this observation, other practical, less complex schemes for orthogonalizing the corrections have been proposed; see, for example, Ref. 1.

Recently, an efficient (or "fast") computing procedure, called the fast Kalman algorithm, was obtained that provides a fast-converging estimator identical to that of Godard, but requires only on the order of 10N multiplications [3,4,6]. Another approach to accelerated convergence is to transform the input data to obtain uncorrelated inputs to the estimator [6]. When the characteristics of the channel are fixed and known, the transformation can be found from the data autocorrelation matrix.
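To make the gradient approach concrete, the following is a minimal sketch of the basic tapped delay-line gradient (LMS-type) update described above, applied to identifying an unknown FIR system. The system taps, step size, and signal lengths are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: identify an unknown 4-tap FIR system with a
# gradient (LMS-type) adaptive tapped delay-line filter.
h_true = np.array([1.0, -0.5, 0.25, 0.1])  # unknown system (assumed values)
N = len(h_true)                            # number of adjustable taps
n_samples = 5000
mu = 0.01                                  # gradient step size (assumed)

x = rng.standard_normal(n_samples)         # white (uncorrelated) input
d = np.convolve(x, h_true)[:n_samples]     # desired output of unknown system

w = np.zeros(N)                            # adaptive tap coefficients
buf = np.zeros(N)                          # tapped delay line
for n in range(n_samples):
    buf = np.roll(buf, 1)                  # shift the delay line
    buf[0] = x[n]
    y = w @ buf                            # filter output
    e = d[n] - y                           # estimation error
    w += mu * e * buf                      # gradient correction to the taps

print(np.round(w, 2))                      # taps approach h_true
```

With white input, as here, the corrections are naturally decorrelated and convergence is fast; feeding the same loop a highly correlated input (e.g. an AR(1) process) markedly slows convergence, which is the problem the Godard and fast Kalman algorithms address.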