Robustness conditions of the LMS algorithm with time-variant matrix step-size
01 September 2000
Gradient-type algorithms commonly employ a scalar step-size, i.e., each entry of the regression vector is multiplied by the same value before the coefficients are updated. More flexibility, however, is obtained when the step-size is a matrix. A matrix step-size allows not only individual scaling of the entries of the regression vector but also rotations and decorrelations, depending on the choice of the matrix. A well-known example of a fixed step-size matrix is the Newton-LMS algorithm, and for such a fixed matrix the conditions under which a gradient-type algorithm converges are well known. This article, in contrast, presents robustness and convergence conditions for a least-mean-square (LMS) algorithm with a time-variant matrix step-size. Using the example of a channel estimator in a cellular handset, it is shown that the choice of a particular step-size matrix leads to considerable improvement over the fixed step-size case. (C) 2000 Elsevier Science B.V. All rights reserved.
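To make the difference between a scalar and a matrix step-size concrete, the following is a minimal sketch of an LMS update with a (here fixed, diagonal) step-size matrix in a simple system-identification setting. The unknown coefficients, step-size values, and noise level are illustrative assumptions, not taken from the article; a full matrix choice (e.g., an inverse input-correlation estimate, as in Newton-LMS) or a time-variant one would replace `M` below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown system (e.g., a channel impulse response) to identify.
w_true = np.array([0.5, -0.3, 0.2])
n_taps = len(w_true)

# Matrix step-size M: a diagonal choice scales each tap individually;
# a full matrix could additionally rotate/decorrelate the regression vector.
M = np.diag([0.1, 0.05, 0.02])

w = np.zeros(n_taps)      # adaptive coefficients
x_buf = np.zeros(n_taps)  # regression vector (tap-delay line)

for _ in range(5000):
    # Shift a new white-noise input sample into the tap-delay line.
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()

    d = w_true @ x_buf + 0.01 * rng.standard_normal()  # noisy desired signal
    e = d - w @ x_buf                                  # a priori error

    # LMS update with matrix step-size: w <- w + M x e
    # (scalar LMS is the special case M = mu * I).
    w = w + M @ x_buf * e
```

With a positive-definite `M` whose eigenvalues are small enough relative to the input power, the coefficients converge toward `w_true`; the per-tap entries of `M` control how fast each coefficient adapts.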