Session: SPTM-L3
Time: 9:30 - 11:30, Friday, May 11, 2001
Location: Room 251 D
Title: Adaptive RLS Filters
Chair: Ali Sayed

9:30, SPTM-L3.1
A HUBER RECURSIVE LEAST SQUARES ADAPTIVE LATTICE FILTER FOR IMPULSE NOISE SUPPRESSION
Y. ZOU, S. CHAN
This paper proposes a new adaptive filtering algorithm, the Huber Prior Error-Feedback Least Squares Lattice (H-PEF-LSL) algorithm, for robust adaptive filtering in impulse noise environments. It minimizes a modified Huber M-estimator-based cost function instead of the least squares cost function. Moreover, this simple modified Huber M-estimate cost function preserves the time- and order-recursive updates of the conventional PEF-LSL algorithm, so that the complexity can be significantly reduced to O(M), where M is the length of the adaptive filter. The new algorithm can also be viewed as an efficient implementation of the recursive least M-estimate (RLM) algorithm recently proposed by the authors [1], which has a complexity of O(M^2). Simulation results show that the proposed H-PEF-LSL algorithm is more robust than the conventional PEF-LSL algorithm in suppressing the adverse influence of impulses in the input and desired signals, at a small additional computational cost.
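The core robustness idea, down-weighting large residuals through the Huber score before they enter the RLS update, can be sketched in a simple transversal (non-lattice) form. This is an illustrative M-estimate RLS in Python, not the paper's O(M) H-PEF-LSL lattice; the threshold k and all parameter values are assumptions.

```python
import numpy as np

def huber_weight(e, k=1.345):
    """Huber weight: 1 for small errors, k/|e| for outliers."""
    ae = abs(e)
    return 1.0 if ae <= k else k / ae

def m_estimate_rls(x, d, M=4, lam=0.99, delta=1e2, k=1.345):
    """Toy transversal M-estimate RLS: each a priori error is
    Huber-weighted before it updates the weight vector and the
    inverse correlation matrix, so impulses are de-emphasized.
    Illustrative only -- not the H-PEF-LSL lattice recursions."""
    w = np.zeros(M)
    P = delta * np.eye(M)          # inverse correlation matrix estimate
    u = np.zeros(M)                # tapped-delay-line regression vector
    for n in range(len(x)):
        u = np.roll(u, 1)
        u[0] = x[n]
        e = d[n] - w @ u           # a priori error
        q = huber_weight(e, k)     # robust down-weighting of impulses
        Pu = P @ u
        g = (q * Pu) / (lam + q * (u @ Pu))   # weighted gain vector
        w = w + g * e
        P = (P - np.outer(g, Pu)) / lam
    return w
```

On impulse-free data this reduces to ordinary exponentially weighted RLS, since the weight is 1 for small residuals.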

9:50, SPTM-L3.2
KAGE: A NEW FAST RLS ALGORITHM
I. SKIDMORE, I. PROUDLER
A new fast Recursive Least Squares (RLS) algorithm is introduced. By making use of RLS interpolation as well as prediction, the algorithm generates the transversal filter weights without suffering the poor numerical attributes of the FTF algorithm. The Kalman gain vector is generated at each time step in terms of interpolation residuals, which are calculated in an order-recursive manner. For an Nth-order problem the procedure requires O(N log N) operations, achieved via a divide-and-conquer approach. Computer simulations suggest the new algorithm is numerically robust, running successfully for many millions of iterations.
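For context, the quantity such fast algorithms compute cheaply is the Kalman gain vector of conventional RLS. A minimal sketch of the standard O(N^2) gain and inverse-correlation update that KaGE replaces (the interpolation-residual machinery itself is not shown; parameter values are illustrative):

```python
import numpy as np

def rls_gain_update(P, u, lam=0.99):
    """One conventional O(N^2) RLS update: Kalman gain vector
    k = P u / (lam + u' P u) and the matrix-inversion-lemma update
    of the inverse correlation matrix P. Fast algorithms obtain k
    without propagating the full matrix P."""
    Pu = P @ u
    k = Pu / (lam + u @ Pu)
    P_new = (P - np.outer(k, Pu)) / lam
    return k, P_new
```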

10:10, SPTM-L3.3
NONLINEAR RLS ALGORITHM USING VARIABLE FORGETTING FACTOR IN MIXTURE NOISE
C. SO, S. LEUNG
In impulsive noise environments, most learning algorithms have difficulty determining whether a large error signal is caused by impulse noise or by model error. Consequently, they suffer from large misadjustment or, otherwise, slow convergence. A new nonlinear RLS adaptive algorithm with a variable forgetting factor (VFF-NRLS) for FIR filters is introduced. In this algorithm, the autocorrelations at non-zero lags, which are insensitive to white noise, are used to control the forgetting factor of the nonlinear RLS. This scheme gives the algorithm fast tracking capability and small misadjustment. Experimental results show that the new algorithm can outperform other RLS algorithms.
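The control principle, that white noise contributes nothing to the error autocorrelation at non-zero lags, so a large lag-1 autocorrelation signals model mismatch and should shrink the forgetting factor, can be sketched as follows. The smoothing constant, scaling, and lambda bounds are illustrative assumptions, not the paper's exact rule:

```python
def variable_forgetting_factor(e, e_prev, r1, alpha=0.95,
                               lam_min=0.90, lam_max=0.999, c=10.0):
    """Sketch of a variable forgetting factor driven by the lag-1
    autocorrelation of the a priori error. White measurement noise
    is uncorrelated at non-zero lags, so a large |r1| indicates model
    mismatch and calls for a smaller (faster-tracking) lambda."""
    r1 = alpha * r1 + (1 - alpha) * e * e_prev   # smoothed lag-1 autocorr
    lam = lam_max - (lam_max - lam_min) * min(1.0, c * abs(r1))
    return lam, r1
```

When the model matches (errors are white noise), r1 stays near zero and lambda stays near lam_max for small misadjustment; after an abrupt system change the errors become correlated and lambda drops toward lam_min for fast tracking.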

10:30, SPTM-L3.4
AN ADAPTIVE RLS SOLUTION TO THE OPTIMAL MINIMUM POWER FILTERING PROBLEM WITH A MAX/MIN FORMULATION
Z. TIAN, K. BELL
In signal processing, there are problems where the processed output energy is maximized while the noise component is minimized. This gives rise to a max/min problem, which is equivalent to a generalized eigenvalue problem. Exemplary applications of the max/min formulation include Capon's blind beamforming method and blind minimum output energy detection in CDMA wireless communications. The solution to such a problem involves eigen-decomposition of a transformed data covariance matrix inverse, which is computationally expensive to implement. This paper offers an adaptive RLS solution to the optimal minimum power filtering problem without eigen-decompositions. It is based on a new Recursive Least Squares updating procedure that handles multiple linear constraints, and uses a one-dimensional subspace tracking method to update the filter weights. Its performance is comparable with that of direct eigen-decomposition and matrix inversion.
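The expensive baseline the adaptive scheme avoids can be made concrete: maximizing a ratio of quadratic forms w'Aw / w'Bw is solved by the generalized eigenvector of (A, B) with the largest eigenvalue. A minimal direct (non-adaptive) solver for context, assuming B is symmetric positive definite:

```python
import numpy as np

def maxmin_direct(A, B):
    """Direct solution of max_w (w'Aw)/(w'Bw): the generalized
    eigenvector of (A, B) with the largest eigenvalue. This is the
    batch eigen-decomposition baseline, not the paper's RLS update."""
    # Whiten by B via Cholesky (B = L L'), then solve the ordinary
    # symmetric eigenproblem for L^{-1} A L^{-T}.
    L = np.linalg.cholesky(B)
    Linv = np.linalg.inv(L)
    C = Linv @ A @ Linv.T
    vals, vecs = np.linalg.eigh(C)
    w = Linv.T @ vecs[:, -1]       # map back; largest eigenvalue is last
    return w / np.linalg.norm(w)
```

Each such solve costs a full decomposition per update, which motivates tracking only the needed one-dimensional subspace recursively instead.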

10:50, SPTM-L3.5
A ROBUST FAST RECURSIVE LEAST SQUARES ADAPTIVE ALGORITHM
J. BENESTY, T. GANSLER
Very often, in the context of system identification, the error signal, which is by definition the difference between the system and model filter outputs, is assumed to be zero-mean, white, and Gaussian. In this case, the least squares estimator is equivalent to the maximum likelihood estimator and hence is asymptotically efficient. While this supposition is very convenient and extremely useful in practice, adaptive algorithms optimized under it may be very sensitive to minor deviations from the assumptions. We propose here to model this error with a robust distribution and to deduce from it a robust fast recursive least squares adaptive algorithm (least squares is a misnomer here, but convenient to use). We then show how to successfully apply this new algorithm to the problem of network echo cancellation combined with a double-talk detector.

11:10, SPTM-L3.6
ORTHONORMAL REALIZATION OF FAST FIXED-ORDER RLS ADAPTIVE FILTERS
R. MERCHED, A. SAYED
The existing derivations of fast RLS adaptive filters depend on the shift structure of the input regression vectors. This structure arises when a tapped-delay-line (FIR) filter is used as the modeling filter. In this paper, we show that, contrary to what the original derivations may suggest, fast fixed-order RLS adaptive algorithms are not limited to FIR filter structures. We show that fast recursions in both explicit and array forms exist for more general data structures, such as orthonormally-based models. One benefit of working with an orthonormal basis is that fewer parameters can be used to model long impulse responses.
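A classic example of such an orthonormally-based model is the Laguerre filter bank, whose regression vector replaces the tapped delay line with one low-pass stage followed by identical all-pass sections. A sketch generating Laguerre regressors in Python; the pole value a=0.5 is an illustrative assumption (in practice a is matched to the dominant time constant so that few taps capture a long impulse response):

```python
import numpy as np

def laguerre_regressors(u, M=4, a=0.5):
    """Generate Laguerre-basis regression vectors for input u:
    channel 0 is a first-order low-pass with pole a and gain
    sqrt(1 - a^2); each later channel passes the previous one
    through the all-pass section (z^{-1} - a)/(1 - a z^{-1}).
    The channel impulse responses are orthonormal in l2."""
    N = len(u)
    X = np.zeros((N, M))
    g = np.sqrt(1.0 - a * a)
    for n in range(N):
        prev = X[n - 1] if n > 0 else np.zeros(M)
        X[n, 0] = a * prev[0] + g * u[n]
        for k in range(1, M):
            # all-pass section: y(n) = a y(n-1) + x(n-1) - a x(n)
            X[n, k] = a * prev[k] + prev[k - 1] - a * X[n, k - 1]
    return X
```

For a = 0 this degenerates to the ordinary tapped delay line, which is why the FIR case appears as a special instance of the more general data structures the paper treats.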