Compared to least mean squares (LMS) algorithms, recursive least squares (RLS) algorithms converge faster, and their convergence speed is not sensitive to the eigenvalue spread of the input correlation matrix. However, RLS algorithms involve more complicated mathematical operations and require more computational resources than LMS algorithms.
The standard RLS algorithm performs the following operations to update the coefficients of an adaptive filter:

e(n) = d(n) − w^T(n−1)·u(n)
w(n) = w(n−1) + k(n)·e(n)

where u(n) is the input vector, d(n) is the desired response, e(n) is the a priori estimation error, w(n) is the filter coefficients vector, and k(n) is the gain vector. k(n) is defined by the following equation:

k(n) = P(n−1)·u(n) / (λ + u^T(n)·P(n−1)·u(n))

where λ is the forgetting factor and P(n) is the inverse correlation matrix of the input signal. Refer to the book Adaptive Filter Theory for more information about the inverse correlation matrix. The algorithm initializes this matrix as:

P(0) = δ⁻¹·I

where δ is the regularization factor and I is the identity matrix. The standard RLS algorithm uses the following equation to update this inverse correlation matrix:

P(n) = λ⁻¹·P(n−1) − λ⁻¹·k(n)·u^T(n)·P(n−1)
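The update equations above can be sketched in a few lines of code. The following is a minimal NumPy illustration, not the toolkit's implementation (the Adaptive Filter Toolkit VIs are LabVIEW-based); the function name `rls_filter` and its parameter defaults are hypothetical.

```python
import numpy as np

def rls_filter(u, d, num_taps, lam=0.99, delta=0.01):
    """Sketch of the standard RLS coefficient update (hypothetical helper).

    u   -- input signal samples
    d   -- desired response samples
    lam -- forgetting factor (lambda)
    delta -- regularization factor for P(0) = I/delta
    """
    n_samples = len(u)
    w = np.zeros(num_taps)            # filter coefficients vector w(n)
    P = np.eye(num_taps) / delta      # inverse correlation matrix, P(0) = I/delta
    x = np.zeros(num_taps)            # tapped delay line u(n), newest sample first
    y = np.zeros(n_samples)
    e = np.zeros(n_samples)
    for n in range(n_samples):
        x = np.roll(x, 1)
        x[0] = u[n]
        y[n] = w @ x                  # filter output
        e[n] = d[n] - y[n]            # a priori estimation error e(n)
        Px = P @ x
        k = Px / (lam + x @ Px)       # gain vector k(n)
        w = w + k * e[n]              # coefficient update
        P = (P - np.outer(k, Px)) / lam  # inverse correlation matrix update
    return w, y, e
```

In a noise-free system-identification setup (d produced by filtering u with a short FIR filter), the coefficients converge to the unknown filter within a few hundred samples.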
Use the AFT Create FIR RLS VI to create an adaptive filter with the standard RLS algorithm.
Although the standard RLS algorithm creates adaptive filters with a fast convergence speed, it diverges when the inverse correlation matrix P(n) loses its positive definiteness or Hermitian symmetry, which limits the applications of this algorithm. The QR decomposition-based RLS (QR-RLS) algorithm resolves this instability. Instead of working with the inverse correlation matrix of the input signal, the QR-RLS algorithm performs QR decomposition directly on the correlation matrix of the input signal. This approach guarantees positive definiteness and is more numerically stable than the standard RLS algorithm. However, the QR-RLS algorithm requires more computational resources than the standard RLS algorithm.
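To make the idea concrete, the sketch below illustrates one square-root formulation of QR-RLS: instead of P(n), it propagates an upper-triangular factor R(n) with R^T(n)·R(n) equal to the input correlation matrix, updating it by QR decomposition of a stacked "prearray". This is an assumed illustrative variant (the exact rotation scheme used by the toolkit's VI is not shown here), and the function name `qr_rls_filter` is hypothetical.

```python
import numpy as np

def qr_rls_filter(u, d, num_taps, lam=0.99, delta=0.01):
    """Sketch of a square-root (QR) RLS update (hypothetical helper).

    Propagates the Cholesky-like factor R(n) of the correlation matrix
    Phi(n) = lam*Phi(n-1) + x(n)x(n)^T, so Phi(n) stays positive
    definite by construction.
    """
    sqrt_lam = np.sqrt(lam)
    R = np.sqrt(delta) * np.eye(num_taps)  # R(0)^T R(0) = delta * I
    z = np.zeros(num_taps)                 # rotated vector; R^T z tracks the cross-correlation
    w = np.zeros(num_taps)                 # current coefficient estimate
    x = np.zeros(num_taps)                 # tapped delay line, newest sample first
    e = np.zeros(len(u))
    for n in range(len(u)):
        x = np.roll(x, 1)
        x[0] = u[n]
        e[n] = d[n] - w @ x                # a priori estimation error
        # Prearray: weight the old factor by sqrt(lam), append the new
        # input row, then annihilate that row with a QR decomposition.
        A = np.vstack([sqrt_lam * R, x[np.newaxis, :]])
        b = np.append(sqrt_lam * z, d[n])
        Q, R = np.linalg.qr(A)             # R(n)^T R(n) = lam*Phi(n-1) + x x^T
        z = Q.T @ b                        # carry the cross-correlation along
        w = np.linalg.solve(R, z)          # back-substitution: R(n) w = z(n)
    return w, e
```

Because each step updates a triangular factor of the correlation matrix rather than its inverse, the implied correlation matrix can never lose positive definiteness; the extra cost is one QR decomposition per sample.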
Use the AFT Create FIR QR-RLS VI to create an adaptive filter with the QR-RLS algorithm.