As the diagram of an adaptive filter shows, each adaptive filter consists of two parts: a linear filter and an adaptive algorithm. The linear filter can be of different types, such as a finite impulse response (FIR) or an infinite impulse response (IIR) filter. The LabVIEW Adaptive Filter Toolkit supports the FIR filter type only.
The following figure shows the diagram of an FIR adaptive filter.
|where||x(n) is the input signal to a linear filter at time n|
|y(n) is the corresponding output signal|
|d(n) is another input signal to the adaptive filter|
|e(n) is the error signal that denotes the difference between d(n) and y(n)|
|z⁻¹ is a unit delay|
|wi(n) is the multiplicative gain, also known as the ith filter coefficient|
|i is an integer in the range [0, N–1], where N is the filter length|
The adaptive algorithm adjusts wi(n) iteratively to minimize the power of e(n).
The adaptive filter calculates the output signal y(n) by using the following equation:
y(n) = wᵀ(n)x(n) = w0(n)x(n) + w1(n)x(n–1) + … + wN–1(n)x(n–N+1)
|where||x(n) = [x(n), x(n–1), …, x(n–N+1)]ᵀ is the filter input vector.|
|w(n) = [w0(n), w1(n), …, wN–1(n)]ᵀ is the filter coefficients vector.|
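Because the Toolkit implements this computation as a LabVIEW VI, the following is only a text-based sketch: the FIR output y(n) as a dot product of the coefficients vector and the input vector, using hypothetical coefficient and input values.

```python
import numpy as np

def fir_output(w, x_history):
    """FIR filter output: y(n) = w0*x(n) + w1*x(n-1) + ... + w_{N-1}*x(n-N+1).

    w         -- filter coefficients [w0, ..., w_{N-1}]
    x_history -- filter input vector [x(n), x(n-1), ..., x(n-N+1)]
    """
    return float(np.dot(w, x_history))

# Hypothetical example: a 3-tap moving-average filter (each wi = 1/3).
w = np.array([1/3, 1/3, 1/3])
x_history = np.array([3.0, 6.0, 9.0])   # x(n), x(n-1), x(n-2)
print(fir_output(w, x_history))         # close to 6.0, the mean of the samples
```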
You can apply different algorithms to the FIR adaptive filter to control how the filter adjusts the coefficients. The adaptive algorithms adjust the filter coefficients to minimize the following cost function J(n):
J(n) = E[e²(n)]
where E[e²(n)] is the expectation of e²(n), and e²(n) is the square of the error signal at time n. Depending on how the adaptive filter algorithms calculate the cost function J(n), the Adaptive Filter Toolkit categorizes those algorithms into the following two groups:
The least mean squares (LMS) algorithms calculate only the instantaneous value e²(n).
The recursive least squares (RLS) algorithms calculate a weighted sum of the current and past squared error values:
J(n) = e²(n) + λe²(n–1) + λ²e²(n–2) + … + λ^(N–1)e²(n–N+1)
where N is the filter length and λ is the forgetting factor. This cost function includes not only the instantaneous value e²(n) but also the past values e²(n–1), e²(n–2), …, e²(n–N+1). The value range of the forgetting factor is (0, 1]. A forgetting factor less than 1 places a larger weight on the current value and smaller weights on the past values. The resulting estimate of E[e²(n)] from the RLS algorithms is more accurate than that of the LMS algorithms.
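The weighting described above, with a larger weight on the current squared error when λ < 1, can be illustrated numerically. This Python sketch evaluates the forgetting-factor-weighted sum of squared errors for a hypothetical error history and λ = 0.9; neither value comes from the Toolkit.

```python
# Hypothetical error history, oldest first: e(n-3), e(n-2), e(n-1), e(n).
e = [0.5, 0.4, 0.2, 0.1]
lam = 0.9          # forgetting factor, in (0, 1]
N = len(e)

# Weighted cost: the newest term e(n)^2 gets weight lam^0 = 1,
# older terms get progressively smaller weights lam^1, lam^2, ...
J = sum(lam**i * e[-1 - i]**2 for i in range(N))
print(J)           # the old, large errors still dominate here because lam is close to 1
```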
The LMS algorithms require fewer computational resources and memory than the RLS algorithms. However, the eigenvalue spread of the input correlation matrix, or the correlation matrix of the input signal, might affect the convergence speed of the resulting adaptive filter. The convergence speed of the RLS algorithms is much faster than that of the LMS algorithms. However, the RLS algorithms require more computational resources than the LMS algorithms.
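To make the LMS coefficient adjustment concrete, the following is a generic, text-based sketch of a standard LMS loop in Python/NumPy, not the Toolkit's LabVIEW implementation. The step size mu is a standard LMS parameter that this section does not discuss, and the system-identification scenario is invented for illustration.

```python
import numpy as np

def lms_filter(x, d, N=4, mu=0.05):
    """Minimal standard LMS sketch: adapt w iteratively to reduce e(n)^2.

    x  -- input signal, d -- desired (second input) signal,
    N  -- filter length, mu -- step size (assumed here, not from this section).
    Returns the final coefficients and the error signal e(n).
    """
    w = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(N - 1, len(x)):
        u = x[n - N + 1:n + 1][::-1]   # input vector [x(n), ..., x(n-N+1)]
        y = np.dot(w, u)               # linear filter output y(n)
        e[n] = d[n] - y                # error e(n) = d(n) - y(n)
        w += 2 * mu * e[n] * u         # gradient-descent coefficient update
    return w, e

# Invented scenario: d is x passed through an "unknown" 4-tap FIR system,
# so the adapted coefficients should approach that system's taps.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(x, h)[:len(x)]
w, e = lms_filter(x, d)
print(np.round(w, 2))                  # approaches h as e(n) shrinks
```

With a white-noise input as above, the input correlation matrix is well conditioned and the loop converges quickly; a correlated input would slow it down, as the eigenvalue-spread discussion below describes.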
The eigenvalue spread, defined by the following equation, is the ratio between the maximum and minimum eigenvalues of the input correlation matrix.
χ = λmax/λmin
where λmax and λmin are the maximum and minimum eigenvalues of the input correlation matrix, respectively. The input correlation matrix R has dimensions of N×N, where N is the filter length. This matrix is defined by the following equation:
R = E[x(n)xᵀ(n)]
where x(n) = [x(n), x(n–1), …, x(n–N+1)]ᵀ is the filter input vector and E[x] is the mathematical expectation of x.
A large eigenvalue spread value of the input correlation matrix degrades the convergence of the resulting adaptive filter.
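To see this effect numerically, the following NumPy sketch estimates the input correlation matrix from samples and computes χ for a white-noise input and for a hypothetical correlated input; the filter length of 4 and the coloring filter are invented values.

```python
import numpy as np

def eigenvalue_spread(x, N):
    """Eigenvalue spread chi = lambda_max / lambda_min of the N x N
    input correlation matrix R = E[x(n) x(n)^T], estimated from samples."""
    # Stack delayed input vectors [x(n), x(n-1), ..., x(n-N+1)] as rows.
    X = np.array([x[n - N + 1:n + 1][::-1] for n in range(N - 1, len(x))])
    R = (X.T @ X) / len(X)              # sample estimate of E[x x^T]
    eigvals = np.linalg.eigvalsh(R)     # ascending order; R is symmetric
    return eigvals[-1] / eigvals[0]

rng = np.random.default_rng(0)
white = rng.standard_normal(20000)
colored = np.convolve(white, [1.0, 0.9], mode="same")  # correlated input

print(eigenvalue_spread(white, 4))     # near 1 for white noise
print(eigenvalue_spread(colored, 4))   # larger: slows LMS convergence
```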