When you use an adaptive filter to filter a signal, you can analyze the performance of the adaptive filter, such as the convergence speed, steady state error, and stability. Analyzing adaptive filter performance ensures that the adaptive filter meets the application requirements.
As shown in the diagram of an adaptive filter, adaptive filters attempt to minimize the power of the error signal by iteratively adjusting the filter coefficients. The process of minimizing the power of the error signal is known as convergence. A fast convergence indicates that the adaptive filter takes a short time to calculate the appropriate filter coefficients that minimize the power of the error signal. You must consider the convergence speed of an adaptive filter when you create and use the filter. You can evaluate the convergence speed of an adaptive filter by displaying the error signals or learning curves of the filter.
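The toolkit implements adaptive filters as LabVIEW VIs, so the following is only an illustrative sketch of the iterative coefficient adjustment described above, written in Python with NumPy. The unknown system `h_true`, the step size `mu`, and the filter length `n_taps` are hypothetical values chosen for the example, not values from this help topic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system-identification setup: the adaptive filter models
# an unknown FIR system h_true from its input and output signals.
h_true = np.array([0.5, -0.3, 0.2])
n_taps = 3        # adaptive filter length
mu = 0.1          # step size
n_samples = 2000

x = rng.standard_normal(n_samples)      # input signal
d = np.convolve(x, h_true)[:n_samples]  # desired signal

w = np.zeros(n_taps)       # filter coefficients, adjusted iteratively
e = np.zeros(n_samples)    # error signal e(n)

for n in range(n_taps - 1, n_samples):
    u = x[n - n_taps + 1:n + 1][::-1]   # newest input sample first
    y = w @ u                           # filter output
    e[n] = d[n] - y                     # error signal
    w = w + mu * e[n] * u               # LMS coefficient update

# As the filter converges, the power of the error signal decreases.
early_power = np.mean(e[n_taps:200] ** 2)
late_power = np.mean(e[-200:] ** 2)
```

Plotting `e` over the iteration index shows the error amplitude shrinking as the coefficients approach those that minimize the error power, which is the convergence behavior this topic describes.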
You can compare the convergence speeds of different adaptive filters by displaying the error signals. The following figure shows the relative convergence speeds between the standard LMS, fast block LMS, and QR decomposition-based recursive least squares (QR-RLS) algorithms. This example uses these three algorithms to filter the same signal.
The previous figure shows that different algorithms result in different convergence speeds.
In addition to the algorithm, the step size and filter length of an adaptive filter also affect the convergence speed. You must choose an appropriate step size and filter length to ensure that the convergence speed of the adaptive filter satisfies the application requirements.
Refer to the Convergence Speed and Computation Time of Adaptive Filters VI in the examples\Adaptive Filters\Getting Started directory for an example that compares the convergence speeds of adaptive filters. This example uses the same filter length for the three algorithms and uses the same step size for the standard and fast block LMS algorithms.
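The effect of the step size on convergence speed can be sketched as follows, again as an illustrative NumPy analogue rather than the toolkit's VIs. The step-size values and the system `h_true` are hypothetical; the comparison simply shows that, for the same filter length, a larger step size drives the error power down sooner.

```python
import numpy as np

rng = np.random.default_rng(1)

def lms_error(mu, n_taps=3, n_samples=400):
    """Run an LMS adaptive filter on a system-identification task
    and return the error signal."""
    h_true = np.array([0.5, -0.3, 0.2])
    x = rng.standard_normal(n_samples)
    d = np.convolve(x, h_true)[:n_samples]
    w = np.zeros(n_taps)
    e = np.zeros(n_samples)
    for n in range(n_taps - 1, n_samples):
        u = x[n - n_taps + 1:n + 1][::-1]
        e[n] = d[n] - w @ u
        w = w + mu * e[n] * u
    return e

# Same filter length, different step sizes: the larger step size
# converges faster, so its error power in an early window is smaller.
e_fast = lms_error(mu=0.2)
e_slow = lms_error(mu=0.01)
window = slice(100, 200)
fast_power = np.mean(e_fast[window] ** 2)
slow_power = np.mean(e_slow[window] ** 2)
```

Note that a larger step size is not always better; as discussed later in this topic, it can increase the steady state error or even make the filter unstable.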
In addition to displaying the amplitude of the error signal, you also can display the learning curve of an adaptive filter to evaluate the convergence speed. The learning curve is a plot of the mean square error (MSE) of the error signal as a function of the iteration index. You can calculate the MSE of the error signal with the following equation:

MSE(n) = (1/T) * Σ (k = 1 to T) [Ek(n)]^2
where T is the number of times you execute the filter and Ek(n) is the error signal e(n) at the kth trial. The following figure shows the learning curves of an adaptive filter with different numbers of trials.
In the previous figure, the MSE of the error signal converges to zero and becomes steady after approximately 400 iterations. This figure also shows that a large number of trials can decrease the variance of the MSE. However, more trials also require a longer computation time.
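The learning-curve calculation, averaging the squared error signal over T independent trials, can be sketched as follows. This is an illustrative NumPy version; the trial setup and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def one_trial(mu=0.1, n_taps=3, n_samples=500):
    """One execution of the adaptive filter; returns the error signal."""
    h_true = np.array([0.5, -0.3, 0.2])
    x = rng.standard_normal(n_samples)
    d = np.convolve(x, h_true)[:n_samples]
    w = np.zeros(n_taps)
    e = np.zeros(n_samples)
    for n in range(n_taps - 1, n_samples):
        u = x[n - n_taps + 1:n + 1][::-1]
        e[n] = d[n] - w @ u
        w = w + mu * e[n] * u
    return e

T = 50  # number of trials, i.e., times you execute the filter
trials = np.array([one_trial() for _ in range(T)])

# MSE(n): average of Ek(n)^2 over the T trials -- the learning curve.
learning_curve = np.mean(trials ** 2, axis=0)
```

Plotting `learning_curve` against the iteration index reproduces the behavior described above: the curve decays toward zero as the filter converges, and increasing `T` smooths out the trial-to-trial variance at the cost of more computation.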
Refer to the Display the Learning Curve VI in the examples\Adaptive Filters\Getting Started\Display the Learning Curve directory for an example that calculates and displays the learning curve of an adaptive filter.
The learning curve of an adaptive filter gradually converges and becomes steady at a certain MSE value of the error signal e(n). The difference between that steady MSE value and zero is known as the steady state error. An optimal adaptive filter typically has a small steady state error. You can minimize the steady state error by adjusting the step size and filter length of the adaptive filter.
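The trade-off between step size and steady state error can be sketched as follows. This NumPy example is illustrative only; measurement noise (`noise_std`) is added to the desired signal so that the steady state error is nonzero, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def steady_state_mse(mu, n_taps=3, n_samples=20000, noise_std=0.1):
    """Run LMS with measurement noise in the desired signal and
    return the average squared error over the final quarter of samples."""
    h_true = np.array([0.5, -0.3, 0.2])
    x = rng.standard_normal(n_samples)
    noise = noise_std * rng.standard_normal(n_samples)
    d = np.convolve(x, h_true)[:n_samples] + noise
    w = np.zeros(n_taps)
    e = np.zeros(n_samples)
    for n in range(n_taps - 1, n_samples):
        u = x[n - n_taps + 1:n + 1][::-1]
        e[n] = d[n] - w @ u
        w = w + mu * e[n] * u
    return np.mean(e[-n_samples // 4:] ** 2)

# A smaller step size yields a smaller steady state error,
# at the cost of slower convergence.
mse_large_mu = steady_state_mse(mu=0.2)
mse_small_mu = steady_state_mse(mu=0.01)
```

With the smaller step size, the steady MSE sits close to the noise floor, while the larger step size adds excess error on top of it. This is why tuning the step size, as the text recommends, reduces the steady state error.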
An adaptive filter becomes stable after the power of the error signal converges to zero, as shown in the following figure:
In some cases, the power of the error signal does not converge to zero. Instead, the power of the error signal diverges, as shown in the following figure:
If the power of the error signal does not converge to zero, the adaptive filter is unstable. You can try reducing the step size to stabilize the adaptive filter.
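The stability behavior above can be sketched as follows, as an illustrative NumPy example with hypothetical parameter values: with too large a step size the error power diverges, and reducing the step size restores convergence.

```python
import numpy as np

rng = np.random.default_rng(4)

def tail_error_power(mu, n_taps=3, n_samples=3000):
    """Run LMS and return the average squared error over the last
    100 samples; return inf if the coefficients diverge."""
    h_true = np.array([0.5, -0.3, 0.2])
    x = rng.standard_normal(n_samples)
    d = np.convolve(x, h_true)[:n_samples]
    w = np.zeros(n_taps)
    e = np.zeros(n_samples)
    for n in range(n_taps - 1, n_samples):
        u = x[n - n_taps + 1:n + 1][::-1]
        e[n] = d[n] - w @ u
        w = w + mu * e[n] * u
        # Guard against overflow once the filter diverges.
        if not np.isfinite(w).all() or np.abs(w).max() > 1e9:
            return np.inf
    return np.mean(e[-100:] ** 2)

# An overly large step size makes the filter unstable; a smaller
# step size lets the error power converge toward zero.
unstable_power = tail_error_power(mu=2.0)
stable_power = tail_error_power(mu=0.05)
```

For LMS-type algorithms, the stable range of step sizes shrinks as the input power and filter length grow, which is why reducing the step size is the first remedy to try when the error signal diverges.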