The least mean squares (LMS) algorithms adjust the filter coefficients to minimize the cost function. Unlike the recursive least squares (RLS) algorithms, the LMS algorithms do not involve any matrix operations, so they require fewer computational resources and less memory than the RLS algorithms, and their implementation is also less complicated. However, the eigenvalue spread of the correlation matrix of the input signal might affect the convergence speed of the resulting adaptive filter.
The standard LMS algorithm performs the following operations to update the coefficients of an adaptive filter:

1. Calculates the output signal y(n) = wᵀ(n)x(n), where x(n) is the vector of the most recent input samples and w(n) is the vector of filter coefficients.
2. Calculates the error signal e(n) = d(n) − y(n), where d(n) is the desired signal.
3. Updates the filter coefficients by using the equation w(n + 1) = w(n) + μe(n)x(n), where μ is the step size.
Use the AFT Create FIR LMS VI to create an adaptive filter with the standard LMS algorithm.
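The standard LMS iteration can be sketched in NumPy as follows. The function and variable names here are illustrative, not part of the toolkit; the AFT VIs perform the equivalent work internally.

```python
import numpy as np

def lms_step(w, x_buf, d, mu):
    """One standard LMS iteration.
    w     -- current coefficient vector
    x_buf -- most recent input samples, newest first (same length as w)
    d     -- desired-signal sample d(n)
    mu    -- fixed step size
    """
    y = np.dot(w, x_buf)      # output signal y(n)
    e = d - y                 # error signal e(n)
    w = w + mu * e * x_buf    # coefficient update
    return w, y, e

# Demo: identify an "unknown" 4-tap FIR system from noise-free data.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])   # unknown system, chosen for the demo
w = np.zeros(4)
x = rng.standard_normal(2000)
for n in range(4, len(x)):
    x_buf = x[n - 4:n][::-1]          # newest sample first
    d = np.dot(h, x_buf)              # desired signal from the unknown system
    w, _, _ = lms_step(w, x_buf, d, mu=0.05)
# w now approximates h
```

With noise-free data and a white input, the coefficients converge to the unknown system's impulse response.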
The normalized LMS (NLMS) algorithm is a modified form of the standard LMS algorithm. The NLMS algorithm updates the coefficients of an adaptive filter by using the following equation:

w(n + 1) = w(n) + (μ / ||x(n)||^2) e(n)x(n)

You also can rewrite the above equation to the following equation:

w(n + 1) = w(n) + μ(n)e(n)x(n)

where μ(n) = μ / ||x(n)||^2. In the previous equation, the NLMS algorithm becomes the same as the standard LMS algorithm except that the NLMS algorithm has a time-varying step size μ(n). This step size can improve the convergence speed of the adaptive filter.
Use the AFT Create FIR Normalized LMS VI to create an adaptive filter with the NLMS algorithm.
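A NumPy sketch of the NLMS update follows. The names are illustrative, and the small constant eps, a common guard against division by zero, is an addition of this sketch.

```python
import numpy as np

def nlms_step(w, x_buf, d, mu, eps=1e-8):
    """One NLMS iteration. Dividing the step size by the input power
    makes the effective step size mu(n) time-varying, so convergence
    does not depend on the scale of the input signal."""
    y = np.dot(w, x_buf)
    e = d - y
    mu_n = mu / (eps + np.dot(x_buf, x_buf))   # time-varying step size mu(n)
    w = w + mu_n * e * x_buf
    return w, y, e

# Demo: identify a 2-tap system from a strongly scaled input.
# A fixed-step LMS with this mu would diverge on input of this power.
rng = np.random.default_rng(1)
h = np.array([1.0, -0.5])
w = np.zeros(2)
x = 100.0 * rng.standard_normal(3000)
for n in range(2, len(x)):
    x_buf = x[n - 2:n][::-1]
    w, _, _ = nlms_step(w, x_buf, np.dot(h, x_buf), mu=0.5)
# w now approximates h despite the input scaling
```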
The cost function of the leaky LMS algorithm is defined by the following equation:

J(n) = e^2(n) + α||w(n)||^2

where α is the leaky factor with a range of (0, 0.1). Because of the presence of α, the cost function of the leaky LMS algorithm differs from that of the standard LMS algorithm. The leaky LMS algorithm mitigates the coefficient overflow problem because its cost function accounts for both e^2(n) and the filter coefficients. The leaky LMS algorithm updates the coefficients of an adaptive filter by using the following equation:

w(n + 1) = (1 − μα)w(n) + μe(n)x(n)

If α = 0, the leaky LMS algorithm becomes the same as the standard LMS algorithm. A large leaky factor results in a large steady-state error.
Use the AFT Create FIR LMS VI and specify an appropriate value for the leakage parameter to create an adaptive filter with the leaky LMS algorithm.
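In NumPy, the leaky update differs from the standard one only in the (1 − μα) factor that scales the old coefficients. This is a sketch with illustrative names:

```python
import numpy as np

def leaky_lms_step(w, x_buf, d, mu, alpha):
    """One leaky LMS iteration. The factor (1 - mu * alpha) pulls the
    coefficients toward zero on every update, which limits coefficient
    growth at the cost of a larger steady-state error."""
    e = d - np.dot(w, x_buf)
    w = (1.0 - mu * alpha) * w + mu * e * x_buf
    return w, e

# With alpha = 0 the update reduces to the standard LMS update.
w0 = np.array([0.2, -0.1])
x_buf = np.array([1.0, 2.0])
d, mu = 0.5, 0.1
w_leaky, e = leaky_lms_step(w0, x_buf, d, mu, alpha=0.0)
w_standard = w0 + mu * (d - np.dot(w0, x_buf)) * x_buf
# w_leaky equals w_standard
```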
The normalized leaky LMS algorithm is a modified form of the leaky LMS algorithm. This algorithm updates the coefficients of an adaptive filter by using the following equation:

w(n + 1) = (1 − μ(n)α)w(n) + μ(n)e(n)x(n)

where μ(n) = μ / ||x(n)||^2 is the same time-varying step size that the NLMS algorithm uses.
Use the AFT Create FIR Normalized LMS VI and specify an appropriate value for the leakage parameter to create an adaptive filter with the normalized leaky LMS algorithm.
Some adaptive filter applications require you to implement adaptive filter algorithms on hardware targets, such as digital signal processing (DSP) devices, FPGA targets, and application-specific integrated circuits (ASICs). These targets require a simplified version of the standard LMS algorithm. The sign function, as defined by the following equation, can simplify the standard LMS algorithm:

sgn(x) = 1 if x > 0, 0 if x = 0, −1 if x < 0
Applying the sign function to the error signal, the input signal, or both in the standard LMS algorithm returns the following three types of sign LMS algorithms: the sign-error LMS algorithm, the sign-data LMS algorithm, and the sign-sign LMS algorithm.
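In standard notation (a reconstruction following the usual textbook formulation; the symbols match the standard LMS update), the three variants apply sgn(·) to the error term, the data term, or both:

```latex
% Sign-error (sign) LMS: apply sgn to the error signal
w(n+1) = w(n) + \mu \,\operatorname{sgn}[e(n)]\, x(n)

% Sign-data LMS: apply sgn to the input signal
w(n+1) = w(n) + \mu \, e(n)\, \operatorname{sgn}[x(n)]

% Sign-sign LMS: apply sgn to both
w(n+1) = w(n) + \mu \,\operatorname{sgn}[e(n)]\, \operatorname{sgn}[x(n)]
```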
The sign LMS algorithms involve fewer multiplication operations than other algorithms. When the step size μ equals a power of 2, the sign LMS algorithms can replace the multiplication operations with shift operations. In this situation, these algorithms have only shift and addition operations. Compared to the standard LMS algorithm, the sign LMS algorithms have a slower convergence speed and a greater steady-state error.
Use the AFT Create FIR Sign LMS VI to create an adaptive filter with the sign LMS algorithm.
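A sketch of the sign-error variant in NumPy (illustrative names). Because only the sign of the error enters the update, any two errors of equal sign produce the same update, and on fixed-point hardware a power-of-2 step size turns the remaining multiplications into shifts:

```python
import numpy as np

def sign_lms_step(w, x_buf, d, mu):
    """One sign-error LMS iteration: only the sign of e(n) is used."""
    e = d - np.dot(w, x_buf)
    w = w + mu * np.sign(e) * x_buf
    return w, e

# Two desired samples that yield errors of the same sign produce
# identical coefficient updates.
w0 = np.zeros(2)
x_buf = np.array([1.0, 2.0])
w_a, _ = sign_lms_step(w0, x_buf, d=1.0, mu=0.01)    # e = 1.0  (positive)
w_b, _ = sign_lms_step(w0, x_buf, d=10.0, mu=0.01)   # e = 10.0 (positive)
# w_a equals w_b
```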
Some adaptive filter applications, such as adaptive echo cancellation and adaptive noise cancellation, require adaptive filters with a large filter length. If you apply the standard LMS algorithm to such an adaptive filter, the filtering and coefficient-updating process might take a long time to complete. This delay can cause problems in these applications because the adaptive filter must work in real time to filter the input signals. In this situation, you can use the fast block LMS algorithm.
The fast block LMS algorithm uses the fast Fourier transform (FFT) to transform the input signal x(n) to the frequency domain. This algorithm also updates the filter coefficients in the frequency domain, which can save computational resources. The fast block LMS algorithm differs from the standard LMS algorithm in the following ways: it processes the input signal block by block rather than sample by sample, and it updates the filter coefficients once per block rather than once per sample.
The fast block LMS algorithm calculates the output signal and the error signal before updating the filter coefficients. The following diagram illustrates the steps that this algorithm completes to calculate these signals.
In the diagram, one symbol denotes an array in the time domain and another symbol denotes an array in the frequency domain.
In the previous figure, the fast block LMS algorithm completes the following steps to calculate the output and error signals.
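These steps amount to overlap-save filtering. The following NumPy sketch shows the output computation for a block length M equal to the filter length, which is the usual choice; the function and variable names are illustrative:

```python
import numpy as np

def fblms_output(W, x_old, x_new):
    """Output computation of the fast block LMS algorithm (overlap-save).
    W     -- frequency-domain coefficients, length 2M (FFT of the M
             time-domain coefficients padded with M zeros)
    x_old -- previous input block (M samples)
    x_new -- current input block (M samples)
    Returns the M output samples for the current block."""
    X = np.fft.fft(np.concatenate([x_old, x_new]))   # 2M-point FFT
    y = np.fft.ifft(X * W).real                      # circular convolution
    return y[len(x_old):]                            # keep the last M samples

# The block output matches direct time-domain FIR filtering.
M = 8
rng = np.random.default_rng(2)
w = rng.standard_normal(M)
W = np.fft.fft(np.concatenate([w, np.zeros(M)]))
x = rng.standard_normal(2 * M)
y_fast = fblms_output(W, x[:M], x[M:])
y_direct = np.convolve(x, w)[M:2 * M]                # same samples, directly
```

The error block is then obtained sample by sample in the time domain as the difference between the desired block and this output block.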
After calculating the output and error signals, the fast block LMS algorithm updates the filter coefficients. The following diagram shows the steps that this algorithm completes to update the filter coefficients.
In the diagram, a separate symbol denotes a scalar quantity.
As the previous figure shows, the fast block LMS algorithm completes the following steps to update the filter coefficients.
You can implement the fast block LMS algorithm with two methods: constrained and unconstrained. The previous figure illustrates the constrained method. The signal-flow graph inside the dashed line is a gradient constraint. Refer to the book Adaptive Filter Theory for more information about gradient constraints. If you do not use the gradient constraint when you implement the fast block LMS algorithm, the implementation becomes an unconstrained method. Compared to the constrained method, the unconstrained method saves one FFT and one IFFT. However, unconstrained adaptive filters have a lower convergence speed and a greater steady-state error than constrained adaptive filters.
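The gradient constraint itself can be sketched as follows, under the usual formulation with M filter taps and 2M-point FFTs (names are illustrative):

```python
import numpy as np

def constrain_gradient(G):
    """Gradient constraint of the constrained fast block LMS method:
    bring the frequency-domain gradient back to the time domain, zero
    the last M points (the circular-convolution wrap-around), and
    transform forward again. Omitting this step gives the unconstrained
    method and saves this FFT/IFFT pair per block."""
    g = np.fft.ifft(G)
    M = len(g) // 2
    g[M:] = 0.0                     # discard the wrap-around half
    return np.fft.fft(g)

# A gradient that already corresponds to only M time-domain points
# passes through the constraint unchanged.
M = 4
g_time = np.concatenate([np.arange(1.0, M + 1), np.zeros(M)])
G = np.fft.fft(g_time)
G_c = constrain_gradient(G)
# G_c equals G
```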
Use the AFT Create FIR Fast Block LMS VI to create an adaptive filter with the fast block LMS algorithm. To use the constrained method, set constrained? to TRUE.