
Least Mean Squares (LMS) Algorithms (Adaptive Filter Toolkit)

LabVIEW 2013 Adaptive Filter Toolkit Help

Edition Date: June 2013

Part Number: 372357B-01


The least mean squares (LMS) algorithms adjust the filter coefficients to minimize the cost function. Compared to recursive least squares (RLS) algorithms, the LMS algorithms do not involve any matrix operations. Therefore, the LMS algorithms require fewer computational resources and memory than the RLS algorithms. Implementing the LMS algorithms also is less complicated than implementing the RLS algorithms. However, the eigenvalue spread of the input correlation matrix, or the correlation matrix of the input signal, might affect the convergence speed of the resulting adaptive filter.

Standard LMS

The standard LMS algorithm performs the following operations to update the coefficients of an adaptive filter:

  1. Calculates the output signal y(n) from the adaptive filter by using the following equation: y(n) = wᵀ(n)·u(n).
  2. Calculates the error signal e(n) by using the following equation: e(n) = d(n) − y(n).
  3. Updates the filter coefficients by using the following equation:

    w(n + 1) = w(n) + μ·e(n)·u(n)

    where μ is the step size of the adaptive filter, w(n) is the filter coefficients vector, and u(n) is the filter input vector.
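
These three steps translate directly into code. The following is a minimal NumPy sketch of one standard LMS iteration; the function and variable names are illustrative and are not part of the toolkit.

    import numpy as np

    def lms_step(w, u, d, mu):
        """One standard LMS iteration.

        w  -- filter coefficients vector w(n), shape (L,)
        u  -- filter input vector u(n), the L most recent input samples
        d  -- desired response d(n), scalar
        mu -- step size
        """
        y = np.dot(w, u)      # y(n) = w^T(n) u(n)
        e = d - y             # e(n) = d(n) - y(n)
        w = w + mu * e * u    # w(n+1) = w(n) + mu e(n) u(n)
        return y, e, w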

Use the AFT Create FIR LMS VI to create an adaptive filter with the standard LMS algorithm.

Normalized LMS

The normalized LMS (NLMS) algorithm is a modified form of the standard LMS algorithm. The NLMS algorithm updates the coefficients of an adaptive filter by using the following equation:

w(n + 1) = w(n) + (μ / (uᵀ(n)·u(n)))·e(n)·u(n)

You also can rewrite the above equation as the following equation:

w(n + 1) = w(n) + μ(n)·e(n)·u(n)

where μ(n) = μ / (uᵀ(n)·u(n)). In this form, the NLMS algorithm is the same as the standard LMS algorithm except that the NLMS algorithm has a time-varying step size μ(n). This step size can improve the convergence speed of the adaptive filter.
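
In code, the only change from the standard LMS iteration is the time-varying step size. In the sketch below, the small eps term is an added assumption that guards against division by zero when u(n) is all zeroes; it is not part of the equation above.

    import numpy as np

    def nlms_step(w, u, d, mu, eps=1e-12):
        """One NLMS iteration with illustrative names."""
        y = np.dot(w, u)
        e = d - y
        mu_n = mu / (np.dot(u, u) + eps)   # time-varying step size mu(n)
        w = w + mu_n * e * u
        return y, e, w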

Use the AFT Create FIR Normalized LMS VI to create an adaptive filter with the NLMS algorithm.

Leaky LMS

The cost function of the leaky LMS algorithm is defined by the following equation:

J(n) = e²(n) + α·wᵀ(n)·w(n)

where α is the leaky factor with a range of (0, 0.1). Because of the presence of α, the cost function of the leaky LMS algorithm is different from that of the standard LMS algorithm. The leaky LMS algorithm mitigates the coefficient overflow problem because the cost function of this algorithm accounts for both e²(n) and the filter coefficients. The leaky LMS algorithm updates the coefficients of an adaptive filter by using the following equation:

w(n + 1) = (1 − μα)·w(n) + μ·e(n)·u(n)
If α = 0, the leaky LMS algorithm becomes the same as the standard LMS algorithm. A large leaky factor results in a large steady state error.
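
As a sketch, the leaky LMS update differs from the standard LMS update only in the (1 − μα) factor that continually shrinks the coefficients toward zero; the names below are illustrative.

    import numpy as np

    def leaky_lms_step(w, u, d, mu, alpha):
        """One leaky LMS iteration; alpha is the leaky factor."""
        y = np.dot(w, u)
        e = d - y
        # The leak factor (1 - mu*alpha) pulls w toward zero;
        # alpha = 0 reduces this to the standard LMS update.
        w = (1.0 - mu * alpha) * w + mu * e * u
        return y, e, w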

Use the AFT Create FIR LMS VI and specify an appropriate value for the leakage parameter to create an adaptive filter with the leaky LMS algorithm.

Normalized Leaky LMS

The normalized leaky LMS algorithm is a modified form of the leaky LMS algorithm. This algorithm updates the coefficients of an adaptive filter by using the following equation:

w(n + 1) = (1 − μ(n)α)·w(n) + μ(n)·e(n)·u(n)

where μ(n) = μ / (uᵀ(n)·u(n)).
Use the AFT Create FIR Normalized LMS VI and specify an appropriate value for the leakage parameter to create an adaptive filter with the normalized leaky LMS algorithm.

Sign LMS

Some adaptive filter applications require you to implement adaptive filter algorithms on hardware targets, such as digital signal processing (DSP) devices, FPGA targets, and application-specific integrated circuits (ASICs). These targets require a simplified version of the standard LMS algorithm. The sign function, as defined by the following equation, can simplify the standard LMS algorithm.

sgn(x) = 1 if x > 0; 0 if x = 0; −1 if x < 0
Applying the sign function to the standard LMS algorithm yields the following three types of sign LMS algorithms; a code sketch of all three follows the list.

  • Sign-error LMS algorithm—Applies the sign function to the error signal e(n). This algorithm updates the coefficients of an adaptive filter using the following equation: w(n + 1) = w(n) + μ·sgn(e(n))·u(n). Notice that when e(n) is zero, this algorithm does not involve multiplication operations. When e(n) is not zero, this algorithm involves only one multiplication operation.
  • Sign-data LMS algorithm—Applies the sign function to the input signal vector u(n). This algorithm updates the coefficients of an adaptive filter using the following equation: w(n + 1) = w(n) + μ·e(n)·sgn(u(n)). Notice that when u(n) is zero, this algorithm does not involve multiplication operations. When u(n) is not zero, this algorithm involves only one multiplication operation.
  • Sign-sign LMS algorithm—Applies the sign function to both e(n) and u(n). This algorithm updates the coefficients of an adaptive filter using the following equation: w(n + 1) = w(n) + μ·sgn(e(n))·sgn(u(n)). Notice that when either e(n) or u(n) is zero, this algorithm does not involve multiplication operations. When neither e(n) nor u(n) is zero, this algorithm involves only one multiplication operation.
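
The following sketch covers all three variants in one illustrative function. NumPy's np.sign matches the sgn definition above, including sgn(0) = 0, and applies elementwise to the input vector.

    import numpy as np

    def sign_lms_step(w, u, d, mu, variant="sign-error"):
        """One sign LMS iteration; variant selects the algorithm."""
        y = np.dot(w, u)
        e = d - y
        if variant == "sign-error":
            w = w + mu * np.sign(e) * u
        elif variant == "sign-data":
            w = w + mu * e * np.sign(u)
        else:  # "sign-sign"
            w = w + mu * np.sign(e) * np.sign(u)
        return y, e, w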

The sign LMS algorithms involve fewer multiplication operations than the other LMS algorithms. When the step size μ equals a power of 2, the sign LMS algorithms can replace the multiplication operations with shift operations. In this situation, these algorithms involve only shift and addition operations. Compared to the standard LMS algorithm, the sign LMS algorithms have a slower convergence speed and a greater steady state error.

Use the AFT Create FIR Sign LMS VI to create an adaptive filter with the sign LMS algorithm.

Fast Block LMS

Some adaptive filter applications, such as adaptive echo cancellation and adaptive noise cancellation, require adaptive filters with a large filter length. If you apply the standard LMS algorithm to such a filter, the algorithm might take a long time to complete the filtering and coefficient-updating process. This delay can cause problems in these applications because the adaptive filter must work in real time to filter the input signals. In this situation, you can use the fast block LMS algorithm.

The fast block LMS algorithm uses the fast Fourier transform (FFT) to transform the input signal x(n) to the frequency domain. This algorithm also updates the filter coefficients in the frequency domain. Updating the filter coefficients in the frequency domain can save computational resources. The fast block LMS algorithm differs from the standard LMS algorithm in the following ways:

  • The fast block LMS algorithm updates the coefficients of an adaptive filter block by block. The block size is exactly the same as the filter length. However, the standard LMS algorithm updates the filter coefficients sample by sample.
  • The fast block LMS algorithm requires fewer multiplications than the standard LMS algorithm. If both the filter length and block size are n, the standard LMS algorithm requires n(2n + 1) multiplications, whereas the fast block LMS algorithm requires only 10n·log₂(n) + 26n multiplications. If n = 1024, the fast block LMS algorithm can execute 16 times faster than the standard LMS algorithm.
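
To check the n = 1024 figure: the standard LMS algorithm needs n(2n + 1) = 1024 × 2049 = 2,098,176 multiplications per block, whereas the fast block LMS algorithm needs 10 × 1024 × log₂(1024) + 26 × 1024 = 102,400 + 26,624 = 129,024 multiplications, a ratio of roughly 16.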

The fast block LMS algorithm calculates the output signal and the error signal before updating the filter coefficients. The following diagram illustrates the steps that this algorithm completes to calculate these signals.

(Figure: signal-flow diagram of the output and error computation. The legend distinguishes two symbols: an array in the time domain and an array in the frequency domain.)

As shown in the previous figure, the fast block LMS algorithm completes the following steps to calculate the output and error signals.

  1. Concatenates the current input signal block with the previous block.
  2. Performs an FFT to transform the concatenated input signal blocks from the time domain to the frequency domain.
  3. Multiplies the frequency-domain input signal blocks by the frequency-domain filter coefficients vector.
  4. Performs an inverse FFT (IFFT) on the multiplication result.
  5. Retrieves the last block of the IFFT result as the output signal vector y(n).
  6. Calculates the error signal vector e(n) by comparing the desired signal vector d(n) with y(n).

After calculating the output and error signals, the fast block LMS algorithm updates the filter coefficients. The following diagram shows the steps that this algorithm completes to update the filter coefficients.

(Figure: signal-flow diagram of the coefficient update. The legend adds a symbol for a scalar to the time-domain and frequency-domain array symbols above.)

As shown in the previous figure, the fast block LMS algorithm completes the following steps to update the filter coefficients; a code sketch of the complete algorithm follows the list.

  1. Inserts zeroes before the error signal vector e(n). This step ensures that the error signal vector has the same length as the concatenated input signal blocks.
  2. Performs an FFT on the error signal blocks.
  3. Multiplies the results by the complex conjugate of the FFT result of the input signal blocks.
  4. Performs an IFFT on the multiplication result.
  5. Sets the values of the last block of the IFFT result to zeroes and then performs an FFT on the IFFT result.
  6. Multiplies the step size μ by the FFT result.
  7. Adds the frequency-domain filter coefficients vector to the multiplication result. This step updates the filter coefficients.
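
The two step lists above combine into a single per-block routine. The following NumPy sketch is an illustrative overlap-save implementation that assumes the block size equals the filter length M and uses an FFT size of 2M; the constrained flag anticipates the next section. None of the names below come from the toolkit.

    import numpy as np

    def fast_block_lms(x, d, M, mu, constrained=True):
        """Fast block LMS sketch: x is the input signal, d the desired
        signal, M the filter length (= block size), mu the step size."""
        n_blocks = len(x) // M
        W = np.zeros(2 * M, dtype=complex)   # frequency-domain coefficients
        prev = np.zeros(M)                   # previous input block
        y = np.zeros(n_blocks * M)
        e = np.zeros(n_blocks * M)
        for k in range(n_blocks):
            cur = x[k * M:(k + 1) * M]
            # Output/error steps 1-2: concatenate blocks, FFT
            U = np.fft.fft(np.concatenate([prev, cur]))
            # Steps 3-5: multiply by W, IFFT, keep the last block
            y_blk = np.fft.ifft(U * W)[M:].real
            # Step 6: error against the desired signal
            e_blk = d[k * M:(k + 1) * M] - y_blk
            # Update steps 1-2: zero-pad the error block in front, FFT
            E = np.fft.fft(np.concatenate([np.zeros(M), e_blk]))
            # Update steps 3-4: multiply by conj(U), IFFT
            phi = np.fft.ifft(np.conj(U) * E)
            if constrained:
                # Update step 5 (gradient constraint): zero the last
                # block, then transform back to the frequency domain
                G = np.fft.fft(np.concatenate([phi[:M].real, np.zeros(M)]))
            else:
                G = np.conj(U) * E           # saves one FFT and one IFFT
            # Update steps 6-7: scale by the step size, accumulate
            W = W + mu * G
            y[k * M:(k + 1) * M] = y_blk
            e[k * M:(k + 1) * M] = e_blk
            prev = cur
        return y, e, np.fft.ifft(W)[:M].real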

Constrained and Unconstrained Implementations

You can implement the fast block LMS algorithm with two methods: constrained and unconstrained. The previous figure illustrates the constrained method. The signal-flow graph inside the dashed line is a gradient constraint. Refer to the book Adaptive Filter Theory for more information about gradient constraints. If you do not use the gradient constraint when you implement the fast block LMS algorithm, the implementation method becomes an unconstrained method. Compared to the constrained method, the unconstrained method saves one FFT and one IFFT. However, unconstrained adaptive filters have a lower convergence speed and a greater steady state error than constrained adaptive filters.

Use the AFT Create FIR Fast Block LMS VI to create an adaptive filter with the fast block LMS algorithm. To use the constrained method, set constrained? to TRUE.


 
