Adaptive Filters: Algorithms (Part 2) Gerhard Schmidt Christian-Albrechts-Universität zu Kiel Faculty of Engineering Electrical Engineering and Information Technology Digital Signal Processing and System Theory Slide 1
Contents of the Lecture Today: Adaptive Algorithms: Introductory Remarks Recursive Least Squares (RLS) Algorithm Least Mean Square Algorithm (LMS Algorithm) Part 1 Least Mean Square Algorithm (LMS Algorithm) Part 2 Affine Projection Algorithm (AP Algorithm) Slide 2
Basics Part 1 Optimization criterion: Minimizing the mean square error Assumptions: Real, stationary random processes Structure: Unknown system Adaptive filter Slide 3
Derivation Part 2 What we have so far: Resolving it for the coefficient vector \(\hat{\boldsymbol h}(n)\) leads to: With the introduction of a step size \(\mu\), the following adaptation rule can be formulated: Method according to Newton Slide 4
Derivation Part 3 Method according to Newton: \(\hat{\boldsymbol h}(n+1) = \hat{\boldsymbol h}(n) + \mu\,\boldsymbol R_{xx}^{-1}\,\mathrm{E}\{\boldsymbol x(n)\,e(n)\}\) Method of steepest descent: \(\hat{\boldsymbol h}(n+1) = \hat{\boldsymbol h}(n) + \mu\,\mathrm{E}\{\boldsymbol x(n)\,e(n)\}\) For practical applications the expectation is replaced by its instantaneous value. This leads to the so-called least mean square (LMS) algorithm: \(\hat{\boldsymbol h}(n+1) = \hat{\boldsymbol h}(n) + \mu\,\boldsymbol x(n)\,e(n)\) Slide 5
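Because the slide formulas are not reproduced in this transcript, the following is a minimal NumPy sketch of the plain LMS algorithm in a system-identification setup; all names (N, mu, num_samples, h, h_hat) are illustrative assumptions, not taken from the lecture.

```python
# Minimal LMS sketch (illustration only): an adaptive filter h_hat
# identifies an "unknown" FIR system h from its input and output.
import numpy as np

rng = np.random.default_rng(0)

N = 32                      # filter length (assumed)
mu = 0.01                   # fixed step size, must satisfy the upper bound
num_samples = 5000

h = rng.standard_normal(N)  # unknown system
h_hat = np.zeros(N)         # adaptive filter, initialized with zeros

x = rng.standard_normal(num_samples)   # white, zero-mean excitation
d = np.convolve(x, h)[:num_samples]    # desired signal = system output

for n in range(N, num_samples):
    x_vec = x[n:n - N:-1]              # signal vector [x(n), ..., x(n-N+1)]
    e = d[n] - h_hat @ x_vec           # a priori error e(n)
    h_hat += mu * e * x_vec            # LMS update: h_hat + mu * e(n) * x(n)

print("final system distance:", np.sum((h - h_hat) ** 2))
```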
Upper Bound for the Step Size A priori error: A posteriori error: Consequently: For large filter lengths \(N\) and zero-mean input processes the following approximation is valid: Slide 6
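The step-size argument sketched on this slide can be written out as follows; the notation (\(e(n|n+1)\) for the a posteriori error, \(\sigma_x^2\) for the input power) is an assumption of this sketch, not copied from the slides.

```latex
% A priori and a posteriori errors of the LMS update
% \hat{h}(n+1) = \hat{h}(n) + \mu e(n) x(n):
\begin{align}
e(n)     &= d(n) - \hat{\boldsymbol h}^{\mathrm T}(n)\,\boldsymbol x(n),\\
e(n|n+1) &= d(n) - \hat{\boldsymbol h}^{\mathrm T}(n+1)\,\boldsymbol x(n)
          = e(n)\bigl(1 - \mu\,\lVert\boldsymbol x(n)\rVert^2\bigr).
\end{align}
% Requiring |e(n|n+1)| < |e(n)| leads to
\begin{equation}
0 < \mu < \frac{2}{\lVert\boldsymbol x(n)\rVert^2}
  \approx \frac{2}{N\,\sigma_x^2}
  \qquad\text{for large $N$ and zero-mean input.}
\end{equation}
```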
System Distance How the LMS adaptation changes the system distance \(\lVert \boldsymbol h - \hat{\boldsymbol h}(n)\rVert^2\): Target Old system distance New system distance Current system error vector Slide 7
Sign Algorithm Update rule: \(\hat{\boldsymbol h}(n+1) = \hat{\boldsymbol h}(n) + \mu\,\operatorname{sgn}\{e(n)\}\,\boldsymbol x(n)\) with \(\operatorname{sgn}\{e\} = 1\) for \(e > 0\), \(0\) for \(e = 0\), and \(-1\) for \(e < 0\). An early algorithm with very low complexity (even used today in applications that operate at very high frequencies). It can be implemented without any multiplications (the step-size multiplication can be realized as a bit shift). Slide 8
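A sketch of the sign algorithm in the same setup as the LMS example above; the power-of-two step size illustrates the bit-shift remark. Variable names are again illustrative.

```python
# Sign-algorithm sketch: only the sign of the error enters the update,
# so with a power-of-two step size no multiplication is needed
# (in fixed-point hardware the scaling becomes a bit shift).
import numpy as np

rng = np.random.default_rng(1)

N = 32
mu = 2.0 ** -8              # power-of-two step size
num_samples = 20000

h = rng.standard_normal(N)
h_hat = np.zeros(N)

x = rng.standard_normal(num_samples)
d = np.convolve(x, h)[:num_samples]

for n in range(N, num_samples):
    x_vec = x[n:n - N:-1]
    e = d[n] - h_hat @ x_vec
    h_hat += mu * np.sign(e) * x_vec   # update uses only sgn{e(n)}

print("final system distance:", np.sum((h - h_hat) ** 2))
```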
Analysis of the Mean Value Expectation of the filter coefficients: If the procedure converges, the coefficients reach stationary end values, i.e. \(\mathrm{E}\{\hat{\boldsymbol h}(n+1)\} = \mathrm{E}\{\hat{\boldsymbol h}(n)\}\). So we have orthogonality: \(\mathrm{E}\{\boldsymbol x(n)\,e(n)\} = \boldsymbol 0\), i.e. the adaptation converges in the mean to the Wiener solution. Slide 9
Convergence of the Expectations Part 1 Inserting the equation for the error into the LMS update equation gives: Expectation of the filter coefficients: Slide 10
Convergence of the Expectations Part 2 Expectation of the filter coefficients: Independence assumption: Difference between the mean coefficient vector and the Wiener solution: Convergence of the means requires: Slide 11
Convergence of the Expectations Part 3 Recursion: Convergence requires that the matrix \(\boldsymbol I - \mu\,\boldsymbol R_{xx}\) is a contraction; the remaining driving term is \(\boldsymbol 0\) because of the Wiener solution. Slide 12
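Written out, the mean-value recursion referred to on this slide has the following standard form (notation assumed):

```latex
% Mean-value recursion of the LMS coefficients under the
% independence assumption:
\begin{equation}
\mathrm{E}\{\hat{\boldsymbol h}(n+1)\} - \hat{\boldsymbol h}_{\mathrm{opt}}
 = \bigl(\boldsymbol I - \mu\,\boldsymbol R_{xx}\bigr)
   \bigl(\mathrm{E}\{\hat{\boldsymbol h}(n)\} - \hat{\boldsymbol h}_{\mathrm{opt}}\bigr).
\end{equation}
% The driving term E{x(n) e_opt(n)} vanishes because h_opt is the
% Wiener solution; convergence of the means therefore requires that
% I - mu R_xx is a contraction, i.e. |1 - mu lambda_i| < 1 for all i.
```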
Convergence of the Expectations Part 4 Convergence requires the contraction of the matrix (result from the last slide): Case 1: White input signal, \(\boldsymbol R_{xx} = \sigma_x^2\,\boldsymbol I\). Condition for the convergence of the mean values: \(0 < \mu < 2/\sigma_x^2\). For comparison, the condition for the convergence of the filter coefficients: \(0 < \mu < 2/(N\sigma_x^2)\). Slide 13
Convergence of the Expectations Part 5 Case 2: Colored input signal. Assumptions: Slide 14
Convergence of the Expectations Part 6 Putting the following results together leads to the following notation for the autocorrelation matrix (eigenvalue decomposition \(\boldsymbol R_{xx} = \boldsymbol Q\,\boldsymbol\Lambda\,\boldsymbol Q^{\mathrm T}\)): Slide 15
Convergence of the Expectations Part 7 Recursion: in the transformed (eigenvector) coordinates each mode converges independently with the factor \(1 - \mu\lambda_i\). Slide 16
Condition for Convergence Part 1 Previous result: Condition for the convergence of the expectations of the filter coefficients: \(|1 - \mu\lambda_i| < 1\) for all \(i\), i.e. \(0 < \mu < 2/\lambda_{\max}\). Slide 17
Condition for Convergence Part 2 A (very rough) estimate for the largest eigenvalue: Consequently: Slide 18
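The trace-based estimate mentioned on this slide reads, in standard notation (assumed here):

```latex
% Rough bound: the largest eigenvalue is at most the trace of R_xx,
\begin{equation}
\lambda_{\max} \le \sum_{i=0}^{N-1}\lambda_i
  = \operatorname{tr}\{\boldsymbol R_{xx}\}
  = N\, r_{xx}(0),
\end{equation}
% so the eigenvalue condition 0 < mu < 2/lambda_max is certainly
% fulfilled by the easily measurable condition
\begin{equation}
0 < \mu < \frac{2}{N\, r_{xx}(0)}.
\end{equation}
```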
Eigenvalues and Power Spectral Density Part 1 Relation between eigenvalues and power spectral density: Signal vector: \(\boldsymbol x(n)\) Autocorrelation matrix: \(\boldsymbol R_{xx} = \mathrm{E}\{\boldsymbol x(n)\,\boldsymbol x^{\mathrm T}(n)\}\) Fourier transform: \(S_{xx}(e^{\mathrm j\Omega}) = \sum_{\kappa} r_{xx}(\kappa)\,e^{-\mathrm j\Omega\kappa}\) Equation for eigenvalues: \(\boldsymbol R_{xx}\,\boldsymbol q_i = \lambda_i\,\boldsymbol q_i\) Eigenvalue: \(\lambda_i = \boldsymbol q_i^{\mathrm T}\boldsymbol R_{xx}\,\boldsymbol q_i\) Slide 19
Eigenvalues and Power Spectral Density Part 2 Computing lower and upper bounds for the eigenvalues, part 1: previous result; exchanging the order of the sums and the integral and splitting the exponential term; lower bound; upper bound. Slide 20
Eigenvalues and Power Spectral Density Part 3 Computing lower and upper bounds for the eigenvalues, part 2: exchanging again the order of the sums and the integral; solving the integral first; inserting the result and using the orthonormality properties of the eigenvectors. Slide 21
Eigenvalues and Power Spectral Density Part 4 Computing lower and upper bounds for the eigenvalues, part 3: exchanging again the order of the sums and the integral; inserting the result from above to obtain the upper bound; inserting the result from above to obtain the lower bound; finally we get: Slide 22
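The final result of this bounding argument is the classical sandwich between the extrema of the power spectral density (standard notation assumed):

```latex
% Every eigenvalue of the autocorrelation matrix is bounded by the
% minimum and maximum of the power spectral density:
\begin{equation}
\min_{\Omega} S_{xx}\!\left(e^{\,\mathrm j\Omega}\right)
 \;\le\; \lambda_i \;\le\;
\max_{\Omega} S_{xx}\!\left(e^{\,\mathrm j\Omega}\right),
\qquad i \in \{0,\dots,N-1\}.
\end{equation}
% Consequence: strongly colored input (large spectral dynamics) causes a
% large eigenvalue spread and thus slow convergence of the slowest modes.
```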
Geometrical Explanation of Convergence Part 1 Structure: Unknown system Adaptive filter System: \(\boldsymbol h\) System output: \(d(n) = \boldsymbol h^{\mathrm T}\boldsymbol x(n)\) Slide 23
Geometrical Explanation of Convergence Part 2 Error signal: \(e(n) = d(n) - \hat{\boldsymbol h}^{\mathrm T}(n)\,\boldsymbol x(n)\) Difference vector: \(\boldsymbol h_\Delta(n) = \boldsymbol h - \hat{\boldsymbol h}(n)\) LMS algorithm: \(\hat{\boldsymbol h}(n+1) = \hat{\boldsymbol h}(n) + \mu\,e(n)\,\boldsymbol x(n)\) Slide 24
Geometrical Explanation of Convergence Part 3 The difference vector is split into two components, one parallel and one orthogonal to the excitation vector \(\boldsymbol x(n)\): For the parallel component it holds: With: Slide 25
Geometrical Explanation of Convergence Part 4 Contraction of the system error vector: starting from the result obtained two slides before; splitting the system error vector; using the error signal and the fact that the orthogonal component is unaffected by the update; this results in: Slide 26
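A compact version of the geometrical contraction argument, assuming the noise-free setup of the previous slides:

```latex
% LMS update expressed in the system error vector
% h_Delta(n) = h - h_hat(n), noise-free case:
\begin{align}
\boldsymbol h_{\Delta}(n+1)
  &= \boldsymbol h_{\Delta}(n) - \mu\, e(n)\,\boldsymbol x(n),&
e(n) &= \boldsymbol h_{\Delta}^{\mathrm T}(n)\,\boldsymbol x(n).
\end{align}
% The correction is parallel to x(n): only the parallel component of the
% system error vector is contracted, the orthogonal one is untouched:
\begin{align}
\boldsymbol h_{\Delta,\parallel}(n+1)
  &= \bigl(1 - \mu\,\lVert\boldsymbol x(n)\rVert^2\bigr)\,
     \boldsymbol h_{\Delta,\parallel}(n),&
\boldsymbol h_{\Delta,\perp}(n+1) &= \boldsymbol h_{\Delta,\perp}(n).
\end{align}
```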
NLMS Algorithm Part 1 LMS algorithm: \(\hat{\boldsymbol h}(n+1) = \hat{\boldsymbol h}(n) + \mu\,e(n)\,\boldsymbol x(n)\) Normalized LMS algorithm: \(\hat{\boldsymbol h}(n+1) = \hat{\boldsymbol h}(n) + \mu(n)\,e(n)\,\boldsymbol x(n)\) with a time-variant step size \(\mu(n)\). Unknown system Adaptive filter Slide 27
NLMS Algorithm Part 2 Adaptation (in general): A priori error: A posteriori error: A successful adaptation requires that the a posteriori error magnitude is smaller than the a priori error magnitude, or: Slide 28
NLMS Algorithm Part 3 Convergence condition: Inserting the update equation: Condition: Ansatz: Slide 29
NLMS Algorithm Part 4 Condition: Ansatz: Step-size requirement for the NLMS algorithm (after a few lines): \(0 < \alpha < 2\) For comparison, the LMS algorithm: \(0 < \mu < 2/\lVert\boldsymbol x(n)\rVert^2\) Slide 30
NLMS Algorithm Part 5 Ansatz: \(\mu(n) = \alpha / \lVert\boldsymbol x(n)\rVert^2\) Adaptation rule for the NLMS algorithm: \(\hat{\boldsymbol h}(n+1) = \hat{\boldsymbol h}(n) + \alpha\,\dfrac{e(n)\,\boldsymbol x(n)}{\lVert\boldsymbol x(n)\rVert^2}\) Slide 31
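A sketch of the NLMS update in the same setup as the LMS example above; the small regularization constant eps is an implementation detail added here, not part of the slide derivation.

```python
# NLMS sketch: the LMS loop with the step size normalized by the
# instantaneous input power ||x(n)||^2.
import numpy as np

rng = np.random.default_rng(2)

N = 32
alpha = 0.5                 # normalized step size, 0 < alpha < 2
eps = 1e-8                  # guards against division by (almost) zero
num_samples = 5000

h = rng.standard_normal(N)
h_hat = np.zeros(N)

x = rng.standard_normal(num_samples)
d = np.convolve(x, h)[:num_samples]

for n in range(N, num_samples):
    x_vec = x[n:n - N:-1]
    e = d[n] - h_hat @ x_vec
    h_hat += alpha * e * x_vec / (x_vec @ x_vec + eps)  # NLMS update

print("final system distance:", np.sum((h - h_hat) ** 2))
```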
Matlab-Demo: Speed of Convergence Slide 32
Convergence Examples Part 1 Setup: White noise: Slide 33
Convergence Examples Part 2 Setup: Colored noise: Slide 34
Convergence Examples Part 3 Setup: Speech: Slide 35
Contents of the Lecture Today Adaptive Algorithms: Introductory Remarks Recursive Least Squares (RLS) Algorithm Least Mean Square Algorithm (LMS Algorithm) Part 1 Least Mean Square Algorithm (LMS Algorithm) Part 2 Affine Projection Algorithm (AP Algorithm) Slide 36
Affine Projection Algorithm Basics Unknown system Signal vector: Filter vector: Filter output: Signal matrix: M describes the order of the procedure Slide 37
Affine Projection Algorithm Signal Matrix Definition of the signal matrix: \(\boldsymbol X(n) = \bigl[\boldsymbol x(n),\,\boldsymbol x(n-1),\,\dots,\,\boldsymbol x(n-M+1)\bigr]\) (an \(N \times M\) matrix containing the last \(M\) signal vectors). Slide 38
Affine Projection Algorithm Error Vector Part 1 Signal matrix: Desired signal vector: Filter output vector: A priori error vector: \(\boldsymbol e(n) = \boldsymbol d(n) - \boldsymbol X^{\mathrm T}(n)\,\hat{\boldsymbol h}(n)\) Adaptation rule: A posteriori error vector: \(\boldsymbol e(n|n+1) = \boldsymbol d(n) - \boldsymbol X^{\mathrm T}(n)\,\hat{\boldsymbol h}(n+1)\) Slide 39
Affine Projection Algorithm Error Vector Part 2 Requirement: the a posteriori error should be reduced compared with the a priori error in every component. Slide 40
Affine Projection Algorithm Ansatz Requirement: Ansatz: Step-size condition: \(0 < \mu < 2\) Slide 41
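Written out, the ansatz and the resulting step-size condition take the following standard form (notation assumed):

```latex
% Ansatz for the AP update:
\begin{equation}
\hat{\boldsymbol h}(n+1) = \hat{\boldsymbol h}(n)
 + \mu\,\boldsymbol X(n)
   \bigl(\boldsymbol X^{\mathrm T}(n)\,\boldsymbol X(n)\bigr)^{-1}
   \boldsymbol e(n).
\end{equation}
% Inserting it into the a posteriori error vector gives
\begin{equation}
\boldsymbol e(n|n+1)
 = \boldsymbol e(n)
 - \mu\,\boldsymbol X^{\mathrm T}(n)\,\boldsymbol X(n)
   \bigl(\boldsymbol X^{\mathrm T}(n)\,\boldsymbol X(n)\bigr)^{-1}
   \boldsymbol e(n)
 = (1-\mu)\,\boldsymbol e(n),
\end{equation}
% so a shrinking error requires 0 < mu < 2.
```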
Affine Projection Algorithm Geometrical Interpretation NLMS algorithm (projection onto a single hyperplane) AP algorithm (projection onto the intersection of the last \(M\) hyperplanes) Slide 42
Affine Projection Algorithm Regularization Non-regularized version of the AP algorithm: \(\hat{\boldsymbol h}(n+1) = \hat{\boldsymbol h}(n) + \mu\,\boldsymbol X(n)\bigl(\boldsymbol X^{\mathrm T}(n)\boldsymbol X(n)\bigr)^{-1}\boldsymbol e(n)\) Regularized version of the AP algorithm: \(\hat{\boldsymbol h}(n+1) = \hat{\boldsymbol h}(n) + \mu\,\boldsymbol X(n)\bigl(\boldsymbol X^{\mathrm T}(n)\boldsymbol X(n) + \delta\,\boldsymbol I\bigr)^{-1}\boldsymbol e(n)\) Slide 43
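A sketch of the regularized AP algorithm of order M, in the same system-identification setup as the LMS/NLMS sketches above; all parameter values (M, mu, delta) are illustrative.

```python
# Regularized affine projection sketch: the update reuses the last M
# input vectors via the N x M signal matrix X(n).
import numpy as np

rng = np.random.default_rng(3)

N, M = 32, 4                # filter length and projection order
mu, delta = 0.5, 1e-4       # step size and regularization constant
num_samples = 5000

h = rng.standard_normal(N)
h_hat = np.zeros(N)

x = rng.standard_normal(num_samples)
d = np.convolve(x, h)[:num_samples]

for n in range(N + M, num_samples):
    # Signal matrix X(n) = [x(n), x(n-1), ..., x(n-M+1)]
    X = np.column_stack([x[n - m:n - m - N:-1] for m in range(M)])
    e = d[n:n - M:-1] - X.T @ h_hat            # a priori error vector
    # h_hat += mu * X (X^T X + delta I)^(-1) e
    h_hat += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(M), e)

print("final system distance:", np.sum((h - h_hat) ** 2))
```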
Affine Projection Algorithm Convergence of Different Algorithms Part 1 White noise: Slide 44
Affine Projection Algorithm Convergence of Different Algorithms Part 2 White noise: Slide 45
Affine Projection Algorithm Convergence of Different Algorithms Part 3 Colored noise: Slide 46
Affine Projection Algorithm Convergence of Different Algorithms Part 4 Colored noise: Slide 47
Affine Projection Algorithm Convergence of Different Algorithms Part 5 Speech: Slide 48
Affine Projection Algorithm Convergence of Different Algorithms Part 6 Speech: Slide 49
Adaptive Filters Algorithms Summary and Outlook This week and last week: Introductory Remarks Recursive Least Squares (RLS) Algorithm Least Mean Square Algorithm (LMS Algorithm) Part 1 Least Mean Square Algorithm (LMS Algorithm) Part 2 Affine Projection Algorithm (AP Algorithm) Next week: Control of Adaptive Filters Slide 50