Adaptive Filters Algorithms (Part 2)


Adaptive Filters Algorithms (Part 2) Gerhard Schmidt Christian-Albrechts-Universität zu Kiel Faculty of Engineering Electrical Engineering and Information Technology Digital Signal Processing and System Theory Slide 1

Contents of the Lecture Today: Adaptive Algorithms: Introductory Remarks Recursive Least Squares (RLS) Algorithm Least Mean Square Algorithm (LMS Algorithm) Part 1 Least Mean Square Algorithm (LMS Algorithm) Part 2 Affine Projection Algorithm (AP Algorithm) Slide 2

Basics Part 1 Optimization criterion: Minimizing the mean square error Assumptions: Real, stationary random processes Structure: Unknown system Adaptive filter Slide 3

Derivation Part 2 What we have so far: Resolving it for the coefficient vector leads to: With the introduction of a step size, the following adaptation rule, the method according to Newton, can be formulated: Slide 4
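The equations on this slide did not survive transcription. As a hedged reconstruction in the lecture's usual notation (coefficient vector \hat{h}(n), input signal vector \mathbf{x}(n), error signal e(n), autocorrelation matrix \mathbf{R}, step size \mu), Newton's method would read:

    \hat{h}(n+1) = \hat{h}(n) + \mu \, \mathbf{R}^{-1} \, \mathrm{E}\{ e(n) \, \mathbf{x}(n) \}

The inverse of the autocorrelation matrix rescales the gradient so that all modes of the error surface converge at the same rate.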

Derivation Part 3 Method according to Newton: Method of steepest descent: For practical approaches the expectation value is replaced by its instantaneous value. This leads to the so-called least mean square (LMS) algorithm: LMS algorithm Slide 5
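For reference, a hedged reconstruction of the two update rules named on this slide (standard textbook forms; the notation follows the reconstruction above):

    Method of steepest descent:  \hat{h}(n+1) = \hat{h}(n) + \mu \, \mathrm{E}\{ e(n) \, \mathbf{x}(n) \}
    LMS algorithm:               \hat{h}(n+1) = \hat{h}(n) + \mu \, e(n) \, \mathbf{x}(n)

Dropping \mathbf{R}^{-1} from Newton's method gives the method of steepest descent; replacing the expectation by its instantaneous value e(n) \mathbf{x}(n) gives the LMS algorithm.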

Upper Bound for the Step Size A priori error: A posteriori error: Consequently: For large filter lengths N and input processes with zero mean the following approximation is valid: Slide 6
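A hedged sketch of the argument, assuming filter length N and input power \sigma_x^2: with the a priori error e(n) = d(n) - \hat{h}^T(n) \mathbf{x}(n) and the a posteriori error e_+(n) = d(n) - \hat{h}^T(n+1) \mathbf{x}(n), inserting the LMS update gives

    e_+(n) = e(n) \left( 1 - \mu \, \|\mathbf{x}(n)\|^2 \right)

Requiring |e_+(n)| < |e(n)| leads to 0 < \mu < 2 / \|\mathbf{x}(n)\|^2, and for large N and zero-mean input \|\mathbf{x}(n)\|^2 \approx N \sigma_x^2, giving the approximate bound 0 < \mu < 2 / (N \sigma_x^2).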

System Distance How LMS adaptation changes system distance: Target Old system distance New system distance Current system error vector Slide 7
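To make the change of the system distance explicit, here is a reconstruction under the simplifying assumption of a noise-free reference, i.e. e(n) = \Delta h^T(n) \mathbf{x}(n) with the system error vector \Delta h(n) = h - \hat{h}(n):

    \Delta h(n+1) = \Delta h(n) - \mu \, e(n) \, \mathbf{x}(n)
    \|\Delta h(n+1)\|^2 = \|\Delta h(n)\|^2 - \mu \left( 2 - \mu \, \|\mathbf{x}(n)\|^2 \right) e^2(n)

The new system distance is therefore smaller than the old one whenever e(n) is nonzero and 0 < \mu < 2 / \|\mathbf{x}(n)\|^2.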

Sign Algorithm Update rule: with Early algorithm with very low complexity (even used today in applications that operate at very high frequencies). It can be implemented without any multiplications (step size multiplication can be implemented as a bit shift). Slide 8
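A hedged reconstruction of the update rule (sign-error form, which matches the multiplication-free property described above):

    \hat{h}(n+1) = \hat{h}(n) + \mu \, \mathrm{sgn}\{ e(n) \} \, \mathbf{x}(n)

Since sgn{e(n)} is either +1 or -1, choosing \mu as a power of two turns the entire update into sign flips and bit shifts.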

Analysis of the Mean Value Expectation of the filter coefficients: If the procedure converges, the coefficients reach stationary end values: So we have orthogonality: Wiener solution Slide 9
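Written out (a standard reconstruction, with \mathbf{p} = E\{d(n)\,\mathbf{x}(n)\} assumed as the cross-correlation vector): stationarity of the mean, E\{\hat{h}(n+1)\} = E\{\hat{h}(n)\}, forces

    \mathrm{E}\{ e(n) \, \mathbf{x}(n) \} = 0
    \Rightarrow \quad \hat{h}_\infty = \mathbf{R}^{-1} \, \mathbf{p}

which is exactly the Wiener solution.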

Convergence of the Expectations Part 1 Inserting the equation for the error into the update equation of the LMS algorithm, we get: Expectation of the filter coefficients: Slide 10

Convergence of the Expectations Part 2 Expectation of the filter coefficients: Independence assumption: Difference between means and expectations: Convergence of the means requires: Slide 11

Convergence of the Expectations Part 3 Recursion: Convergence requires the contraction of the matrix; the additive term equals zero because of the Wiener solution. Slide 12

Convergence of the Expectations Part 4 Convergence requires the contraction of the matrix (result from last slide): Case 1: White input signal Condition for the convergence of the mean values: For comparison, the condition for the convergence of the filter coefficients: Slide 13
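In formulas (a hedged reconstruction): with the mean coefficient error \Delta h(n) = E\{\hat{h}(n)\} - \hat{h}_\infty, the recursion from the last slide reads

    \Delta h(n+1) = \left( \mathbf{I} - \mu \, \mathbf{R} \right) \Delta h(n)

which contracts if and only if |1 - \mu \lambda_i| < 1 for every eigenvalue \lambda_i of \mathbf{R}, i.e. 0 < \mu < 2 / \lambda_{max}. For a white input signal \mathbf{R} = \sigma_x^2 \mathbf{I}, all eigenvalues equal \sigma_x^2, and the condition becomes 0 < \mu < 2 / \sigma_x^2.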

Convergence of the Expectations Part 5 Case 2: Colored input signal. Assumptions: Slide 14

Convergence of the Expectations Part 6 Putting the following results together leads to the following notation for the autocorrelation matrix: Slide 15

Convergence of the Expectations Part 7 Recursion: Slide 16

Condition for Convergence Part 1 Previous result: Condition for the convergence of the expectations of the filter coefficients: Slide 17

Condition for Convergence Part 2 A (very rough) estimate for the largest eigenvalue: Consequently: Slide 18
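Spelled out (reconstruction): since all eigenvalues of the autocorrelation matrix are non-negative,

    \lambda_{max} \le \sum_i \lambda_i = \mathrm{tr}\{\mathbf{R}\} = N \, \sigma_x^2   (zero-mean input)

so the condition 0 < \mu < 2 / \lambda_{max} is certainly fulfilled if 0 < \mu < 2 / (N \sigma_x^2). The estimate is rough because it replaces the largest eigenvalue by the sum of all of them.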

Eigenvalues and Power Spectral Density Part 1 Relation between eigenvalues and power spectral density: Signal vector: Autocorrelation matrix: Fourier transform: Equation for eigenvalues: Eigenvalue: Slide 19

Eigenvalues and Power Spectral Density Part 2 Computing lower and upper bounds for the eigenvalues (part 1): starting from the previous result, exchanging the order of the sums and the integral, and splitting the exponential term yields a lower bound and an upper bound. Slide 20

Eigenvalues and Power Spectral Density Part 3 Computing lower and upper bounds for the eigenvalues (part 2): exchanging again the order of the sums and the integral, solving the integral first, then inserting the result and using the orthonormality properties of the eigenvectors. Slide 21

Eigenvalues and Power Spectral Density Part 4 Computing lower and upper bounds for the eigenvalues (part 3): exchanging again the order of the sums and the integral, inserting the result from above to obtain the upper bound, and inserting the result from above to obtain the lower bound, we finally get: Slide 22
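The chain of arguments on the last four slides can be summarized as follows (a hedged reconstruction; \mathbf{q}_i denotes the i-th orthonormal eigenvector of \mathbf{R} and Q_i(e^{j\Omega}) its discrete-time Fourier transform):

    \lambda_i = \mathbf{q}_i^T \mathbf{R} \, \mathbf{q}_i
              = \frac{1}{2\pi} \int_{-\pi}^{\pi} S_{xx}(e^{j\Omega}) \, |Q_i(e^{j\Omega})|^2 \, d\Omega

Since \frac{1}{2\pi} \int_{-\pi}^{\pi} |Q_i(e^{j\Omega})|^2 \, d\Omega = \|\mathbf{q}_i\|^2 = 1, bounding S_{xx} by its extrema gives

    \min_\Omega S_{xx}(e^{j\Omega}) \le \lambda_i \le \max_\Omega S_{xx}(e^{j\Omega})

so the eigenvalue spread of \mathbf{R} is controlled by the dynamics of the power spectral density of the input signal.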

Geometrical Explanation of Convergence Part 1 Structure: Unknown system Adaptive filter System: System output: Slide 23

Geometrical Explanation of Convergence Part 2 Error signal: Difference vector: LMS algorithm: Slide 24

Geometrical Explanation of Convergence Part 3 The vector will be split into two components. For the parallel component the following applies: with: Slide 25

Geometrical Explanation of Convergence Part 4 Contraction of the system error vector: starting from the result obtained two slides before, splitting the system error vector, and using the component that is orthogonal to the input vector, this results in: Slide 26

NLMS Algorithm Part 1 LMS algorithm: Normalized LMS algorithm: Unknown system Adaptive filter Slide 27

NLMS Algorithm Part 2 Adaptation (in general): A priori error: A posteriori error: A successful adaptation requires or: Slide 28

NLMS Algorithm Part 3 Convergence condition: Inserting the update equation: Condition: Ansatz: Slide 29

NLMS Algorithm Part 4 Condition: Ansatz: Step size requirement for the NLMS algorithm (after a few lines of calculation): or For comparison, the condition for the LMS algorithm: Slide 30

NLMS Algorithm Part 5 Ansatz: Adaptation rule for the NLMS algorithm: Slide 31
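Since no code survives in the transcript (the original lecture used a Matlab demo), here is a minimal Python sketch of the NLMS adaptation rule in a system identification setup. The filter length, step size, and regularization constant delta are illustrative assumptions, not values from the slides.

import numpy as np

def nlms(x, d, N=64, mu=0.5, delta=1e-6):
    """Regularized NLMS sketch: identify an unknown system from input x
    and desired signal d. Stable for normalized step sizes 0 < mu < 2."""
    h_hat = np.zeros(N)                    # adaptive filter coefficients
    e = np.zeros(len(x))                   # error signal
    for n in range(N - 1, len(x)):
        x_vec = x[n - N + 1:n + 1][::-1]   # signal vector [x(n), ..., x(n-N+1)]
        e[n] = d[n] - h_hat @ x_vec        # a priori error
        # NLMS update: step size normalized by the instantaneous signal power
        h_hat += mu * e[n] * x_vec / (x_vec @ x_vec + delta)
    return h_hat, e

# Illustrative usage: identify a random FIR system
rng = np.random.default_rng(0)
h_true = rng.standard_normal(64)
x = rng.standard_normal(20000)
d = np.convolve(x, h_true)[:len(x)]
h_hat, e = nlms(x, d)

Thanks to the normalization, the step size bound 0 < mu < 2 holds independently of the input power, which is exactly the advantage over the plain LMS algorithm derived above.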

Matlab-Demo: Speed of Convergence Slide 32

Convergence Examples Part 1 Setup: White noise: Slide 33

Convergence Examples Part 2 Setup: Colored noise: Slide 34

Convergence Examples Part 3 Setup: Speech: Slide 35

Contents of the Lecture Today: Adaptive Algorithms: Introductory Remarks Recursive Least Squares (RLS) Algorithm Least Mean Square Algorithm (LMS Algorithm) Part 1 Least Mean Square Algorithm (LMS Algorithm) Part 2 Affine Projection Algorithm (AP Algorithm) Slide 36

Affine Projection Algorithm Basics Unknown system Signal vector: Filter vector: Filter output: Signal matrix: M describes the order of the procedure Slide 37

Affine Projection Algorithm Signal Matrix Definition of the signal matrix: Slide 38
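A hedged reconstruction of the definition (assuming the lecture's notation, with N the filter length and M the order of the procedure):

    \mathbf{X}(n) = \left[ \mathbf{x}(n), \, \mathbf{x}(n-1), \, \ldots, \, \mathbf{x}(n-M+1) \right]

i.e. an N x M matrix whose columns are the M most recent signal vectors. For M = 1 the signal matrix degenerates to a single signal vector and the AP algorithm reduces to the NLMS algorithm.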

Affine Projection Algorithm Error Vector Part 1 Signal matrix: Desired signal vector: Filter output vector: A priori error vector: Adaptation rule: A posteriori error vector: Slide 39

Affine Projection Algorithm Error Vector Part 2 Requirement: Requirement: Slide 40

Affine Projection Algorithm Ansatz Requirement: Ansatz: Step-size condition: Slide 41

Affine Projection Algorithm Geometrical Interpretation NLMS algorithm AP algorithm Slide 42

Affine Projection Algorithm Regularization Non-regularized version of the AP algorithm: Regularized version of the AP algorithm: Slide 43
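A minimal Python sketch of the regularized AP update, parallel to the NLMS sketch above; again, N, M, mu, and delta are illustrative assumptions.

import numpy as np

def affine_projection(x, d, N=64, M=4, mu=0.5, delta=1e-3):
    """Regularized affine projection sketch of order M (M = 1 gives NLMS)."""
    h_hat = np.zeros(N)
    e_out = np.zeros(len(x))
    for n in range(N + M - 2, len(x)):
        # signal matrix X(n): columns are the M most recent signal vectors
        X = np.column_stack(
            [x[n - m - N + 1:n - m + 1][::-1] for m in range(M)])
        d_vec = d[n - M + 1:n + 1][::-1]     # desired signal vector
        e_vec = d_vec - X.T @ h_hat          # a priori error vector
        # h(n+1) = h(n) + mu * X (X^T X + delta I)^(-1) e(n)
        h_hat += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(M), e_vec)
        e_out[n] = e_vec[0]
    return h_hat, e_out

Only an M x M matrix has to be inverted per step, so moderate orders M stay cheap while speeding up convergence for colored inputs, as the following example slides show.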

Affine Projection Algorithm Convergence of Different Algorithms Part 1 White noise: Slide 44

Affine Projection Algorithm Convergence of Different Algorithms Part 2 White noise: Slide 45

Affine Projection Algorithm Convergence of Different Algorithms Part 3 Colored noise: Slide 46

Affine Projection Algorithm Convergence of Different Algorithms Part 4 Colored noise: Slide 47

Affine Projection Algorithm Convergence of Different Algorithms Part 5 Speech: Slide 48

Affine Projection Algorithm Convergence of Different Algorithms Part 6 Speech: Slide 49

Adaptive Filters Algorithms Summary and Outlook This week and last week: Introductory Remarks Recursive Least Squares (RLS) Algorithm Least Mean Square Algorithm (LMS Algorithm) Part 1 Least Mean Square Algorithm (LMS Algorithm) Part 2 Affine Projection Algorithm (AP Algorithm) Next week: Control of Adaptive Filters Slide 50