Iterative Methods for Solving Linear Problems

Iterative Methods for Solving Linear Problems When problems become too large (too many data points, too many model parameters), SVD and related approaches become impractical. Very popular alternatives use "iterative" methods to obtain approximate solutions. In many cases for large problems, the G matrix will be sparse (mostly zero elements), so strategies that take advantage of this structure are important.
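As an illustrative sketch of why sparsity matters (this uses SciPy's sparse-matrix support, which is an assumption of this example and not something the slides rely on), a compressed sparse row (CSR) matrix stores only the non-zero entries, so storage and matrix-vector products scale with the number of non-zeros rather than with rows times columns:

```python
import numpy as np
from scipy.sparse import csr_matrix

# A mostly-zero G stored in compressed sparse row (CSR) form:
# only the non-zero values and their indices are kept.
G_dense = np.array([[0.0, 2.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0, 3.0],
                    [0.0, 0.0, 4.0, 0.0]])
G = csr_matrix(G_dense)
m = np.array([1.0, 2.0, 3.0, 4.0])

print(G.nnz)   # 4 stored entries instead of 12
print(G @ m)   # same product as the dense matrix: [ 4. 13. 12.]
```

The iterative methods below only ever need products of G (or its transpose) with a vector, which is exactly the operation that sparse storage makes cheap.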

Simple Example - Kaczmarz's algorithm A class of techniques that attack the problem one row of G at a time. In 2D, each row of G can be thought of as defining a line given by G_i,. m = d_i; in 3D it defines a plane, and in higher dimensions a "hyperplane." Kaczmarz's algorithm starts with an initial guess (e.g., m(0) = 0) and then steps through the rows of G, each time moving to the point on the hyperplane G_i,. m = d_i closest to the current estimate of m. Amazingly, if G m = d has a unique solution, this simple approach will converge!

ALGORITHM: Starting from the initial guess m(0), cycle repeatedly through the rows i = 1, ..., m of G, at each step projecting the current estimate onto the hyperplane defined by row i.

FORMULA: m(k+1) = m(k) + ((d_i - G_i,. m(k)) / (G_i,. G_i,.^T)) G_i,.^T

Simple Example - Kaczmarz's algorithm Figure 6.1: Kaczmarz's algorithm on a system of two equations (the lines y = 1 and y = x - 1).
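The two-equation system in Figure 6.1 can be sketched in a few lines of Python (an illustrative translation of the algorithm; the slides themselves refer to MATLAB examples). The lines y = 1 and x - y = 1 intersect at (2, 1), and the alternating projections walk the estimate into that corner:

```python
import numpy as np

# Two-equation system from the figure: y = 1 and x - y = 1.
G = np.array([[0.0, 1.0],
              [1.0, -1.0]])
d = np.array([1.0, 1.0])

m = np.zeros(2)                  # initial guess m(0) = 0
for sweep in range(50):          # cycle through the rows repeatedly
    for i in range(G.shape[0]):
        g = G[i]
        # project the current estimate onto the hyperplane G[i,:] m = d[i]
        m = m + (d[i] - g @ m) / (g @ g) * g

print(m)   # approaches the intersection point (2, 1)
```

Each sweep halves the remaining error for this geometry, which illustrates the "bounce back and forth" behavior discussed later: convergence is sure but can be slow when the hyperplanes meet at shallow angles.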

MATLAB Examples

QUESTIONS?

ART and SIRT These two methods, the Algebraic Reconstruction Technique and the Simultaneous Iterative Reconstruction Technique, were developed for tomography applications. For the simplest version of ART, take Kaczmarz's formula and replace all non-zero entries in G with 1's, assuming the length of the ray path in each cell is equal! What are the numerator and denominator then? An estimate of the residual and the number of "hit" cells!
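A minimal sketch of this simplest ART update (the helper name `art_sweep` and the toy scan geometry are illustrative assumptions; G is taken to be a 0/1 matrix, i.e., equal path length in every hit cell):

```python
import numpy as np

def art_sweep(G, d, m):
    """One ART sweep: for each ray (row of G), spread the ray's
    residual evenly over the cells that the ray hits."""
    for i in range(G.shape[0]):
        hit = G[i] != 0
        n_hit = hit.sum()                # denominator: number of "hit" cells
        residual = d[i] - m[hit].sum()   # numerator: residual for ray i
        m[hit] += residual / n_hit
    return m

# Toy tomography problem: a 2x2 image scanned by row and column sums.
G = np.array([[1, 1, 0, 0],        # ray through the top row
              [0, 0, 1, 1],        # ray through the bottom row
              [1, 0, 1, 0],        # ray through the left column
              [0, 1, 0, 1]], float)  # ray through the right column
m_true = np.array([1.0, 2.0, 3.0, 4.0])
d = G @ m_true

m = np.zeros(4)
for _ in range(500):
    m = art_sweep(G, d, m)
print(np.allclose(G @ m, d))   # converges to a model that fits the data
```

Because this toy G is rank deficient (row sums and column sums carry redundant information), ART converges to a model that fits the data exactly, though not necessarily to `m_true` itself.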

ART and SIRT A slight improvement to ART can be made by scaling the update for each equation to reflect the fact that different ray paths have different lengths (and note the error on page 147 - "cell to cell" should be "path to path"!). ART solutions tend to be noisy because the model is "jumping around" at every update.

ART and SIRT SIRT uses the same "update formula," but all of the row updates are computed before any of them are applied to the model! The updates are averaged to obtain the model perturbation for that iteration step. As a result, SIRT solutions tend to be less noisy than ART solutions, due to this averaging.
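A corresponding sketch of one SIRT iteration (an illustrative Python version; the helper name `sirt_step` is an assumption): every row's Kaczmarz-style update is computed first, and only their average is applied to the model:

```python
import numpy as np

def sirt_step(G, d, m):
    """One SIRT iteration: compute the Kaczmarz-style update for
    every row first, then apply their average as one perturbation."""
    update = np.zeros_like(m)
    for i in range(G.shape[0]):
        g = G[i]
        update += (d[i] - g @ m) / (g @ g) * g
    return m + update / G.shape[0]

# Same two-line system as Figure 6.1: y = 1 and x - y = 1.
G = np.array([[0.0, 1.0],
              [1.0, -1.0]])
d = np.array([1.0, 1.0])

m = np.zeros(2)
for _ in range(300):
    m = sirt_step(G, d, m)
print(m)   # approaches the intersection point (2, 1)
```

Compared with the row-by-row sweep, the model no longer jumps toward one hyperplane at a time; the averaged perturbation moves it smoothly, at the cost of needing more iterations.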

QUESTIONS?

Conjugate Gradient and CGLS Goal - in N dimensions, get to the minimum in N steps! Gradient method - go in the "downhill" direction (-g). This is inefficient in a way similar to Kaczmarz's method - the iterates bounce back and forth. CG - zoom directly to the minimum! How??

Conjugate Gradient In the figure, the green line shows the optimal gradient (steepest descent) steps and the red line shows the CG steps. Note that the second CG step can be thought of as a linear combination of the first two gradient steps. How can we find the right linear combination?

Conjugate Gradient For the system A x = b, vectors p_i and p_j are said to be mutually conjugate with respect to A if p_i^T A p_j = 0. If we build the CG steps so that they satisfy this relationship, then it turns out that these steps do what we want! Note - CG works on symmetric positive definite matrices A. What do we do for a general G?
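A sketch of the classical CG recurrence on a small symmetric positive definite system (illustrative Python, not from the slides), checking both claims above: the search directions are mutually conjugate with respect to A, and the exact minimum is reached in N steps:

```python
import numpy as np

def conjugate_gradient(A, b, n_steps):
    """Textbook CG for a symmetric positive definite A.
    Returns the final iterate and the search directions used."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual, which is also -gradient
    p = r.copy()           # first search direction: steepest descent
    dirs = []
    for _ in range(n_steps):
        dirs.append(p.copy())
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)     # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p           # new direction, A-conjugate to the old ones
        r = r_new
    return x, dirs

# A small SPD example: N = 3, so CG should finish in 3 steps.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x, dirs = conjugate_gradient(A, b, 3)
print(np.allclose(A @ x, b))       # exact solution in N steps (up to round-off)
print(abs(dirs[0] @ A @ dirs[1]))  # ~0: the directions are mutually conjugate
```

Each new direction is built as the current (negative) gradient plus a multiple beta of the previous direction, and that multiple is exactly what enforces p_i^T A p_j = 0.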

Conjugate Gradient Least Squares Basically, we need to turn our general problem G m = d into a symmetric positive definite system so that we can solve it with the Conjugate Gradient method. Two options: 1. Form the normal equations, G^T G m = G^T d. 2. Solve a regularized system such as the Tikhonov-regularized normal equations, (G^T G + alpha^2 L^T L) m = G^T d.
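A sketch of CGLS under option 1 (an illustrative Python version; the function name `cgls` is an assumption): CG is applied to the normal equations, but G^T G is never formed explicitly, only products with G and G^T are used:

```python
import numpy as np

def cgls(G, d, n_iter):
    """CG applied to the normal equations G^T G m = G^T d,
    using only products with G and G^T (G^T G is never formed)."""
    m = np.zeros(G.shape[1])
    r = d - G @ m            # residual in data space
    s = G.T @ r              # normal-equations residual G^T r
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = G @ p
        alpha = gamma / (q @ q)
        m = m + alpha * p
        r = r - alpha * q
        s = G.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m

# Overdetermined toy problem: 5 data points, 3 model parameters.
G = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0],
              [2.0, 1.0, 1.0]])
d = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
m = cgls(G, d, 3)   # 3 parameters, so 3 iterations suffice here
print(np.allclose(m, np.linalg.lstsq(G, d, rcond=None)[0]))  # least squares solution
```

Avoiding the explicit product G^T G matters for large sparse problems: G^T G is typically much denser than G, and squaring G also squares its condition number, so working with G and G^T directly is both cheaper and numerically kinder.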

QUESTIONS?

Conjugate Gradient Least Squares - Application to image deblurring

CGLS, 30 iterations, no explicit regularization

CGLS, 100 iterations, no explicit regularization

CGLS, 200 iterations, with explicit regularization

QUESTIONS?