Least-Squares Fitting of Data with B-Spline Curves


David Eberly, Geometric Tools, Redmond WA 98052
https://www.geometrictools.com/

This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

Created: December 2, 2005
Last Modified: February 1, 2014

Contents

1 Definition of B-Spline Curves
2 Least-Squares Fitting
3 Implementation

This document describes how to fit a set of data points with a B-spline curve using a least-squares algorithm. The construction allows for any dimension of the data points. A typical application is to fit keyframes for animation sequences, whether the data is positional (3D) or rotational using quaternions (4D). In the latter case, the B-spline curve is not necessarily on the unit hypersphere in 4D, but the curve evaluations may be normalized to force the results onto that hypersphere.

1 Definition of B-Spline Curves

A B-spline curve is defined for a collection of n + 1 control points {Q_i}_{i=0}^{n} by

    X(t) = \sum_{i=0}^{n} N_{i,d}(t) Q_i    (1)

The control points can be of any dimension, but all must be of the same dimension. The degree of the curve is d and must satisfy 1 \le d \le n. The functions N_{i,d}(t) are the B-spline basis functions, which are defined recursively and require the selection of a sequence of scalars t_i for 0 \le i \le n + d + 1. The sequence is nondecreasing; that is, t_i \le t_{i+1}. Each t_i is referred to as a knot, and the total sequence as a knot vector. The basis function that starts the recursive definition is

    N_{i,0}(t) = 1 for t_i \le t < t_{i+1}, and 0 otherwise    (2)

for 0 \le i \le n + d. The recursion itself is

    N_{i,j}(t) = \frac{t - t_i}{t_{i+j} - t_i} N_{i,j-1}(t) + \frac{t_{i+j+1} - t}{t_{i+j+1} - t_{i+1}} N_{i+1,j-1}(t)    (3)

for 1 \le j \le d and 0 \le i \le n + d - j.

The support of a function is the smallest closed interval on which the function has at least one nonzero value. The support of N_{i,0}(t) is clearly [t_i, t_{i+1}]. In general, the support of N_{i,j}(t) is [t_i, t_{i+j+1}]. This fact means that locally the curve is influenced by only a small number of control points, a property called local control.

The main classification of the knot vector is that it is either open or periodic. If open, the knots are either uniform or nonuniform. Periodic knot vectors have uniformly spaced knots. The use of the term open is perhaps a misnomer, since you can construct a closed B-spline curve from an open knot vector.
The standard way to construct a closed curve uses periodic knot vectors. Uniform knots are

    t_i = 0                     for 0 \le i \le d
    t_i = (i - d)/(n + 1 - d)   for d + 1 \le i \le n
    t_i = 1                     for n + 1 \le i \le n + d + 1    (4)

Observe that in the special case d = n, when the number of control points is one more than the degree, the middle condition of equation (4) for i cannot be true, so half the knots are 0 and the other half are 1. Periodic knots are

    t_i = (i - d)/(n + 1 - d)   for 0 \le i \le n + d + 1    (5)

Equations (2) and (3) allow you to recursively evaluate the B-spline curve, but there are faster ways based on local control. This document does not cover the various evaluation schemes; you may find this topic in any book on B-splines and most likely at online sites.
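As a concrete illustration of equations (2) through (4), here is a minimal Python sketch that builds an open uniform knot vector and evaluates the basis functions by the recursion. The function and variable names are my own, not from the document, and the direct recursion is used only for clarity; as noted above, faster evaluation schemes exist.

```python
# Open uniform knot vector of equation (4): t_i for 0 <= i <= n + d + 1.
def open_uniform_knots(n, d):
    knots = []
    for i in range(n + d + 2):
        if i <= d:
            knots.append(0.0)
        elif i <= n:
            knots.append((i - d) / (n + 1 - d))
        else:
            knots.append(1.0)
    return knots

# N_{i,j}(t) via the recursion of equations (2) and (3). Terms whose knot
# difference is zero are treated as zero (the usual 0/0 := 0 convention).
def basis(i, j, t, knots):
    if j == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    v = 0.0
    if knots[i + j] > knots[i]:
        v += (t - knots[i]) / (knots[i + j] - knots[i]) * basis(i, j - 1, t, knots)
    if knots[i + j + 1] > knots[i + 1]:
        v += (knots[i + j + 1] - t) / (knots[i + j + 1] - knots[i + 1]) * basis(i + 1, j - 1, t, knots)
    return v

# Sanity check: the basis functions form a partition of unity on [0, 1).
n, d = 5, 3
knots = open_uniform_knots(n, d)
t = 0.4
total = sum(basis(i, d, t, knots) for i in range(n + 1))
print(round(total, 12))  # 1.0
```

The zero-difference guards matter for open knot vectors, where the repeated end knots make some denominators in equation (3) vanish.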

2 Least-Squares Fitting

The data points are {(s_k, P_k)}_{k=0}^{m}, where the s_k are the sample times and the P_k are the sample data. The sample times are assumed to be increasing: s_0 < s_1 < ... < s_m. A B-spline curve that fits the data is parameterized by t \in [0, 1], so the sample times need to be mapped to the parameter domain by t_k = (s_k - s_0)/(s_m - s_0). The fitted B-spline curve is formally presented in equation (1), but the control points Q_i are unknown quantities to be determined. The control points are considered to be column vectors, and the collection of control points may be arranged into a single column vector

    \hat{Q} = [Q_0; Q_1; ...; Q_n]    (6)

(the blocks Q_i stacked vertically). Similarly, the samples P_k are considered to be column vectors, and the collection written as a single column vector

    \hat{P} = [P_0; P_1; ...; P_m]    (7)

For a specified set of control points, the least-squares error function between the B-spline curve and the sample points is the scalar-valued function

    E(\hat{Q}) = \frac{1}{2} \sum_{k=0}^{m} \left| \sum_{j=0}^{n} N_{j,d}(t_k) Q_j - P_k \right|^2    (8)

The factor of one half is just for convenience in the calculations. The quantity \sum_{j=0}^{n} N_{j,d}(t_k) Q_j is the point on the B-spline curve at the scaled sample time t_k. The term within the summation on the right-hand side of equation (8) measures the squared distance between a sample point and its corresponding curve point. The error function measures the total accumulation of squared distances. The hope is that we may choose the control points to make this error as small as possible.

The minimization is a calculus problem. The function E is quadratic in the components of \hat{Q}, its graph a paraboloid (in a high-dimensional space), so it must have a global minimum that occurs when all its first-order partial derivatives are zero; that is, the vertex of the paraboloid occurs where the first derivatives are zero.
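The mapping of sample times to the parameter domain described above is a one-line normalization. A quick sketch, using hypothetical sample times of my own choosing:

```python
# Map increasing sample times s_k to t_k = (s_k - s_0)/(s_m - s_0) in [0, 1].
s = [2.0, 2.5, 4.0, 6.0]  # hypothetical sample times, s_0 < s_1 < ... < s_m
t = [(sk - s[0]) / (s[-1] - s[0]) for sk in s]
print(t)  # [0.0, 0.125, 0.5, 1.0]
```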
The first-order partial derivatives are written in terms of the control points Q_i rather than in terms of the components of the control points:

    \partial E / \partial Q_i = \sum_{k=0}^{m} \left( \sum_{j=0}^{n} N_{j,d}(t_k) Q_j - P_k \right) N_{i,d}(t_k)
                              = \sum_{k=0}^{m} \sum_{j=0}^{n} N_{i,d}(t_k) N_{j,d}(t_k) Q_j - \sum_{k=0}^{m} N_{i,d}(t_k) P_k
                              = \sum_{k=0}^{m} \sum_{j=0}^{n} a_{ki} a_{kj} Q_j - \sum_{k=0}^{m} a_{ki} P_k    (9)

where a_{rc} = N_{c,d}(t_r), and the equations hold for 0 \le i \le n. Setting the partial derivatives equal to the zero vector leads to the system of equations

    0 = \sum_{k=0}^{m} \sum_{j=0}^{n} a_{ki} a_{kj} Q_j - \sum_{k=0}^{m} a_{ki} P_k = A^T A \hat{Q} - A^T \hat{P}    (10)

where A = [a_{rc}] is a matrix with m + 1 rows and n + 1 columns, and A^T is the transpose of A. This system of equations is in the familiar form of a least-squares problem. Recall that such problems arise when wanting to solve Ax = b. If the system does not have a solution, the next best thing is to construct x so that |Ax - b| is as small as possible. The minimization leads to the linear system A^T A x = A^T b.

The matrix A^T A is symmetric, a property that is desirable in the numerical solution of systems. Moreover, the matrix A is banded, a generalization of tridiagonal. A banded matrix has a diagonal with (potentially) nonzero entries, a contiguous set of upper bands, and a contiguous set of lower bands, each band with (potentially) nonzero entries; all other entries in the matrix are zero. In our case, the number of upper bands and the number of lower bands are the same: d + 1 when n > d, or d when n = d. The bandedness is a consequence of the local control for B-spline curves (the supports of the B-spline basis functions are bounded intervals).

The direct approach to solving equation (10) is to invert the coefficient matrix. The equation A^T A \hat{Q} = A^T \hat{P} implies

    \hat{Q} = (A^T A)^{-1} A^T \hat{P} = [(A^T A)^{-1} A^T] \hat{P} = X \hat{P}    (11)

where the last equality defines the matrix X. The problem, though, is that the matrix inversion can be ill conditioned because the matrix has eigenvalues that are nearly zero. The ill conditioning causes Gaussian elimination, even with full pivoting, to have problems. For the application of fitting keyframe data, the keyframes tend to be sampled at evenly spaced time intervals. In this case, the ill conditioning is not an issue as long as you choose a B-spline curve with uniform knots.
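To make equation (10) concrete, the following self-contained Python sketch assembles A, forms A^T A and A^T \hat{P}, and solves the normal equations. All names are my own; the solver is dense Gaussian elimination (deliberately ignoring the bandedness for brevity), and the data is one-dimensional, sampled from a known B-spline so that the fit recovers the control values.

```python
# Sketch of equation (10): build A with a_kc = N_{c,d}(t_k), form the normal
# equations A^T A Q = A^T P, and solve. Illustrative code, not the document's
# implementation: dense, 1D data, no exploitation of bandedness.

def open_uniform_knots(n, d):
    """Open uniform knot vector of equation (4)."""
    return [0.0 if i <= d else (1.0 if i > n else (i - d) / (n + 1 - d))
            for i in range(n + d + 2)]

def basis(i, j, t, knots):
    """N_{i,j}(t) via equations (2) and (3); zero-width terms are dropped."""
    if j == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    v = 0.0
    if knots[i + j] > knots[i]:
        v += (t - knots[i]) / (knots[i + j] - knots[i]) * basis(i, j - 1, t, knots)
    if knots[i + j + 1] > knots[i + 1]:
        v += (knots[i + j + 1] - t) / (knots[i + j + 1] - knots[i + 1]) * basis(i + 1, j - 1, t, knots)
    return v

def solve(M, b):
    """Dense Gaussian elimination with partial pivoting (small systems only)."""
    size = len(b)
    aug = [row[:] + [b[r]] for r, row in enumerate(M)]
    for col in range(size):
        piv = max(range(col, size), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, size):
            f = aug[r][col] / aug[col][col]
            for c in range(col, size + 1):
                aug[r][c] -= f * aug[col][c]
    x = [0.0] * size
    for r in range(size - 1, -1, -1):
        x[r] = (aug[r][size] - sum(aug[r][c] * x[c] for c in range(r + 1, size))) / aug[r][r]
    return x

def fit_controls(ts, ps, n, d):
    """Solve the normal equations A^T A Q = A^T P for 1D samples ps at parameters ts."""
    knots = open_uniform_knots(n, d)
    A = [[basis(c, d, t, knots) for c in range(n + 1)] for t in ts]
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(ts)))
            for j in range(n + 1)] for i in range(n + 1)]
    Atp = [sum(A[k][i] * ps[k] for k in range(len(ts))) for i in range(n + 1)]
    return solve(AtA, Atp)

# Demo: sample a known curve at m + 1 = 12 parameters in [0, 1), then recover
# the n + 1 = 5 control values. The sample count exceeds twice the control
# count, in line with the Nyquist-style guideline discussed below.
n, d = 4, 2
knots = open_uniform_knots(n, d)
q_true = [0.0, 2.0, -1.0, 3.0, 1.0]
ts = [k / 12 for k in range(12)]
ps = [sum(basis(i, d, t, knots) * q_true[i] for i in range(n + 1)) for t in ts]
q_fit = fit_controls(ts, ps, n, d)
print(all(abs(a - b) < 1e-6 for a, b in zip(q_fit, q_true)))  # True
```

Because the samples lie exactly on a B-spline with the same degree and knots, the least-squares solution reproduces the generating control values; with noisy data it would instead return the best approximation in the sense of equation (8).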
Regardless, an approach different from direct inversion is called for, both to minimize the effects of ill conditioning and to take advantage of the bandedness of the matrix. Recall that Gaussian elimination for a linear system with an n x n matrix is an O(n^3) algorithm, whereas the solution of a linear system with a tridiagonal matrix is O(n). The same is true for a banded matrix with a small number of bands relative to the size of the matrix.

The numerical method of choice for symmetric, banded matrix systems is the Cholesky decomposition. The book Matrix Computations by G. Golub and C. van Loan has an excellent discussion of the topic. The algorithm starts with a symmetric matrix and factors it into a lower-triangular matrix G times the transpose of that matrix, G^T, which is necessarily upper triangular. In our case, the Cholesky decomposition is

    A^T A = G G^T    (12)

A numerically stable LU solver may be used first to invert G, then to invert G^T. For the application of fitting keyframe data, the choice of uniform knots leads to good stability, but it is also required that the number of control points be at most half the number of samples. This is essentially a Nyquist-frequency argument: if you have as many control points as samples, the B-spline curve can have large oscillations.

The matrix X is computed as the solution to A^T A X = A^T using the Cholesky decomposition to invert the matrix A^T A. The vector of control points, \hat{Q}, is then computed from equation (11).
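A minimal dense Cholesky solver sketches how equation (12) is used: factor A^T A = G G^T once, then solve by forward substitution with G followed by back substitution with G^T. A banded implementation would restrict each inner loop to the d + 1 nearest bands; the matrix below is a small hand-picked symmetric positive definite example, not data from the document.

```python
# Cholesky factorization M = G G^T (G lower triangular) and an SPD solve,
# dense for clarity. Illustrative names; a production version would store
# and traverse only the bands of A^T A.
import math

def cholesky(M):
    size = len(M)
    G = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range(i + 1):
            s = M[i][j] - sum(G[i][k] * G[j][k] for k in range(j))
            G[i][j] = math.sqrt(s) if i == j else s / G[j][j]
    return G

def solve_spd(M, b):
    G = cholesky(M)
    size = len(b)
    # forward substitution: G y = b
    y = [0.0] * size
    for i in range(size):
        y[i] = (b[i] - sum(G[i][k] * y[k] for k in range(i))) / G[i][i]
    # back substitution: G^T x = y
    x = [0.0] * size
    for i in range(size - 1, -1, -1):
        x[i] = (y[i] - sum(G[k][i] * x[k] for k in range(i + 1, size))) / G[i][i]
    return x

# Small hand-picked SPD system to exercise the solver.
M = [[4.0, 2.0, 0.0], [2.0, 5.0, 1.0], [0.0, 1.0, 3.0]]
b = [2.0, 5.0, 4.0]
x = solve_spd(M, b)
residual = max(abs(sum(M[i][j] * x[j] for j in range(3)) - b[i]) for i in range(3))
print(residual < 1e-12)  # True
```

The two triangular solves replace the explicit inversion of equation (11), which is the numerically preferable route described above.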

3 Implementation

A sample application that illustrates the fit is in the folder

    GeometricTools/GTEngine/Samples/Mathematics/BSplineCurveFitter

In the application, a spiral curve is generated as a polyline of 1000 sample points. A B-spline curve is created using the algorithm of this document. The initial degree is 3 and the initial number of control points is 500. The display shows the current degree and the current number of control points. It also shows the average distance error between the original samples and the resampled B-spline curve points, as well as the root-mean-square error for the distances. You can increase or decrease the degree and the number of control points.

    key   action
    d     decrease the degree by 1
    D     increase the degree by 1
    s     decrease the number of control points by 1 (s for small)
    S     increase the number of control points by 1 (S for small)
    m     decrease the number of control points by 10 (m for medium)
    M     increase the number of control points by 10 (M for medium)
    l     decrease the number of control points by 100 (l for large)
    L     increase the number of control points by 100 (L for large)
    ESC   terminate the application

You may rotate the scene using the mouse and the built-in virtual trackball. Of interest is when the number of control points is between 20 and 30; this is where you will see big differences between the curves. Note that you get a very good fit with an extremely small number of control points.