Math 2B Linear Algebra Test 2 S13 Name Write all responses on separate paper. Show your work for credit.


1. Construct a matrix whose
a. null space consists of all combinations of (,3,3,) and (,2,,);
b. left null space consists of all combinations of (,2,,) and (,,,).
2. Construct a 4-by-4 matrix whose column space is the same as its null space, or show that it's not possible.
3. Show that the system with augmented matrix 3 3 4 6 4 2 5 3 has infinitely many solutions, and find them all.
4. Are there values for u and v so that the matrix 2 4 8 6 6 has rank 2?
5. The cosine space F contains all combinations y(x) = A cos x + B cos 2x + C cos 3x + D cos 4x. Find a basis for the subspace with y(0) = 0.
6. Discuss the four subspaces associated with the matrices 2 3 3 3 7 6 and .
7. If V is the space of vectors satisfying 3 3 and 4 6 4, write a basis for its orthogonal complement.
8. p = Ax̂ is the vector in C(A) nearest to a given vector b. If A has independent columns, what equation determines x̂? What are all the vectors perpendicular to the error e = b − Ax̂? What goes wrong if the columns of A are dependent?
9. Use matrix methods to find the fourth-degree polynomial which is the best fit for the data (0,1), (1,3), (2,2), (3,5), (4,4), (5,7).
10. Use the Gram-Schmidt process to find an orthonormal basis for the column space of 2 3 3 4 6 4.
11. Let V be a finite-dimensional, nonzero vector space over a field F (F could be either R or C). An inner product on V is a function that takes each ordered pair (u, v) of elements of V to a number ⟨u, v⟩, and has the following properties:
Positivity: ⟨v, v⟩ ≥ 0 for all v in V;
Definiteness: ⟨v, v⟩ = 0 if and only if v = 0;
Additivity: ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ for all u, v, w in V;

Homogeneity: ⟨av, w⟩ = a⟨v, w⟩ for all scalars a and all v, w in V;
Conjugate symmetry: ⟨u, v⟩ is the complex conjugate of ⟨v, u⟩ for all u, v in V.
Recall that every real number is its own complex conjugate, so if F = R, then the conjugate symmetry condition is automatic. Let C[0,1] denote the real-valued continuous functions on [0,1]. Define an inner product by ⟨f, g⟩ = ∫₀¹ f(x)g(x) dx.
a. Show that this is an inner product space.
b. Apply the Gram-Schmidt method to get an orthonormal basis for the subspace spanned by 1, sin x, cos x.

SOLNS:

1. Construct a matrix whose
a. null space consists of all combinations of (,3,3,) and (,2,,).
ANS: We want to find a matrix A such that Av = 0 for each of the given vectors v. Of course you could also look at it this way: the row reduction 3 3 2 3 3 3 2 ~ 2 2 shows that 3 2 is a matrix with rank 2, so its null space has dimension 4 − 2 = 2 and contains the two independent vectors (,3,3,) and (,2,,).
b. Left null space consists of all combinations of (,2,,) and (,,,).
SOLN: Basically the same deal. We want to find a matrix A such that Aᵀv = 0 for each of the given vectors v. Of course you could also look at it this way: the row reduction 2 2 ~ shows a matrix with rank 2 whose null space has dimension 4 − 2 = 2 and contains the two independent vectors (,2,,) and (,,,). Thus its transpose is a matrix whose left null space contains the specified vectors.
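The construction in 1a can be sanity-checked mechanically: pick rows orthogonal to the desired null-space vectors, then verify Av = 0 for each. A minimal Python sketch, using hypothetical stand-in vectors v1, v2 rather than the ones in the problem:

```python
# Sketch: build a matrix whose null space contains two given vectors.
# v1 and v2 are hypothetical stand-ins; the construction is the point.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v1 = (1, 3, 3, 1)   # hypothetical
v2 = (1, 2, 1, 1)   # hypothetical

# Rows chosen (by back-substitution) to be orthogonal to both v1 and v2,
# so this 2x4 matrix has rank 2 and its null space has dimension 4 - 2 = 2,
# containing v1 and v2.
A = [(3, -2, 1, 0),
     (-1, 0, 0, 1)]

for row in A:
    assert dot(row, v1) == 0 and dot(row, v2) == 0
print("A annihilates v1 and v2")
```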

2. Construct a 4-by-4 matrix whose column space is the same as its null space, or show that it's not possible.
SOLN: If n = 4 and the dimension of the column space is the same as the dimension of the null space, then they are both 2-dimensional. The column space is a subspace of the output and the null space is a subspace of the input, so they are both essentially isomorphic to R^2 (as any 2-dimensional space is), but they are also both embedded in R^4, and the question is: can they be the same subspace of R^4? I had to resort to (enlightened) guess and check, and I finally struck upon
0 0 1 0
0 0 0 1
0 0 0 0
0 0 0 0
The column space is spanned by the basis e1, e2, and these are also two independent vectors in the null space. So that does it!

3. Show that the system with augmented matrix 3 3 4 6 4 2 5 3 has infinitely many solutions, and find them all.
SOLN: Start by getting the reduced row-echelon form of the augmented matrix: 3 3 3 3 6 8 2 2 4 6 4 2 ~ 3 3 ~ 3 3 ~ 6 5 3 2 7 9 2 3 3 6. There is a free column, so the system has infinitely many solutions. The nullspace matrix is 3 2, and this is also the special solution (in Strang's terminology). A particular solution can be read off from the reduced augmented column, and the general solution is the particular solution plus any multiple of the special solution.

4. Are there values for u and v so that the matrix 2 4 8 6 6 has rank 2?
SOLN: Assume u ≠ 0 and do a row operation to move the matrix toward reduced row-echelon form: 4/ 8/ 2 6/ 4/ 8/ 6/ 6. Clearly, no matter what v is, this will have rank 3. If u = 0, that's still true, and the same goes for v. So basically, we want to reduce the number of pivots to 2, but there's no way to do that. So no, there are no values of u and v for which the matrix has rank 2.

5. The cosine space F contains all combinations y(x) = A cos x + B cos 2x + C cos 3x + D cos 4x.
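A quick way to certify that a 4-by-4 matrix has column space equal to null space: check that A² = 0 (which shows C(A) ⊆ N(A)) and that both spaces are 2-dimensional. A Python sketch with one standard example of such a matrix (not necessarily the one found above):

```python
# A 4x4 matrix whose column space equals its null space; this is one
# standard example of the construction asked for in problem 2.
A = [[0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]

def matmul(X, Y):
    """Plain triple-loop matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Column space = span{e1, e2} (the last two columns); null space =
# {x : x3 = x4 = 0} = span{e1, e2}.  A^2 = 0 confirms every column of A
# lies in N(A), and both spaces have dimension 2, so they coincide.
A2 = matmul(A, A)
assert all(v == 0 for row in A2 for v in row)
print("A^2 = 0, so C(A) is contained in N(A); both are 2-dimensional")
```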

Find a basis for the subspace with y(0) = 0.
SOLN: First let's be sure that F is, in fact, a vector space. Clearly it's closed under vector addition and scalar multiplication; all the other axioms are satisfied essentially by inheritance from the space of continuous functions. Now let's see that the subset with y(0) = 0 is also a vector space. Clearly, again, this subset is closed under vector addition and scalar multiplication, so that pretty much does it. Now if y(x) = A cos x + B cos 2x + C cos 3x + D cos 4x, then y(0) = A + B + C + D. If y(0) = 0, then D = −(A + B + C), which means there are only 3 degrees of freedom: knowing the values of A, B, and C determines the value of D. So we could write
y(x) = A(cos x − cos 4x) + B(cos 2x − cos 4x) + C(cos 3x − cos 4x).
Interestingly, you can use the sum-to-product identities to write these differences as
2 sin(5x/2) sin(3x/2), 2 sin 3x sin x, 2 sin(7x/2) sin(x/2).
So you could take the basis as cos x − cos 4x, cos 2x − cos 4x, cos 3x − cos 4x, or as 2 sin(5x/2) sin(3x/2), 2 sin 3x sin x, 2 sin(7x/2) sin(x/2).

6. Discuss the four subspaces associated with A = 2 3 3 3 7 6 and the second matrix.
SOLN: The row reduction 2 2 3 2 2 3 3 2 2 ~ ~ ~ shows that the rank is 3 and the null space has dimension 4 − 3 = 1. The column space has basis 2 3 3, 3 7 6, comprised of the columns of A that correspond to the pivot elements of the rref form of A. The null space is spanned by the vector 2. The row space is also 3-dimensional, spanned by any of the first three rows of a matrix row-equivalent to A (provided you haven't done some weird permutation). The left null space... hmm, clearly it's one-dimensional. What vector spans the space? For that we need to look at Aᵀ: 3 3 2 2 2 3 7 ~ ~ ~ 3 6 2 3, so the left null space is spanned by (2,,-,). As for the second matrix, 2 2 2 7/4 /2 4 3 ~ ~ ~ 3 7 7 4 /2 4 /2, so it has full rank, meaning n = m = r = 4, and both its null space and left null space consist of just the zero vector.
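The claimed basis for the y(0) = 0 subspace, and the sum-to-product rewriting, can be spot-checked numerically. A Python sketch, assuming the basis functions cos kx − cos 4x for k = 1, 2, 3:

```python
import math

# Each basis function cos(kx) - cos(4x) vanishes at x = 0, and the
# sum-to-product forms 2 sin((k+4)x/2) sin((4-k)x/2) agree with them.
basis = [lambda x, k=k: math.cos(k * x) - math.cos(4 * x) for k in (1, 2, 3)]

alt = [lambda x: 2 * math.sin(2.5 * x) * math.sin(1.5 * x),   # k = 1
       lambda x: 2 * math.sin(3.0 * x) * math.sin(1.0 * x),   # k = 2
       lambda x: 2 * math.sin(3.5 * x) * math.sin(0.5 * x)]   # k = 3

for f in basis:
    assert abs(f(0.0)) < 1e-12          # lies in the y(0) = 0 subspace
for f, g in zip(basis, alt):
    for x in (0.3, 1.1, 2.7):
        assert abs(f(x) - g(x)) < 1e-12  # identities hold at sample points
print("basis functions vanish at 0; identities check out")
```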

7. If V is the space of vectors satisfying 3 3 and 4 6 4, write a basis for its orthogonal complement.
SOLN: The trick here may be to rephrase the question in terms of our matrix language: 3 3 6 8 ~ 4 6 4 3 3. V is actually the null space of this matrix, so its orthogonal complement is the row space, whose basis can be taken as either pair of rows: 3 4 3 6 6 or 3 4 8 3.

8. p = Ax̂ is the vector in C(A) nearest to a given vector b. If A has independent columns, what equation determines x̂? What are all the vectors perpendicular to the error? What goes wrong if the columns of A are dependent?
SOLN: e = b − Ax̂ is the error in approximating b, and it is minimized when it is orthogonal to the column space: Aᵀ(b − Ax̂) = 0, that is, AᵀAx̂ = Aᵀb, so x̂ = (AᵀA)⁻¹Aᵀb, and we can be sure that the inverse exists because the columns of A are independent. The vectors perpendicular to the error are exactly the vectors in the column space C(A). If the columns of A are dependent, AᵀA is singular, so x̂ is not uniquely determined.

9. Use matrix methods to find the fourth-degree polynomial which is the best fit for the data (0,1), (1,3), (2,2), (3,5), (4,4), (5,7).
SOLN: Obviously we want to use the least-squares projection method described in the text. The general form of the equation is y = c0 + c1 x + c2 x^2 + c3 x^3 + c4 x^4, and the system of equations corresponding to the given data is described by the matrix equation
1 0  0   0   0         1
1 1  1   1   1         3
1 2  4   8  16   c  =  2
1 3  9  27  81         5
1 4 16  64 256         4
1 5 25 125 625         7
The system involves 6 equations in 5 unknowns, so it's overdetermined and may not have an exact solution. We could do row operations by hand or work the formula from #8 by hand, but it's a bit of a bear, so let's turn to Octave. But first, since Octave doesn't (or can't figure out how to) produce fractions in exact form, how about a go at the TI-89, a workhorse of yesterday:
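The normal equations AᵀAx̂ = Aᵀb from #8 can also be solved with a few lines of code. A Python sketch (pure standard library, standing in for the Octave and TI-89 work below), assuming the data points (0,1), (1,3), (2,2), (3,5), (4,4), (5,7):

```python
# Least-squares quartic fit via the normal equations B^T B c = B^T y,
# solved with plain Gaussian elimination.
xs = [0, 1, 2, 3, 4, 5]
ys = [1, 3, 2, 5, 4, 7]

B = [[x ** j for j in range(5)] for x in xs]          # 6x5 Vandermonde
BtB = [[sum(B[k][i] * B[k][j] for k in range(6)) for j in range(5)]
       for i in range(5)]
Bty = [sum(B[k][i] * ys[k] for k in range(6)) for i in range(5)]

def solve(M, b):
    """Gaussian elimination with partial pivoting, then back-substitution."""
    n = len(M)
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

c = solve(BtB, Bty)
print([round(v, 6) for v in c])   # constant term first
```

The exact answer works out to the fractions (283/252, 1045/756, −5/144, −161/1512, 1/48), matching the CAS results quoted below.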

As these screenshots suggest, x̂ = (283/252, 1045/756, −5/144, −161/1512, 1/48). How much fun is that? In Octave we have
octave:5> B = [1 0 0 0 0; 1 1 1 1 1; 1 2 4 8 16; 1 3 9 27 81; 1 4 16 64 256; 1 5 25 125 625]
B =
   1   0   0    0    0
   1   1   1    1    1
   1   2   4    8   16
   1   3   9   27   81
   1   4  16   64  256
   1   5  25  125  625
octave:6> inverse(B'*B)
ans =
   .99632   -.88783    .76389   -.25463    .2833
  -.88783   3.54387  -2.384259  3.683642  -.347222
   .76389  -2.384259  2.359375 -3.84838    .373264
  -.25463   3.683642 -3.84838   .237      -.2528
   .2833   -.347222   .373264  -.2528     .253
Ok, now for B'*[1 3 2 5 4 7]'. On the TI89: this shows that the least-squares 4th-degree polynomial fit is
y = (1/48)x^4 − (161/1512)x^3 − (5/144)x^2 + (1045/756)x + 283/252.
In Octave,
octave:7> xhat=inverse(B'*B)*B'*[1 3 2 5 4 7]'
xhat =
   1.123016
   1.382275
  -0.034722
  -0.106481
   0.020833
This is consistent with the TI89, since 1/48 ≈ 0.020833, 161/1512 ≈ 0.106481, 5/144 ≈ 0.034722, 1045/756 ≈ 1.382275, and 283/252 ≈ 1.123016.

Not a bad fit. The error is
e = [1 3 2 5 4 7]' − B*xhat.
This is easily computed on the TI89 or using Octave,
octave:9> [1 3 2 5 4 7]' - B*xhat
ans =
  -0.12302
   0.61508
  -1.23016
   1.23016
  -0.61508
   0.12302

10. Use the Gram-Schmidt method to find an orthonormal basis for the column space of 2 3 3 4 6 4.

SOLN: Start with A = a1, the first column. Then form B by subtracting from b its projection onto A: 2 2 3 4 2. But don't stop there; now subtract from the third column vector its projection onto A and its projection onto B: 2 3 6 2. And continue with D in the same fashion: 2 4 /5 2/5 2. Finally, normalize: 2/5 /5, 2 2 /7 2/35 3/35 2/35 /7. We can check this with the TI89:
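The recipe above, subtract the projections onto the earlier vectors and then normalize, is mechanical enough to script. A Python sketch with hypothetical stand-in columns (not the matrix from problem 10):

```python
import math

# Classical Gram-Schmidt on a list of column vectors; the columns here
# are hypothetical stand-ins chosen to be linearly independent.
cols = [(1.0, 2.0, 0.0, 3.0), (0.0, 3.0, 4.0, 6.0), (4.0, 1.0, 1.0, 0.0)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:                  # subtract projection onto each q
            p = dot(v, q)
            w = [wi - p * qi for wi, qi in zip(w, q)]
        norm = math.sqrt(dot(w, w))
        basis.append([wi / norm for wi in w])   # normalize
    return basis

Q = gram_schmidt(cols)
for i in range(len(Q)):
    for j in range(i, len(Q)):
        want = 1.0 if i == j else 0.0
        assert abs(dot(Q[i], Q[j]) - want) < 1e-12
print("orthonormal")
```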

Evidently, the QR factorization of A is
5 4 5 5 7 5 4 5 4 2 3 3 4 6 4 5 4 5 7 5 4 5 4 5 4 5 5 7 5 5 7 5 5 2 5 2 5 5 7 5 2 7 35 3 9 2 5 3 7 4 4 4 35 2 2 7 2 7 35 5 7 7 7 7

11. Let V be a finite-dimensional, nonzero vector space over a field F (F could be either R or C). An inner product on V is a function that takes each ordered pair (u, v) of elements of V to a number ⟨u, v⟩, and has the following properties:
Positivity: ⟨v, v⟩ ≥ 0 for all v in V;
Definiteness: ⟨v, v⟩ = 0 if and only if v = 0;
Additivity: ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ for all u, v, w in V;
Homogeneity: ⟨av, w⟩ = a⟨v, w⟩ for all scalars a and all v, w in V;
Conjugate symmetry: ⟨u, v⟩ is the complex conjugate of ⟨v, u⟩ for all u, v in V.
Recall that every real number is its own complex conjugate, so if F = R, then the conjugate symmetry condition is automatic. Let C[0,1] denote the real-valued continuous functions on [0,1]. Define an inner product by ⟨f, g⟩ = ∫₀¹ f(x)g(x) dx.

a. Show that this is an inner product space.
SOLN:
Positivity: ⟨f, f⟩ = ∫₀¹ f(x)² dx ≥ 0, since f(x)² ≥ 0 for all x.
Definiteness: ⟨f, f⟩ = 0 if and only if f = 0, since a continuous nonnegative function with zero integral must be identically zero.
Additivity: ⟨f + g, h⟩ = ⟨f, h⟩ + ⟨g, h⟩ follows because the integral of a sum is the sum of the integrals.
Homogeneity: ⟨af, g⟩ = a⟨f, g⟩: you can factor a constant out of an integrand, which you can do because of the distributive property for multiplication over addition.
Conjugate symmetry: ⟨f, g⟩ = ⟨g, f⟩ for all f, g. We're just working over the reals, so every value is its own conjugate and there are no worries.
b. Apply the Gram-Schmidt method to get an orthonormal basis for the subspace spanned by 1, sin x, cos x.
SOLN: Start with the constant function: ⟨1, 1⟩ = ∫₀¹ 1 dx = 1, so the first basis function is f(x) = 1. Next subtract from sin x its projection onto f: G(x) = sin x − ∫₀¹ sin t dt = sin x − (1 − cos 1), then normalize. Then subtract from cos x its projections onto f and onto G: H(x) = cos x − ∫₀¹ cos t dt − (⟨cos x, G⟩/⟨G, G⟩)G(x) = cos x − sin 1 − (⟨cos x, G⟩/⟨G, G⟩)G(x), then normalize. I don't see a great simplification for this, so I guess that's about it? But we need to check that these are orthogonal... Ruh roh, resort to Mathematica. Define f, G, and H:
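The orthogonality check in 11b can also be done without a CAS: discretize the inner product ⟨f, g⟩ = ∫₀¹ f(x)g(x) dx with a midpoint rule and run Gram-Schmidt on {1, sin x, cos x}. A Python sketch:

```python
import math

# Numeric check of 11b: Gram-Schmidt on {1, sin, cos} under
# <f,g> = integral of f*g over [0,1], approximated by the midpoint
# rule on n panels.
n = 2000
xs = [(k + 0.5) / n for k in range(n)]

def ip(f, g):
    """Midpoint-rule approximation of the integral of f*g over [0,1]."""
    return sum(f(x) * g(x) for x in xs) / n

funcs = [lambda x: 1.0, math.sin, math.cos]
basis = []
for f in funcs:
    coeffs = [ip(f, q) for q in basis]
    def w(x, f=f, coeffs=coeffs, qs=list(basis)):
        # f minus its projections onto the earlier orthonormal functions
        return f(x) - sum(c * q(x) for c, q in zip(coeffs, qs))
    nrm = math.sqrt(ip(w, w))
    basis.append(lambda x, w=w, nrm=nrm: w(x) / nrm)

# Pairwise (discrete) inner products should vanish.
for i in range(3):
    for j in range(i + 1, 3):
        assert abs(ip(basis[i], basis[j])) < 1e-6
print("pairwise inner products ~ 0")
```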

Then check that they're orthogonal:
Integrate[f[x]*G[x], {x, 0, 1}]
Integrate[f[x]*H[x], {x, 0, 1}]
Integrate[G[x]*H[x], {x, 0, 1}]
All three come out to 0. Hooray!