Chapter 1 BACKGROUND


1.1 Introduction

In many areas of mathematics and in applications of mathematics, it is often necessary to infer information about some function using only a limited set of sample values of that function. To this end, various methods have been developed that estimate or approximate the sampled data so as to recover some of the characteristics of the underlying function. Invariably, the success of the process depends on the regularity of the underlying function being sampled, as does the computational efficiency of the process. Collectively, these techniques are referred to as approximation. The goal is to minimize the error between the sampled function and the approximating function. A special case of particular importance in approximation is interpolation, in which the interpolating function, or interpolant, is required to exactly match the sampled function on the sample set. That is, given the function $f$ sampled at the points $\xi$, and an interpolant $s$, it is required that

$$f(\xi) = s(\xi) \qquad (1.1)$$

for all $\xi$ in the sample. In general, approximation methods do not have to satisfy this condition, but instead typically satisfy a constraint condition such as

$$\|f - s\| < \varepsilon \qquad (1.2)$$

for some appropriate norm, as the number of sample points gets sufficiently large. Each of these two methods has advantages, and the decision as to which is used often depends on the specific application for which the reconstruction is intended. For instance, when only a general representation of the sampled function is necessary, some form of approximation such as best uniform approximation [] or best mean square fit [] may be warranted. The method of least squares in the linear case, commonly referred to as the best-fit line, is a simple example of this. Interpolation methods have the obvious benefit of exactly representing the underlying function at the sample points $\xi$. Historically, interpolation, as a technique for evaluating analytic functions, was used in the case of look-up tables. By considering the lines between each of the points in the table, other values of the function can be generated with only a few simple calculations. As long as enough sample

values are stored, other approximate values can be generated so as to improve the accuracy of the estimates for values lying between tabulated values. More sophisticated methods of interpolation involve the use of higher-degree polynomials or trigonometric polynomials. In spline interpolation, polynomials are used to estimate function values between the sample values in a piecewise manner, and care must be taken to ensure the requisite continuity and, as necessary, the differentiability of the spline across the sample points, or nodes. Using a single polynomial to interpolate the sampled function eliminates this issue, but can create other problems due to the oscillation of the interpolant between the interpolating points.

Interpolation involves using a linear combination of functions referred to as a set of basis functions: for the basis $B = \{\varphi_i\}_{i=1}^{n}$,

$$f(x) \approx \sum_{j=1}^{n} \lambda_j \varphi_j(x), \qquad (1.3)$$

where the $\lambda_j$ are to be determined. Although any collection of linearly independent functions can be used in this fashion, some have properties that make them particularly useful. The issue is which basis is most effective, and how it can be computed efficiently. In this paper, we examine a number of different basis functions in order to demonstrate the merits of each, as well as to examine their relations to one another. In particular, radial basis functions and the Bernstein functions will be analyzed in depth, and their suitability for interpolating data will be assessed. Other types of basis functions, including polynomial and trigonometric, will also be discussed in order to provide a framework in which to view the primary subjects of study.

1.2 Introduction to Radial Basis Functions

When interpolating data sampled from some underlying function, it has historically been common to use a polynomial basis, because of the simplicity of working with polynomials of low degree and the resulting reduction in computational cost. Radial basis function (RBF) interpolation is a method that has recently become important []. These basis functions have the form

$$\varphi(\|x - \xi\|), \qquad \xi \in \Xi, \qquad (1.4)$$

where $\Xi$ is the set of data points on which we are interpolating (the sample set). Thus, a linear combination of these will be of the form

$$s(x) = \sum_{\xi \in \Xi} \lambda_\xi \varphi_\xi(\|x - \xi\|), \qquad (1.5)$$

where the $\lambda_\xi$ are real values to be determined, and the $\varphi_\xi$ are the basis functions associated with the points $\xi$. There are a number of properties that are readily apparent. The first, from which the name RBF is derived, is that each basis function depends only on the distance of $x$ from $\xi$, not on the particular value of $x$. Since all values of $x$ that are equidistant from $\xi$ will have the same value of $\varphi(\|x - \xi\|)$, $\xi$ is usually referred to as the center of $\varphi$. In the two-dimensional case, this is referred to as symmetry about the line $x = \xi$. In order to simplify notation, we may sometimes write

$$r = \|x - \xi\|. \qquad (1.6)$$

Notice that each element of the data sample is associated with an individual RBF. Therefore, there will be as many basis functions as there are data points. In order for functions of this type to be easily computed, each must be relatively simple. Typical examples include

$$\varphi(r) = r^2 \log r, \quad \text{thin-plate splines;} \qquad (1.7)$$

$$\varphi(r) = \sqrt{r^2 + c^2}, \quad \text{multiquadrics; and} \qquad (1.8)$$

$$\varphi(r) = e^{-\alpha r^2}, \quad \text{Gaussians,} \qquad (1.9)$$

where $c$ and $\alpha$ are positive parameters. In this thesis, we examine the Gaussian basis function. In addition, we will also consider the error function,

$$\varphi(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\,dt. \qquad (1.10)$$

Finally, since the basis functions depend on the norm $\|x - \xi\|$, and not just the difference $x - \xi$, the dimension of the sample space is of little importance. Although in this paper only one-dimensional domains are considered, these functions can just as easily be extended to multi-dimensional domains.

1.3 Radial Basis Function Interpolation

In this construction we assume that the sample set $\Xi$ is finite and contains $n$ uniformly spaced elements $\xi_j$ on the interval $[a, b]$. The final form of the interpolating function consists of a linear combination of the basis functions $\varphi_j$, each of which is weighted by a constant $\lambda_j$:

$$s(x) = \lambda_1 \varphi_1(\|x - \xi_1\|) + \lambda_2 \varphi_2(\|x - \xi_2\|) + \cdots + \lambda_n \varphi_n(\|x - \xi_n\|). \qquad (1.11)$$
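For concreteness, the following is a minimal Python sketch of the example basis functions (1.7)–(1.10); the function names and parameter defaults are illustrative choices, not values fixed by the text.

```python
import numpy as np
from scipy.special import erf

def thin_plate(r):
    # phi(r) = r^2 log r, with the removable singularity at r = 0 set to 0
    return np.where(r > 0, r**2 * np.log(np.maximum(r, 1e-300)), 0.0)

def multiquadric(r, c=1.0):
    # phi(r) = sqrt(r^2 + c^2), c a positive parameter
    return np.sqrt(r**2 + c**2)

def gaussian(r, alpha=0.5):
    # phi(r) = exp(-alpha r^2), alpha a positive parameter
    return np.exp(-alpha * r**2)

def erf_basis(x):
    # the error function (1.10), available directly in scipy
    return erf(x)
```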

In order to ensure that this linear combination interpolates the data, all that remains is to find the values of the weights $\lambda_j$ so that $s(\xi_i) = f(\xi_i)$ for each $\xi_i$ in the sample set $\Xi$. By evaluating $s(x)$ at each element of the sample set, the following $n \times n$ system of equations is created,

$$\lambda_1 \varphi_1(\|\xi_1 - \xi_1\|) + \lambda_2 \varphi_2(\|\xi_1 - \xi_2\|) + \cdots + \lambda_n \varphi_n(\|\xi_1 - \xi_n\|) = f(\xi_1),$$
$$\lambda_1 \varphi_1(\|\xi_2 - \xi_1\|) + \lambda_2 \varphi_2(\|\xi_2 - \xi_2\|) + \cdots + \lambda_n \varphi_n(\|\xi_2 - \xi_n\|) = f(\xi_2), \qquad (1.12)$$
$$\vdots$$
$$\lambda_1 \varphi_1(\|\xi_n - \xi_1\|) + \lambda_2 \varphi_2(\|\xi_n - \xi_2\|) + \cdots + \lambda_n \varphi_n(\|\xi_n - \xi_n\|) = f(\xi_n),$$

where each row consists of the interpolant evaluated at a different element of $\Xi$, with the $i$-th row evaluated at $\xi_i$. This system of equations can be written in matrix notation as

$$A\lambda = f, \qquad (1.13)$$

where $A$ is the $n \times n$ matrix with components $a_{ij} = \varphi_j(\|\xi_i - \xi_j\|)$, and $\lambda$ and $f$ are the vectors with components $\lambda_j$ and $f(\xi_i)$ respectively. Finding a solution to this system requires the invertibility of the matrix $A$, and Schoenberg [4] and Micchelli [2] provide the primary results demonstrating this for many of the basis functions considered in this thesis. Their results hinge on the monotonicity of the functions in the basis. Also, while it is theoretically possible to find $\lambda$ by inverting $A$, the calculations required to do so are often prohibitively expensive and numerically difficult. These are important considerations for developing effective and efficient numerical interpolation algorithms based on RBFs.

Consider an example in which a smooth, well-behaved function, in this case $f(x) = \cos(x)$, is sampled on a uniformly spaced grid on $[-5, 5]$. Radial basis functions of the Gaussian variety, e.g.,

$$\varphi_j(x) = e^{-\frac{1}{2}(x - \xi_j)^2}, \qquad (1.14)$$

are used to interpolate. Using a very small sample size to establish a baseline for comparison, e.g., $n = 3$, the interpolant provides only an inaccurate representation of $f(x)$, but it does provide a simple means of demonstrating the techniques used. Sampling $[-5, 5]$ uniformly yields $\Xi = \{-5, 0, 5\}$. Since these points are also the centers of the radial basis functions, the interpolant is

$$s(x) = \lambda_1 \varphi_1(\|x - (-5)\|) + \lambda_2 \varphi_2(\|x - 0\|) + \lambda_3 \varphi_3(\|x - 5\|). \qquad (1.15)$$

Interpolation requires that

$$s(\xi_i) = f(\xi_i) = \cos(\xi_i) \qquad (1.16)$$

for each $\xi_i \in \Xi$.

[Figure 1.1: The function cos(x), along with the RBF interpolant generated by sampling at 3 uniformly spaced points on [−5, 5].]

Evaluating (1.16) at each of the three sample points gives the system of three equations needed to find the three weights $\lambda_j$, $j = 1, 2, 3$. Thus,

$$\lambda_1 \varphi_1(\|-5 - (-5)\|) + \lambda_2 \varphi_2(\|-5 - 0\|) + \lambda_3 \varphi_3(\|-5 - 5\|) = \cos(-5), \qquad (1.17)$$
$$\lambda_1 \varphi_1(\|0 - (-5)\|) + \lambda_2 \varphi_2(\|0 - 0\|) + \lambda_3 \varphi_3(\|0 - 5\|) = \cos(0), \qquad (1.18)$$
$$\lambda_1 \varphi_1(\|5 - (-5)\|) + \lambda_2 \varphi_2(\|5 - 0\|) + \lambda_3 \varphi_3(\|5 - 5\|) = \cos(5), \qquad (1.19)$$

or in matrix notation,

$$\begin{pmatrix} \varphi_1(0) & \varphi_2(5) & \varphi_3(10) \\ \varphi_1(5) & \varphi_2(0) & \varphi_3(5) \\ \varphi_1(10) & \varphi_2(5) & \varphi_3(0) \end{pmatrix} \begin{pmatrix} \lambda_1 \\ \lambda_2 \\ \lambda_3 \end{pmatrix} = \begin{pmatrix} \cos(-5) \\ \cos(0) \\ \cos(5) \end{pmatrix}. \qquad (1.20)$$

This system is easily solved. Since the off-diagonal entries $e^{-25/2} \approx 4 \times 10^{-6}$ and $e^{-50}$ are tiny, the weights are, to four decimal places, essentially the sampled values:

$$\lambda_1 \approx 0.2837, \qquad (1.21)$$
$$\lambda_2 \approx 1.0000, \qquad (1.22)$$
$$\lambda_3 \approx 0.2837, \qquad (1.23)$$

and so the interpolating function is given by

$$s(x) \approx 0.2837\, e^{-\frac{1}{2}(x+5)^2} + 1.0000\, e^{-\frac{1}{2}x^2} + 0.2837\, e^{-\frac{1}{2}(x-5)^2}. \qquad (1.24)$$

As expected, this does not provide a very accurate representation of the cosine function; however, by increasing the sample size to $n = 30$, a much better interpolant is obtained, as shown in Fig. 1.1 and Fig. 1.2.
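The construction above is small enough to reproduce directly. The following is a minimal sketch (in Python with NumPy, an illustrative choice) of building and solving the 3-point Gaussian RBF system:

```python
import numpy as np

xi = np.linspace(-5.0, 5.0, 3)               # sample set Xi = {-5, 0, 5}
f = np.cos(xi)                               # sampled values f(xi_i)

phi = lambda r: np.exp(-0.5 * r**2)          # Gaussian RBF (1.14)
A = phi(np.abs(xi[:, None] - xi[None, :]))   # a_ij = phi(|xi_i - xi_j|)
lam = np.linalg.solve(A, f)                  # weights lambda_j from (1.20)

def s(x):
    """Evaluate the interpolant s(x) = sum_j lambda_j phi(|x - xi_j|)."""
    x = np.asarray(x, dtype=float)
    return phi(np.abs(x[..., None] - xi)) @ lam

# The interpolant matches f at the centers by construction:
assert np.allclose(s(xi), f)
```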

[Figure 1.2: The function cos(x), along with the RBF interpolant generated by sampling at 30 uniformly spaced points on [−5, 5].]

1.4 Convergence

At this point, it seems reasonable to inquire about the accuracy of the approximation obtained using these basis functions. The error between the interpolant $\varphi$ and the original function $f$ is given by $\|\varphi - f\|_1$, i.e., by

$$\int_a^b |\varphi(x) - f(x)|\,dx. \qquad (1.25)$$

Rather than work directly with (1.25), it is more convenient to estimate this integral using a simplified quadrature, since for any integrable $g$,

$$\int_a^b g(x)\,dx = \lim_{m \to \infty} \sum_{i=1}^{m} g(x_i)\,\Delta x_i, \qquad (1.26)$$

where $\{x_i\}_{i=1}^{m}$ are points at which $g$ is evaluated. On a uniform mesh we have

$$\Delta x_i = \frac{b - a}{m}, \qquad (1.27)$$

and thus

$$\int_a^b g(x)\,dx = \lim_{m \to \infty} \frac{b - a}{m} \sum_{i=1}^{m} g(x_i). \qquad (1.28)$$

Thus, letting $g(x) = |f(x) - \varphi(x)|$, an estimate of the $L_1$ error in interpolating $f$ by $\varphi$ can be found by calculating the right-hand side of (1.28) using a suitably dense set $\{x_i\}$ in the domain. Explicitly,

$$E \approx \frac{b - a}{m} \sum_{i=1}^{m} |f(x_i) - \varphi(x_i)|. \qquad (1.29)$$
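A sketch of the error estimate (1.29), reusing the interpolant s from the previous sketch; the helper name l1_error is illustrative:

```python
import numpy as np

def l1_error(f, s, a=-5.0, b=5.0, m=1000):
    # estimate of (1.29): average |f - s| on a dense uniform grid of m
    # points, scaled by the interval length b - a
    x = np.linspace(a, b, m)
    return (b - a) / m * np.sum(np.abs(f(x) - s(x)))

E = l1_error(np.cos, s)   # roughly 0.5 for the 3-point interpolant above
```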

In practice, choosing an $m$ that is comparatively larger than the sample size from which the interpolant was created is sufficient for smooth $f$. Using this method with $m = 1000$, the error in interpolating $\cos(x)$ on $[-5, 5]$ with the Gaussian RBF is 0.53 when the sample size is $n = 3$, and is several orders of magnitude smaller when the sample size is $n = 30$. This is quite a decrease considering how slightly the sample size was increased, and it is consistent with the results in the literature regarding the rapid rates of convergence of RBFs [3]. The extent to which the error can be decreased, however, is limited. In this example, the error continues to decrease as the sample size $n$ increases until $n$ is approximately 30. This can be seen in Fig. 1.3, which shows the error $E$ as a function of the grid size $h = 10/(n - 1)$. The function $y = 0.03\,h^{10.29}$ is also shown in order to illustrate the convergence rate of the interpolant for $6 \le n \le 29$. Note that the slope of the error curve $E$ is always greater than that of this line; thus it represents a lower bound on the rate of convergence.

[Figure 1.3: The error between cos(x) and the Gaussian RBF (1.9) interpolant as a function of the grid size. These values were generated using sample sizes from n = 3 to n = 50, in increments of one, of uniformly spaced points on [−5, 5]. The function $y = 0.03\,h^{10.29}$ was found by calculating the linear least-squares fit to the data (log(h), log(E)), and is shown in order to illustrate the approximate convergence rate of the interpolant.]

However, when the sample size is larger than 30, the error begins to increase rapidly. The divergence of $E$ is due to ill-conditioning of the matrix $A$, and it prevents the interpolant from converging. Clearly this is a significant issue.

1.5 Ill-Conditioning

As the number of sample points used to generate the RBF interpolant increases, the rows of the matrix $A$ can become increasingly similar. More precisely, some rows become numerically indistinguishable from a constant multiple of one or more other rows. This effect is known as ill-conditioning of the matrix. Because of finite precision, as the elements of the matrix become smaller and the rows become more similar, rounding errors begin to contribute large errors in inverting the matrix. This results in a corruption of the results when attempting to solve $A\lambda = f$.

Geometrically, this effect is caused by a system of equations that represents nearly parallel lines, planes, or higher-dimensional hyperplanes. Since the solution of such a system of equations is the set of values where the graphs intersect, it should be apparent how the solution can change drastically when one of the equations is perturbed even a slight amount. A simple example illustrates this. Consider the system of equations

$$499x - 500y = 1500, \qquad (1.30)$$
$$999x - 1000y = 2000, \qquad (1.31)$$

with the associated matrix form

$$\begin{pmatrix} 499 & -500 \\ 999 & -1000 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1500 \\ 2000 \end{pmatrix}. \qquad (1.32)$$

The graph of this system is shown in Fig. 1.4, where the nearly parallel nature of the lines can clearly be seen. The exact solution of this system is

$$x = -1000, \quad y = -1001. \qquad (1.33)$$

However, if we consider the same system with one of the coefficients slightly perturbed, say by replacing $999x$ with $1000x$ in (1.31), the exact solution becomes

$$x = -500, \quad y = -502. \qquad (1.34)$$

If the system being solved is computed analytically, and the accuracy of the coefficients can be assured, then this doesn't present a major obstacle to calculating the solutions of a system. For large systems, however, this is impractical, and so solutions must be found numerically. To demonstrate the problems that can arise in doing so, we again examine the ill-conditioned $2 \times 2$ matrix (1.32). When this is solved by calculating

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 499 & -500 \\ 999 & -1000 \end{pmatrix}^{-1} \begin{pmatrix} 1500 \\ 2000 \end{pmatrix} \qquad (1.35)$$

[Figure 1.4: The graph of the ill-conditioned system consisting of the lines 499x − 500y = 1500 and 999x − 1000y = 2000. The nearly parallel nature of the lines means that a small perturbation in any of the coefficients can cause their point of intersection, and thus the solution, to change dramatically.]

numerically using 10 significant digits, the solution that is found lies farther from the true solution than the one calculated after manually manipulating the coefficients. Thus, machine round-off errors have an effect similar to that caused by changing the coefficients. In both cases, the system is perturbed by a small amount, and the result is a disproportionate, unacceptable error in the solution.

By comparing the size of the matrix $A$ to the size of its inverse, a parameter $\kappa$, known as the condition number, can be associated with $A$. This is given by

$$\kappa = \|A\|\,\|A^{-1}\|, \qquad (1.38)$$

where $\|\cdot\|$ is any matrix norm subordinate to a vector norm. This defines a bound for the variation in the solution to a system relative to a change in the matrix. Specifically, if $\tilde{A}$ is a small perturbation in the system $Ax = b$, and $\tilde{x}$ is the solution to the perturbed system $(A + \tilde{A})\tilde{x} = b$, then

$$\frac{\|x - \tilde{x}\|}{\|x\|} \lesssim \kappa \left( \frac{\|\tilde{A}\|}{\|A\|} \right), \qquad (1.39)$$

where the notation $\lesssim$ is used to indicate that this inequality only holds as a first-order approximation. A detailed development of this parameter $\kappa$, along with a proof of the above inequality, is given in [].
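The 2 × 2 example is easy to check numerically; the following sketch (a hypothetical verification, not taken from the thesis itself) solves the exact and perturbed systems and prints the condition number:

```python
import numpy as np

A = np.array([[499.0, -500.0],
              [999.0, -1000.0]])
b = np.array([1500.0, 2000.0])

print(np.linalg.solve(A, b))       # exact solution: x = -1000, y = -1001
print(np.linalg.cond(A))           # kappa is on the order of 5e3

A_pert = A.copy()
A_pert[1, 0] = 1000.0              # replace 999x by 1000x, as in (1.34)
print(np.linalg.solve(A_pert, b))  # solution jumps to x = -500, y = -502
```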

Clearly, as demonstrated in (1.39), $\kappa$ is proportional to the largest possible error in solving the general interpolation equation. Thus, it is able to serve as an indicator of when ill-conditioning errors will occur. This can be seen by revisiting the Gaussian RBF (1.9) interpolant for $\cos(x)$. As seen in Fig. 1.5, the condition number of the interpolating matrix $A$ increases steadily as $n$ increases (i.e., as $h$ decreases). When the condition number finally becomes too large, the interpolating error stops decreasing and begins increasing as well.

[Figure 1.5: The error E in interpolating cos(x) using the Gaussian RBF interpolant (1.9), along with the condition number κ of the associated interpolating matrix A. These values were generated using sample sizes of n = 3, 4, ..., 50, consisting of uniformly spaced points on [−5, 5], and with α = 1/2 as the scaling factor in the interpolant.]

Since ill-conditioning is often caused by the occurrence of small elements in the matrix $A$, and by the round-off errors that occur when numerically manipulating these elements, it seems reasonable to consider scaling the matrix by some constant before inverting it in order to avoid this problem. In fact, this method does prevent the matrix from becoming ill-conditioned, but at the cost of lowering the convergence rate of the interpolant. Thus, the number of sample points $n$ necessary to obtain a suitable interpolant increases. We illustrate this effect by again examining the Gaussian RBF interpolation of $\cos(x)$ on a uniform grid of $n$ points from $[-5, 5]$. In Fig. 1.6, the error caused by using

$$\varphi_j(x) = e^{-\frac{1}{2}(x - \xi_j)^2} \qquad (1.40)$$

is plotted along with the error caused by using instead

$$\varphi_j(x) = e^{-10(x - \xi_j)^2} \qquad (1.41)$$

and

$$\varphi_j(x) = e^{-1000(x - \xi_j)^2}. \qquad (1.42)$$

[Figure 1.6: The error between cos(x) and the Gaussian RBF (1.9) interpolant on [−5, 5] as a function of the grid size h = 10/(n − 1), as n varies from 3 to 50. The difference between scaling factors of α = 1/2, α = 10, and α = 1000 in the basis functions is illustrated. Note that as α increases in the Gaussian RBFs, $\varphi_j(x) = e^{-\alpha(x - \xi_j)^2}$, the rate of convergence of the interpolant $s(x) = \sum_{j=1}^{n} \lambda_j \varphi_j(x)$ to the function cos(x) decreases significantly.]

Clearly, the interpolant which uses the scaling factor of 1000 is not affected by the ill-conditioning problems that the interpolants scaled by 1/2 and by 10 have, but its error decreases at an extremely slow rate. In fact, while scaling can be used to temporarily avoid the problem of ill-conditioning, the error inevitably begins to increase and the interpolant diverges from the original function. Note that there is a significant difference not only in the convergence rates of the interpolants scaled by $\alpha = 1/2$, $\alpha = 10$, and $\alpha = 1000$, but also in the minimum error for this range of grid sizes: the minimum error for $\alpha = 1000$, $E_{\min} = 1.3764$, is far larger than the minima attained with $\alpha = 1/2$ and $\alpha = 10$. RBFs with $\alpha$ as large as 1000 thus become unusable, as convergence is questionable. Indeed, as shown in Fig. 1.6, these barely converge.
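The trade-off between conditioning and convergence is easy to explore numerically. This is a minimal sketch (the function name interp_error is illustrative) that repeats the Gaussian RBF interpolation of cos(x) for the three scaling factors discussed above:

```python
import numpy as np

def interp_error(n, alpha, a=-5.0, b=5.0, m=1000):
    # build the Gaussian RBF interpolant on n uniform points and return
    # the L1 error estimate (1.29) and the condition number of A
    xi = np.linspace(a, b, n)
    phi = lambda r: np.exp(-alpha * r**2)
    A = phi(np.abs(xi[:, None] - xi[None, :]))
    lam = np.linalg.solve(A, np.cos(xi))
    x = np.linspace(a, b, m)
    s = phi(np.abs(x[:, None] - xi)) @ lam
    return (b - a) / m * np.sum(np.abs(np.cos(x) - s)), np.linalg.cond(A)

for alpha in (0.5, 10.0, 1000.0):
    E, kappa = interp_error(n=30, alpha=alpha)
    print(alpha, E, kappa)  # larger alpha: better conditioning, slower convergence
```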

Chapter 2 STOCHASTIC INTERPOLATION

2.1 Introduction to Bernstein Functions

Approximation using Bernstein polynomials has been of interest over the long history since these polynomials were first put forward. For a function $f(x)$ sampled at uniformly spaced points $\xi_k = k/n$, $k = 0, \ldots, n$ on $[0, 1]$, the $n$-th Bernstein polynomial approximation is given by

$$B_n(f; x) = \sum_{k=0}^{n} f(\xi_k) \binom{n}{k} x^k (1 - x)^{n-k}. \qquad (2.1)$$

Originally, these polynomials were presented by Sergei Natanovich Bernstein in 1912 [2] as the cornerstone of his constructive proof of the Weierstrass approximation theorem, which states that every continuous function on a closed interval of $\mathbb{R}$ can be uniformly approximated by a polynomial function. These polynomials have a number of interesting properties. The most obvious, from the statement above, is that they are uniformly convergent. Also, it is readily apparent that they are $C^\infty$, i.e., infinitely differentiable, since they are composed of linear combinations of a polynomial basis. Other interesting properties, such as area preservation and monotonicity, have also been studied.

Unfortunately, there are intrinsic difficulties in using these approximants. Notably, these approximations have remarkably slow convergence rates, as low as $O(h^{1/2})$. In addition, they require the use of a uniform mesh. By generalizing the Bernstein polynomials, however, new approximating functions can be found that eliminate some of these difficulties. One particular extension, which results in uniform convergence even at the endpoints of the domain, can be found by replacing the binomial distribution in the Bernstein polynomials with the normal probability density function. The resulting functions are conveniently referred to as Bernstein functions, and have the form

$$K_n(f, \alpha; x) = \sum_{k=0}^{n} \frac{y_k}{2} \left[ \operatorname{erf}\!\left( \frac{z_{k+1} - x}{\sqrt{2\alpha/n}} \right) - \operatorname{erf}\!\left( \frac{z_k - x}{\sqrt{2\alpha/n}} \right) \right], \qquad (2.2)$$

where the error function used in constructing these functions is given by

$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\,dt. \qquad (2.3)$$

In (2.2), we have the Bernstein function for a function $f(\xi_k) = y_k$ with the adjustable parameter $\alpha$. The $z_k$ terms are the midpoints of the cell boundaries, that is, $z_{k+1} = (\xi_{k+1} + \xi_k)/2$, where the function $f$ is sampled on $\xi_k \in [0, 1]$, $k = 0, \ldots, n$, and $z_0 = -\infty$ and $z_{n+1} = \infty$. Note that the $\xi_k$ need not be uniformly spaced, unlike in (2.1). For functions defined on domains other than $[0, 1]$, it is a simple matter to map the domain so that it corresponds to the unit interval.

As is typical among various approximation techniques, the resulting approximant is composed of a linear combination of a particular set of basis functions. However, the primary interest is in interpolation, not approximation. Interpolation using Bernstein functions in its most elementary form is achieved as shown for interpolation using radial basis functions. Specifically, it is still based upon solving the equation

$$A\lambda = f \qquad (2.4)$$

for $\lambda$, where $\lambda$ and $f$ are vectors with $n + 1$ elements composed of the basis function coefficients $\lambda_j$ and the values $f(\xi_i)$ respectively, and where the entries of the $(n+1) \times (n+1)$ matrix $A$ are given by

$$a_{ij} = \frac{1}{2} \left[ \operatorname{erf}\!\left( \frac{z_{j+1} - \xi_i}{\sqrt{2\alpha/n}} \right) - \operatorname{erf}\!\left( \frac{z_j - \xi_i}{\sqrt{2\alpha/n}} \right) \right]. \qquad (2.5)$$

Notice that the dimension of the row and column space of the matrix $A$ is one larger than in the case of RBF interpolation. This is because the Bernstein polynomials and Bernstein functions are indexed from 0 to $n$, corresponding to a sample size of $n + 1$. As in the case of RBF interpolation, the interpolant is constructed as

$$s(x) = \sum_{i=0}^{n} \lambda_i \varphi_i(x), \qquad (2.6)$$

where the $\varphi_i$ are the erf-difference basis functions whose values at the sample points form the entries (2.5); as is evident from (2.6), these are the basis functions for the interpolation scheme. Having observed this similarity between the two approaches, it is worthwhile to examine a reformulation of Bernstein function interpolation that leads to a more general view of the interpolation process as the product of two steps, which will be denoted stochastic interpolation. If we examine the $\lambda_i$, i.e., the coefficients that were obtained to achieve interpolation, we can reinterpret them as the pre-image of $f$, i.e., $A^{-1}f = \lambda$. In a sense these are the values of the vector $f$ that are mapped to $f$ under the action of the mollifier $A$. Thus

$$f = A\lambda = A(A^{-1}f) \qquad (2.7)$$
$$= f, \qquad (2.8)$$

and in the language of linear algebra this trivial relationship, combined with (2.6), shows that we can equally well construct the interpolant $s$ using

$$s(x) = B_{m+1,n+1} A^{-1}_{n+1,n+1} f. \qquad (2.9)$$

In (2.9) the matrix $B_{m+1,n+1}$ is constructed in the same manner as the matrix $A_{n+1,n+1}$ was, i.e., using (2.5), except that now the points are selected not as the points $\xi_i$ at which the function $f$ was sampled, but rather as the set of points $\{x_i\}$, $i = 0, \ldots, m$, at which the interpolant $s$ is evaluated. Then the entries of $B$ are given by

$$b_{ij} = \frac{1}{2} \left[ \operatorname{erf}\!\left( \frac{z_{j+1} - x_i}{\sqrt{2\alpha/n}} \right) - \operatorname{erf}\!\left( \frac{z_j - x_i}{\sqrt{2\alpha/n}} \right) \right]. \qquad (2.10)$$

The development of this interpolating technique is further detailed in Section 2.4.

2.2 Convergence of Bernstein Function Interpolants

In contrast to the Bernstein polynomials, the associated Bernstein functions can be made to approximate the function being sampled quite well by adjusting the parameter $\alpha$. Equally important, the associated interpolants are rapidly convergent. As illustrated in Fig. 2.1, the Bernstein function interpolant is indistinguishable from the cosine function on $[-5, 5]$ with a sample size as small as 5.

[Figure 2.1: The Bernstein function interpolant K₄(x) for cos(x) on [−5, 5], constructed using α = 3/4 and a sample of 5 uniformly spaced points.]

As in the case of RBF interpolation, it is important to determine just how well the Bernstein functions can perform when attempting to recover a function that has been sampled.
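The following is a minimal sketch of this two-step construction, assuming the variance of the underlying normal distribution is σ² = α/n, so that the erf arguments in (2.5) and (2.10) are scaled by √(2α/n); the helper name bernstein_matrix and the mapping of [−5, 5] onto [0, 1] are illustrative choices:

```python
import numpy as np
from scipy.special import erf

def bernstein_matrix(pts, xi, alpha):
    """Rows: evaluation points; columns: erf-difference basis functions (2.5)."""
    n = len(xi) - 1
    z = np.concatenate(([-np.inf], (xi[:-1] + xi[1:]) / 2, [np.inf]))
    w = np.sqrt(2.0 * alpha / n)
    return 0.5 * (erf((z[None, 1:] - pts[:, None]) / w)
                  - erf((z[None, :-1] - pts[:, None]) / w))

xi = np.linspace(0.0, 1.0, 5)             # n + 1 = 5 sample points on [0, 1]
g = lambda t: np.cos(-5.0 + 10.0 * t)     # cos(x) on [-5, 5] mapped to [0, 1]

A = bernstein_matrix(xi, xi, alpha=0.75)  # interpolation matrix (2.5)
lam = np.linalg.solve(A, g(xi))           # weights lambda from (2.4)

x = np.linspace(0.0, 1.0, 201)            # m + 1 output points
B = bernstein_matrix(x, xi, alpha=0.75)   # extended matrix (2.10)
s = B @ lam                               # interpolant values, as in (2.9)
```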

That is, how small can $E$ be made, and at what rate does it approach its minimum value? We again examine this problem in regard to the interpolation of a smooth function, $\cos(x)$, on a uniform grid. In Fig. 2.2, the error in using the Bernstein functions for this interpolation, in which the adjustable parameter $\alpha$ is chosen to be 3/4, is shown. Notice that there are two distinct convergence rates, as illustrated by the curves $y = 0.004\,h^{6.4}$ and $y = 0.008\,h^{2.28}$. This characteristic is exhibited for a range of values of the parameter $\alpha$. Also, the error begins to oscillate erratically for the smallest grid sizes shown in Fig. 2.2, presumably due to ill-conditioning errors.

[Figure 2.2: The error in interpolating cos(x) using the Bernstein functions constructed using the parameter α = 3/4, along with the condition numbers of the associated interpolating matrices. Sample sizes from 3 to 100 are taken from [−5, 5], corresponding to the Bernstein function interpolants K₂(x), K₃(x), ..., K₉₉(x). The functions $y = 0.004\,h^{6.4}$ and $y = 0.008\,h^{2.28}$ are also shown in order to demonstrate the approximate rates at which the error is decreasing.]

As with RBF interpolation, optimizing the choice of the parameter $\alpha$ can aid in reducing the minimum error for a range of grid sizes. This can also improve the results by limiting the ill-conditioning of the interpolation matrix. Care must be taken in varying $\alpha$, as choosing too large a value can create ill-conditioning errors similar to those observed for RBF interpolation. This is again caused by the rows of the interpolating matrix $A$ becoming numerically indistinguishable as the values become increasingly small; however, values

of $\alpha$ that are too small can be similarly detrimental, since for a fixed $n$,

$$\lim_{\alpha \to 0} K_n(x) = \lim_{\alpha \to 0} \sum_{k=0}^{n} \frac{y_k}{2} \left[ \operatorname{erf}\!\left( \frac{z_{k+1} - x}{\sqrt{2\alpha/n}} \right) - \operatorname{erf}\!\left( \frac{z_k - x}{\sqrt{2\alpha/n}} \right) \right] \qquad (2.11)$$

$$= \sum_{k=0}^{n} \frac{y_k}{2} \left[ \lim_{\alpha \to 0} \operatorname{erf}\!\left( \frac{z_{k+1} - x}{\sqrt{2\alpha/n}} \right) - \lim_{\alpha \to 0} \operatorname{erf}\!\left( \frac{z_k - x}{\sqrt{2\alpha/n}} \right) \right] \qquad (2.12)$$

$$= \sum_{k=0}^{n} \frac{y_k}{2} \left[ \operatorname{sgn}(z_{k+1} - x) - \operatorname{sgn}(z_k - x) \right] \qquad (2.13)$$

$$= y_k \quad \text{for } z_k < x < z_{k+1}, \qquad (2.14)$$

resulting in the piecewise constant interpolant. This is not a particularly useful result for most applications. Note that, unlike interpolation using RBFs, trends in convergence rates are not as easy to observe when varying the constant parameter $\alpha$. Fig. 2.3 demonstrates that merely decreasing $\alpha$ does not cause a proportionate increase in the convergence rate. Similarly, as seen in Fig. 2.4, increasing $\alpha$ does not cause a proportionate decrease in the condition numbers of the interpolating matrices. However, careful selection of the parameter can still produce useful interpolants.

[Figure 2.3: The error in interpolating cos(x) using the Bernstein functions constructed using various constant values for the parameter α (α = 1/2, 3/4, 1, 5/4, 3/2). Sample sizes from 3 to 100 are taken from [−5, 5], corresponding to the Bernstein function interpolants K₂(x), K₃(x), ..., K₉₉(x).]
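The degeneration as α → 0 is easy to observe numerically; this sketch reuses the bernstein_matrix helper from the earlier sketch with a very small α:

```python
import numpy as np

# For tiny alpha the erf differences approach 0/1 indicator values, so the
# Bernstein function degenerates to a piecewise constant interpolant.
xi = np.linspace(0.0, 1.0, 5)
x = np.linspace(0.0, 1.0, 11)
B_tiny = bernstein_matrix(x, xi, alpha=1e-12)
print(np.round(B_tiny, 3))   # each row is ~1 in one cell and ~0 elsewhere
```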

[Figure 2.4: The condition numbers of the interpolating matrices for the Bernstein function interpolants on [−5, 5], using sample sizes from 3 to 100 and a variety of constant values for the parameter α, corresponding to the results shown in Fig. 2.3.]

2.3 Relaxing the Choice of α

By introducing a factor of $1/n$ into the scaling parameter, the correlation between the choice of $\alpha$ and the convergence rates can be seen more easily. This makes optimizing the choice of $\alpha$ significantly easier. In Fig. 2.5, the same values used in Fig. 2.3 and Fig. 2.4 are again used for the interpolation of $\cos(x)$, but with a factor of $1/n$ included in $\alpha$. In a manner similar to that observed when interpolating using the Gaussian RBFs, when these $1/n$-scaled values are used in stochastic interpolation with Bernstein function bases, there is also a clear correlation between the size of the coefficient of $\alpha$ and the condition number of the interpolating matrices. This relationship is shown in Fig. 2.6. Here, however, increasing the coefficient of $\alpha$ results in higher condition numbers. For the largest coefficient shown in Fig. 2.3 and Fig. 2.4, $\alpha = 3/(2n)$, not only is the condition number the largest as well, but the error begins to oscillate erratically for increasingly small values of $h$.

As with the case of the constant value $\alpha = 3/4$ shown in Fig. 2.2, each of the error curves shown in Fig. 2.5 has two distinct convergence rates: an initial, fast convergence rate when $h$ is fairly large, and a slower, more regular convergence rate as $h$ becomes smaller. In Fig. 2.7, the error curves for $\alpha = 1/(2n)$ and $\alpha = 5/(4n)$ are shown, along with the curves illustrating the convergence rates. For the smaller value of $\alpha$, the initial convergence rate is $h^{7.03}$, and increasing the coefficient of $\alpha$ results in a somewhat lower initial convergence rate. The change in the secondary convergence rates is even smaller, going from $h^{1.87}$ to $h^{1.67}$.

[Figure 2.5: The error in interpolating cos(x) using the Bernstein functions constructed using various constants scaled by a factor of 1/n for the parameter α (α = 1/(2n), 3/(4n), 1/n, 5/(4n), 3/(2n)). Uniform samples ranging in size from 3 to 100 are taken from [−5, 5].]

[Figure 2.6: The condition numbers of the interpolation matrices used in interpolating cos(x) using the Bernstein functions constructed with α values scaled by 1/n. These correspond to the interpolants whose errors are shown in Fig. 2.5.]

In the example shown in Fig. 2.7, the convergence of the Bernstein function interpolant with $\alpha = 1/(2n)$ seems assured. The error is decreasing at a steady rate, $h^{1.87}$, and the condition number $\kappa$ of the interpolating matrix, given in Fig. 2.6, appears to converge to approximately 100. Indeed, by including a power of $n$ in the scaling factor, it appears that the ill-conditioning problem which arose earlier can be limited, but at the cost of a reduced convergence rate.

[Figure 2.7: The error in interpolating cos(x) using the Bernstein functions scaled by α = 1/(2n) and α = 5/(4n). The curves illustrating the two distinct convergence rates exhibited by each interpolant are also shown. Uniform samples ranging in size from 3 to 100 are taken from [−5, 5].]

Thus, it should be possible to increase the sample size indefinitely, resulting in the ability to create as close an approximation as desired. Indeed, this conclusion is supported by Fig. 2.8, in which the largest sample size is increased to 500 from the 100 used previously. Again, the condition numbers associated with the interpolating matrices remain constant at approximately 100, and the error decreases steadily.

2.4 Bernstein Function Basis

One of the benefits of interpolation using Bernstein function bases is that, on a fixed grid, they can be used to repeatedly interpolate different functions with limited computational cost. Indeed, once the Bernstein function basis has been established, all that is required for interpolating a function is scalar multiplication. The method for creating the Bernstein function basis for a particular grid is a straightforward adaptation of the method used throughout this thesis for interpolation. This is best viewed as creating a convolution and de-convolution operator for the grid. Specifically, for a particular grid $\{\xi_i\}_{i=0}^{n} \subset [0, 1]$, constructing the $m + 1$ values $y = \{y_j\}_{j=0}^{m}$ of the Bernstein interpolant for a set of data $\{(\xi_i, f(\xi_i))\}$ corresponds to convolving the vector $f = (f(\xi_i))$ with the interpolating matrix $B_{m+1,n+1}$, whose components are given in (2.10), applied to the weight vector $\lambda$:

$$y = K_n(f, \alpha; x) = B_{m+1,n+1}\lambda. \qquad (2.15)$$

Notice that this corresponds to $m + 1$ evaluations of (2.2). When $m = n$, this is merely the

system $B\lambda = f$ that, as we have already discussed, can be de-convolved to find $\lambda$.

[Figure 2.8: A demonstration of the ability of the Bernstein functions to avoid ill-conditioning errors in interpolating cos(x) when the parameter α = 1/(2n) is used, resulting in uniform convergence. Also shown is the condition number associated with each of the interpolating matrices. Sample sizes from 3 to 500 are taken from [−5, 5], corresponding to the Bernstein function interpolants K₂(x), K₃(x), ..., K₄₉₉(x).]

Thus, the Bernstein interpolant values $\{y_i\}_{i=0}^{m}$ can be found by calculating

$$y = B_{m+1,n+1} A^{-1}_{n+1,n+1} f. \qquad (2.16)$$

By rewriting the $(n+1)$-dimensional vector $f$ in terms of the canonical basis $\{e_i\}_{i=1}^{n+1}$, where each $e_i$ is composed of the zero vector with a one in place of the $i$-th term,

$$y = B_{m+1,n+1} A^{-1}_{n+1,n+1} f \qquad (2.17)$$
$$= B_{m+1,n+1} A^{-1}_{n+1,n+1} \sum_{i=1}^{n+1} f_i e_i \qquad (2.18)$$
$$= \sum_{i=1}^{n+1} B_{m+1,n+1} A^{-1}_{n+1,n+1} f_i e_i \qquad (2.19)$$
$$= \sum_{i=1}^{n+1} f_i \left( B_{m+1,n+1} A^{-1}_{n+1,n+1} e_i \right) \qquad (2.20)$$
$$= \sum_{i=1}^{n+1} f_i \tilde{e}_i, \qquad (2.21)$$

it is easy to see that this calculation can be done by multiplying the components of $f$ by a set of $(m+1)$-dimensional vectors:

$$\tilde{e}_i = B_{m+1,n+1} A^{-1}_{n+1,n+1} e_i. \qquad (2.22)$$
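A sketch of this basis construction, reusing bernstein_matrix, A, B, xi, x, and g from the earlier sketch; here the inverse is obtained by a single solve against the identity rather than by forming A⁻¹ explicitly:

```python
import numpy as np

# Columns of E_tilde are the cardinal bases e~_i = B A^{-1} e_i of (2.22);
# they depend only on the grid, not on the function being interpolated.
E_tilde = B @ np.linalg.solve(A, np.eye(len(xi)))

# Interpolating any sampled data is now a single matrix-vector product (2.21):
y = E_tilde @ g(xi)
```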

[Figure 2.9: The Bernstein function bases on [−5, 5], calculated using a sample size of 5 uniformly spaced points, corresponding to a grid size of h = 2.5 and n = 4. The value 1/(2n) is used for the parameter α. These bases can be used to interpolate a given function f(x) using (2.21).]

Notice that it is in this step, the calculation of the basis, that the computationally costly matrix inversion occurs. However, this step does not depend on the function $f$ to be interpolated. Thus, once these bases are established for a particular grid of $n + 1$ sample points and $m + 1$ output points, it is a trivial matter to interpolate sets of data $\{(\xi_i, f(\xi_i))\}$ using (2.21). In Fig. 2.9, the Bernstein function bases calculated using a uniform grid of 5 points on the interval $[-5, 5]$ are shown. The parameter used to create them was $\alpha = 1/(2n) = 1/(2 \cdot 4) = 1/8$. It is worth noting that at each of the sample points $\Xi = \{-5, -2.5, 0, 2.5, 5\}$, all of the bases have the value 0, except for one, which has the value 1. This interesting property is simply a result of the data being interpolated. In particular,

$$\tilde{e}_i(\xi_j) = \begin{cases} 1, & \text{if } i = j, \\ 0, & \text{if } i \neq j. \end{cases} \qquad (2.23)$$

This of course holds for any grid chosen, as is evidenced by examining these interpolants on a nonuniform sample of $[-5, 5]$. Fig. 2.10 illustrates the results of interpolating on the 5 points $\Xi = \{-4.850, \ldots, 0.530, 2.620, 3.24\}$, using again the parameter $\alpha = 1/8$. Notice that the general shape of each of these bases $\tilde{e}_i$ is maintained, with various scalings and translations occurring. It is emphasized that inversion implies only the steps necessary to solve the linear system discussed, and does not necessarily imply that the formal inverse of the system must be constructed.

[Figure 2.10: The Bernstein function bases on [−5, 5], calculated using a sample size of 5 randomly chosen points, corresponding to n = 4. The value 1/(2n) is used for the parameter α. At each of the sampled values, a vertical line is drawn from y = 0 to y = 1 in order to better illustrate the values of each basis there. These bases can be used to interpolate a given function f(x) using (2.21).]

2.5 Results of Stochasticity

Although it may not be readily apparent, Figures 2.9 and 2.10 exhibit yet another interesting property of the Bernstein functions. Upon inspection, it is evident that the sum of these bases is constant; that is,

$$\tilde{e}_1 + \tilde{e}_2 + \tilde{e}_3 + \tilde{e}_4 + \tilde{e}_5 = \mathbf{1}, \qquad (2.24)$$

where, in keeping with the definition of the $\tilde{e}_i$, $\mathbf{1}$ is the $(m+1)$-dimensional vector composed of ones. This useful property is derived from the particular structure of the interpolating matrix $A$ and the extended interpolating matrix $B$. The rows of each of these matrices are composed of a normal (Gaussian) probability distribution function. Thus, the matrices are row stochastic, meaning that the entries are non-negative and the sum of the entries in each row is one. This can be used to show that an error-free interpolant of any constant function can be created using these Bernstein function bases.

To begin, we will demonstrate that the inverse of any invertible row stochastic matrix is row-sum one, that is, the sum of the elements in each row is one. Suppose $A_{n,n}$ is an invertible row stochastic matrix. Then inversion of $A_{n,n}$ can be

accomplished by Gauss-Jordan elimination of the augmented matrix

$$\left( \begin{array}{cccc|cccc} a_{11} & a_{12} & \cdots & a_{1n} & 1 & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & a_{2n} & 0 & 1 & \cdots & 0 \\ \vdots & & & \vdots & \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} & 0 & 0 & \cdots & 1 \end{array} \right), \qquad (2.25)$$

resulting in the new augmented system

$$\left( \begin{array}{cccc|cccc} 1 & 0 & \cdots & 0 & \bar{a}_{11} & \bar{a}_{12} & \cdots & \bar{a}_{1n} \\ 0 & 1 & \cdots & 0 & \bar{a}_{21} & \bar{a}_{22} & \cdots & \bar{a}_{2n} \\ \vdots & & \ddots & \vdots & \vdots & & & \vdots \\ 0 & 0 & \cdots & 1 & \bar{a}_{n1} & \bar{a}_{n2} & \cdots & \bar{a}_{nn} \end{array} \right). \qquad (2.26)$$

Note that in (2.25) the sum of each row on the left equals the sum of each row on the right: both are one. Therefore, when a multiple of any row is added to another row, the row sums on the left will still equal the row sums on the right. Thus, in the final augmented system (2.26), the row sums on the right must equal the row sums on the left; that is, they must all be one. Of course, there is no guarantee that the entries $\bar{a}_{ij}$ of the inverse $A^{-1} = (\bar{a}_{ij})$ are non-negative, but, as shown, the entries in each row must sum to one.

Now consider the system used to interpolate a constant function $f(x) = c$ on an arbitrary grid of $n + 1$ points, with $m + 1$ output points:

$$y = B_{m+1,n+1} A^{-1}_{n+1,n+1}\, c, \qquad (2.27)$$

where $c$ is the $(n+1)$-dimensional vector $[c, c, \ldots, c]^T$, and $y$ is the $(m+1)$-dimensional vector composed of the output values of the interpolating function at $m + 1$ arbitrary points. Writing $A^{-1} = (\bar{a}_{ij})$, this is equivalent to

$$y = B_{m+1,n+1} \begin{pmatrix} c\bar{a}_{11} + c\bar{a}_{12} + \cdots + c\bar{a}_{1,n+1} \\ \vdots \\ c\bar{a}_{n+1,1} + c\bar{a}_{n+1,2} + \cdots + c\bar{a}_{n+1,n+1} \end{pmatrix} = B_{m+1,n+1} \begin{pmatrix} c(\bar{a}_{11} + \cdots + \bar{a}_{1,n+1}) \\ \vdots \\ c(\bar{a}_{n+1,1} + \cdots + \bar{a}_{n+1,n+1}) \end{pmatrix} = B_{m+1,n+1} \begin{pmatrix} c \\ \vdots \\ c \end{pmatrix}.$$

Since $B_{m+1,n+1}$ is stochastic, and therefore row-sum one as well, carrying out this final multiplication yields

$$y = \begin{pmatrix} c \\ c \\ \vdots \\ c \end{pmatrix}, \qquad (2.28)$$

as expected. This ability of the Bernstein functions to interpolate a constant function exactly is an interesting characteristic of stochastic interpolation methods, and it avoids the introduction of spurious oscillations throughout the interval for smoothly varying functions. Note that the Bernstein function approximation also has this property; in contrast, the Bernstein polynomial is known to be exact for constant and linear functions, and this would suggest that there is room for improvement in the construction of the Bernstein functions.
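As a closing illustration, this sketch (reusing E_tilde and xi from the basis sketch above) checks the constant-reproduction property numerically:

```python
import numpy as np

c = 3.7                                    # any constant
y = E_tilde @ np.full(len(xi), c)          # interpolate f(x) = c
assert np.allclose(y, c)                   # the interpolant is exactly c

# Equivalently, the cardinal bases sum to one at every output point (2.24):
assert np.allclose(E_tilde.sum(axis=1), 1.0)
```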


Lecture 4: 3SAT and Latin Squares. 1 Partial Latin Squares Completable in Polynomial Time NP and Latin Squares Instructor: Padraic Bartlett Lecture 4: 3SAT and Latin Squares Week 4 Mathcamp 2014 This talk s focus is on the computational complexity of completing partial Latin squares. Our first

More information

Discrete Optimization. Lecture Notes 2

Discrete Optimization. Lecture Notes 2 Discrete Optimization. Lecture Notes 2 Disjunctive Constraints Defining variables and formulating linear constraints can be straightforward or more sophisticated, depending on the problem structure. The

More information

A Comparative Study of LOWESS and RBF Approximations for Visualization

A Comparative Study of LOWESS and RBF Approximations for Visualization A Comparative Study of LOWESS and RBF Approximations for Visualization Michal Smolik, Vaclav Skala and Ondrej Nedved Faculty of Applied Sciences, University of West Bohemia, Univerzitni 8, CZ 364 Plzen,

More information

Open and Closed Sets

Open and Closed Sets Open and Closed Sets Definition: A subset S of a metric space (X, d) is open if it contains an open ball about each of its points i.e., if x S : ɛ > 0 : B(x, ɛ) S. (1) Theorem: (O1) and X are open sets.

More information

Calculus I Review Handout 1.3 Introduction to Calculus - Limits. by Kevin M. Chevalier

Calculus I Review Handout 1.3 Introduction to Calculus - Limits. by Kevin M. Chevalier Calculus I Review Handout 1.3 Introduction to Calculus - Limits by Kevin M. Chevalier We are now going to dive into Calculus I as we take a look at the it process. While precalculus covered more static

More information

POLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if

POLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if POLYHEDRAL GEOMETRY Mathematical Programming Niels Lauritzen 7.9.2007 Convex functions and sets Recall that a subset C R n is convex if {λx + (1 λ)y 0 λ 1} C for every x, y C and 0 λ 1. A function f :

More information

The goal is the definition of points with numbers and primitives with equations or functions. The definition of points with numbers requires a

The goal is the definition of points with numbers and primitives with equations or functions. The definition of points with numbers requires a The goal is the definition of points with numbers and primitives with equations or functions. The definition of points with numbers requires a coordinate system and then the measuring of the point with

More information

Mathematical Programming and Research Methods (Part II)

Mathematical Programming and Research Methods (Part II) Mathematical Programming and Research Methods (Part II) 4. Convexity and Optimization Massimiliano Pontil (based on previous lecture by Andreas Argyriou) 1 Today s Plan Convex sets and functions Types

More information

Natural Quartic Spline

Natural Quartic Spline Natural Quartic Spline Rafael E Banchs INTRODUCTION This report describes the natural quartic spline algorithm developed for the enhanced solution of the Time Harmonic Field Electric Logging problem As

More information

Fourier transforms and convolution

Fourier transforms and convolution Fourier transforms and convolution (without the agonizing pain) CS/CME/BioE/Biophys/BMI 279 Oct. 26, 2017 Ron Dror 1 Why do we care? Fourier transforms Outline Writing functions as sums of sinusoids The

More information

5 The Theory of the Simplex Method

5 The Theory of the Simplex Method 5 The Theory of the Simplex Method Chapter 4 introduced the basic mechanics of the simplex method. Now we shall delve a little more deeply into this algorithm by examining some of its underlying theory.

More information

Applied Lagrange Duality for Constrained Optimization

Applied Lagrange Duality for Constrained Optimization Applied Lagrange Duality for Constrained Optimization Robert M. Freund February 10, 2004 c 2004 Massachusetts Institute of Technology. 1 1 Overview The Practical Importance of Duality Review of Convexity

More information

= f (a, b) + (hf x + kf y ) (a,b) +

= f (a, b) + (hf x + kf y ) (a,b) + Chapter 14 Multiple Integrals 1 Double Integrals, Iterated Integrals, Cross-sections 2 Double Integrals over more general regions, Definition, Evaluation of Double Integrals, Properties of Double Integrals

More information

Four equations are necessary to evaluate these coefficients. Eqn

Four equations are necessary to evaluate these coefficients. Eqn 1.2 Splines 11 A spline function is a piecewise defined function with certain smoothness conditions [Cheney]. A wide variety of functions is potentially possible; polynomial functions are almost exclusively

More information

Introduction to Homogeneous coordinates

Introduction to Homogeneous coordinates Last class we considered smooth translations and rotations of the camera coordinate system and the resulting motions of points in the image projection plane. These two transformations were expressed mathematically

More information

CHAPTER 6 Parametric Spline Curves

CHAPTER 6 Parametric Spline Curves CHAPTER 6 Parametric Spline Curves When we introduced splines in Chapter 1 we focused on spline curves, or more precisely, vector valued spline functions. In Chapters 2 and 4 we then established the basic

More information

PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING. 1. Introduction

PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING. 1. Introduction PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING KELLER VANDEBOGERT AND CHARLES LANNING 1. Introduction Interior point methods are, put simply, a technique of optimization where, given a problem

More information

Understanding Gridfit

Understanding Gridfit Understanding Gridfit John R. D Errico Email: woodchips@rochester.rr.com December 28, 2006 1 Introduction GRIDFIT is a surface modeling tool, fitting a surface of the form z(x, y) to scattered (or regular)

More information

Almost Curvature Continuous Fitting of B-Spline Surfaces

Almost Curvature Continuous Fitting of B-Spline Surfaces Journal for Geometry and Graphics Volume 2 (1998), No. 1, 33 43 Almost Curvature Continuous Fitting of B-Spline Surfaces Márta Szilvási-Nagy Department of Geometry, Mathematical Institute, Technical University

More information

4.12 Generalization. In back-propagation learning, as many training examples as possible are typically used.

4.12 Generalization. In back-propagation learning, as many training examples as possible are typically used. 1 4.12 Generalization In back-propagation learning, as many training examples as possible are typically used. It is hoped that the network so designed generalizes well. A network generalizes well when

More information

1 2 (3 + x 3) x 2 = 1 3 (3 + x 1 2x 3 ) 1. 3 ( 1 x 2) (3 + x(0) 3 ) = 1 2 (3 + 0) = 3. 2 (3 + x(0) 1 2x (0) ( ) = 1 ( 1 x(0) 2 ) = 1 3 ) = 1 3

1 2 (3 + x 3) x 2 = 1 3 (3 + x 1 2x 3 ) 1. 3 ( 1 x 2) (3 + x(0) 3 ) = 1 2 (3 + 0) = 3. 2 (3 + x(0) 1 2x (0) ( ) = 1 ( 1 x(0) 2 ) = 1 3 ) = 1 3 6 Iterative Solvers Lab Objective: Many real-world problems of the form Ax = b have tens of thousands of parameters Solving such systems with Gaussian elimination or matrix factorizations could require

More information

What is Multigrid? They have been extended to solve a wide variety of other problems, linear and nonlinear.

What is Multigrid? They have been extended to solve a wide variety of other problems, linear and nonlinear. AMSC 600/CMSC 760 Fall 2007 Solution of Sparse Linear Systems Multigrid, Part 1 Dianne P. O Leary c 2006, 2007 What is Multigrid? Originally, multigrid algorithms were proposed as an iterative method to

More information

Year 9: Long term plan

Year 9: Long term plan Year 9: Long term plan Year 9: Long term plan Unit Hours Powerful procedures 7 Round and round 4 How to become an expert equation solver 6 Why scatter? 6 The construction site 7 Thinking proportionally

More information

Matrices. Chapter Matrix A Mathematical Definition Matrix Dimensions and Notation

Matrices. Chapter Matrix A Mathematical Definition Matrix Dimensions and Notation Chapter 7 Introduction to Matrices This chapter introduces the theory and application of matrices. It is divided into two main sections. Section 7.1 discusses some of the basic properties and operations

More information

(Refer Slide Time: 00:02:24 min)

(Refer Slide Time: 00:02:24 min) CAD / CAM Prof. Dr. P. V. Madhusudhan Rao Department of Mechanical Engineering Indian Institute of Technology, Delhi Lecture No. # 9 Parametric Surfaces II So these days, we are discussing the subject

More information

3 Nonlinear Regression

3 Nonlinear Regression CSC 4 / CSC D / CSC C 3 Sometimes linear models are not sufficient to capture the real-world phenomena, and thus nonlinear models are necessary. In regression, all such models will have the same basic

More information

arxiv: v1 [physics.comp-ph] 25 Dec 2010

arxiv: v1 [physics.comp-ph] 25 Dec 2010 APS/123-QED Iteration Procedure for the N-Dimensional System of Linear Equations Avas V. Khugaev a arxiv:1012.5444v1 [physics.comp-ph] 25 Dec 2010 Bogoliubov Laboratory of Theoretical Physics, Joint Institute

More information

Filling Space with Random Line Segments

Filling Space with Random Line Segments Filling Space with Random Line Segments John Shier Abstract. The use of a nonintersecting random search algorithm with objects having zero width ("measure zero") is explored. The line length in the units

More information

Lecture 3: Some Strange Properties of Fractal Curves

Lecture 3: Some Strange Properties of Fractal Curves Lecture 3: Some Strange Properties of Fractal Curves I have been a stranger in a strange land. Exodus 2:22 1. Fractal Strangeness Fractals have a look and feel that is very different from ordinary curves.

More information

Floating-point representation

Floating-point representation Lecture 3-4: Floating-point representation and arithmetic Floating-point representation The notion of real numbers in mathematics is convenient for hand computations and formula manipulations. However,

More information

A.1 Numbers, Sets and Arithmetic

A.1 Numbers, Sets and Arithmetic 522 APPENDIX A. MATHEMATICS FOUNDATIONS A.1 Numbers, Sets and Arithmetic Numbers started as a conceptual way to quantify count objects. Later, numbers were used to measure quantities that were extensive,

More information

Interpolation & Polynomial Approximation. Cubic Spline Interpolation II

Interpolation & Polynomial Approximation. Cubic Spline Interpolation II Interpolation & Polynomial Approximation Cubic Spline Interpolation II Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University

More information

Math background. 2D Geometric Transformations. Implicit representations. Explicit representations. Read: CS 4620 Lecture 6

Math background. 2D Geometric Transformations. Implicit representations. Explicit representations. Read: CS 4620 Lecture 6 Math background 2D Geometric Transformations CS 4620 Lecture 6 Read: Chapter 2: Miscellaneous Math Chapter 5: Linear Algebra Notation for sets, functions, mappings Linear transformations Matrices Matrix-vector

More information

Support Vector Machines

Support Vector Machines Support Vector Machines RBF-networks Support Vector Machines Good Decision Boundary Optimization Problem Soft margin Hyperplane Non-linear Decision Boundary Kernel-Trick Approximation Accurancy Overtraining

More information

2. Use elementary row operations to rewrite the augmented matrix in a simpler form (i.e., one whose solutions are easy to find).

2. Use elementary row operations to rewrite the augmented matrix in a simpler form (i.e., one whose solutions are easy to find). Section. Gaussian Elimination Our main focus in this section is on a detailed discussion of a method for solving systems of equations. In the last section, we saw that the general procedure for solving

More information

Convexization in Markov Chain Monte Carlo

Convexization in Markov Chain Monte Carlo in Markov Chain Monte Carlo 1 IBM T. J. Watson Yorktown Heights, NY 2 Department of Aerospace Engineering Technion, Israel August 23, 2011 Problem Statement MCMC processes in general are governed by non

More information