Boundary-Based Interval Newton's Method


Interval Computations No 4, 1993

Boundary-Based Interval Newton's Method

L. Simcik and P. Linz

The boundary-based method for approximating solutions to nonlinear systems of equations has a number of advantages over midpoint-based algorithms such as Krawczyk's method and the Hansen-Sengupta method. Our research shows that for a certain class of problems the boundary-based method considerably reduces the need for bisection, which is a major source of difficulty for midpoint-based methods.

© L. Simcik, P. Linz, 1993

1 Introduction

1.1 Problem statement

We consider an n-dimensional nonlinear system

    f_1(x_1, x_2, ..., x_n) = 0,
    ...
    f_n(x_1, x_2, ..., x_n) = 0,        (1)

where f : R^n → R^n has an interval extension F : I^n → I^n, with the corresponding interval Jacobian matrix J(X) : I^n → I^{n×n}. Furthermore, we restrict f(x) so that f ∈ C^1. Using self-validating methods, the basic problem is to bound the solution(s) to (1), starting with an arbitrary initial domain X^0 ∈ I^n.

1.2 Krawczyk's method

For the purpose of illustration, we will use a simplified version of Krawczyk's method, leaving out some details. The general form of the iteration is

    X^{k+1} = X^k ∩ K(X^k),
    K(X) = m(X) - Y f(m(X)) + (I - Y J(X))(X - m(X)).        (2)

If X^0 is a safe starting region (for precise formulations, see, e.g., [5]) around a unique solution, then Krawczyk's method converges quadratically. Most Newton-like methods behave in this manner, but unfortunately these methods lack a general theory that would allow a user to compute a safe region X^0 a priori. For this reason, these schemes often rely upon some auxiliary self-validating, globally convergent bounding method that typically has large storage requirements and a lower rate of convergence.

In (2), Y represents either an inverse Jacobian matrix or its approximation. When J(X) is singular, a bounding method (as mentioned above) must be used to split the domain into several sub-domains (thus splitting the problem into many sub-problems) and eliminate regions until a safe starting region is found. There are many variations of (2) that trade off the computational complexity of a single iteration step against the convergence rate.
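For illustration, iteration (2) can be sketched in one dimension for f(x) = x^2 - 2 on X^0 = [1, 2], with F'(X) = 2X and Y = 1/f'(m(X)). This example and its helper name are ours, not from the paper, and the arithmetic is plain floating point without outward rounding, so it demonstrates the mechanics rather than a validated enclosure:

```python
import math

def krawczyk_sqrt2(lo, hi, iters=20):
    # One-dimensional Krawczyk iteration (2) for f(x) = x^2 - 2 on [lo, hi], lo > 0:
    #   K(X) = m - Y f(m) + (1 - Y F'(X)) (X - m),  Y = 1/f'(m),  F'(X) = 2X.
    for _ in range(iters):
        m = 0.5 * (lo + hi)                    # midpoint m(X)
        y = 1.0 / (2.0 * m)                    # Y: approximate inverse of the Jacobian
        t_lo = 1.0 - y * 2.0 * hi              # interval 1 - Y*F'(X) = 1 - Y*[2*lo, 2*hi]
        t_hi = 1.0 - y * 2.0 * lo
        d_lo, d_hi = lo - m, hi - m            # interval X - m
        prods = (t_lo * d_lo, t_lo * d_hi, t_hi * d_lo, t_hi * d_hi)
        c = m - y * (m * m - 2.0)              # m - Y f(m)
        k_lo, k_hi = c + min(prods), c + max(prods)
        lo, hi = max(lo, k_lo), min(hi, k_hi)  # X^{k+1} = X^k ∩ K(X^k)
    return lo, hi
```

Starting from [1, 2], the enclosure contracts quadratically toward √2, illustrating the behavior described above for a safe starting region.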

2 Boundary-based projection

Before describing the general method, a one-dimensional example will provide a good introduction to the concepts to follow.

2.1 One-dimensional example of a boundary-based method

Consider (1) with n = 1 and a simple solution in X^0 = [a, b]. Let us denote F'(X^k) = [m_1^{(k)}, m_2^{(k)}]. Without loss of generality, let f(x) be strictly positive at the endpoints of X^0 (the other cases are similar). Then

    \underline{X}^{k+1} = \underline{X}^k - F(\underline{X}^k) / m_1^{(k)},
    \overline{X}^{k+1} = \overline{X}^k - F(\overline{X}^k) / m_2^{(k)},        (3)
    X^{k+1} = [\underline{X}^{k+1}, \overline{X}^{k+1}].

This method converges quadratically to the solution in [a, b] despite the possibility that f'(γ) = 0 for some γ ∈ [a, b] such that f(γ) ≠ 0 [6]. Consider the following equation:

    f(x) = x^2 (x^2/3 + \sqrt{2} \sin x) - \sqrt{3}/19,        (4)

which has a simple solution in [.1, 1]. When (3) is applied with X^0 = [.1, 1], the result is as follows:

    X^k
    [.100000000000, 1.00000000000]
    [.117520751584, .720303378869]
    [.152064870794, .532597677896]
    [.215187624078, .430345925997]
    [.303082397292, .396393057590]
    [.371179468697, .392459492178]
    [.391177285026, .392379718777]
    [.392375589063, .392379507168]
    [.392379507094, .392379507138]
    [.392379507135, .392379507138]

which bounds the real solution.

If there were two distinct solutions in X^0, then the method would converge to the convex hull containing both of them. By splitting the resulting interval and applying the method again to each piece, the solutions can be isolated.

Now consider the problem

    f(x) = \frac{1}{100}(x^2 - 50x + 625) + \sin x + 1,        (5)

which has no real solutions on [0, 50]. When (3) is applied to this problem, we get the region elimination depicted in Figure 1.

[Figure 1: Region elimination — plot titled "Region Elimination Using Ideal Newton Step" of f(x) = 0.01(x-25)^2 + sin(x) + 1 over [0, 50].]

Figure 1 does not show the details of the region elimination at the function's global minimum, but the algorithm did eliminate the entire domain piece by piece (on the grounds that none of the eliminated regions could possibly contain a solution of the equation f(x) = 0).
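The mechanics of iteration (3) can be sketched for equation (4) as follows. This is our sketch, not the authors' program: the slope bounds m_1, m_2 come from sampling f' on a grid rather than from an interval extension F'(X), so the enclosure is not validated and the steps are slightly larger than in the table above, and the endpoint-specific choice of slope bound generalizes (3) beyond the case where f is positive at both endpoints:

```python
import math

SQRT2, SQRT3 = math.sqrt(2.0), math.sqrt(3.0)

def f(x):
    # equation (4): f(x) = x^2 (x^2/3 + sqrt(2) sin x) - sqrt(3)/19
    return x * x * (x * x / 3.0 + SQRT2 * math.sin(x)) - SQRT3 / 19.0

def fprime(x):
    return 2.0 * x * (x * x / 3.0 + SQRT2 * math.sin(x)) \
         + x * x * (2.0 * x / 3.0 + SQRT2 * math.cos(x))

def slope_bounds(a, b, n=400):
    # Sampled min/max of f' over [a, b]: a non-rigorous stand-in for F'([a, b]).
    vals = [fprime(a + (b - a) * i / n) for i in range(n + 1)]
    return min(vals), max(vals)

def boundary_newton(a, b, tol=1e-12, max_iter=100):
    # Iteration (3): shrink each endpoint by a Newton-like step that uses the
    # pessimistic slope bound for that endpoint, so a safe step never jumps
    # over the solution.
    for _ in range(max_iter):
        if b - a < tol:
            break
        m1, m2 = slope_bounds(a, b)
        fa, fb = f(a), f(b)
        da = m1 if fa > 0.0 else m2   # slope bound giving the smaller (safe) step
        db = m2 if fb > 0.0 else m1
        if da == 0.0 or db == 0.0:
            break
        a = min(a - fa / da, b)
        b = max(b - fb / db, a)
    return a, b
```

Started from [0.1, 1], both endpoints converge to the solution ≈ 0.392379507138 bounded by the table above.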

Experiments described in [6] showed that, in general, for functions f of one variable, the number of iterations required for region elimination is O(η log(1/α)), where η is the number of extrema over [a, b] and α is the height of the extremum closest to the x-axis.

Using (3) means using a pessimistic bound on f'(x) near the endpoints, unless f(x) is monotonic over all of X^0. The inclusion monotonicity of F'(X) implies that for all X ⊆ X^0 we have F'(X) ⊆ F'(X^0). Thus, using some X ⊂ X^0 at the boundary of X^0 will provide a sharper bound on the range of slopes near that particular boundary. Since this is a self-validating method, for a given choice of X ⊂ X^0 we can at most eliminate X as a region with no solutions. In [6], we proved that if a region X^k contains a solution, then at each boundary there exists a largest interval X that can be reliably eliminated in this manner. These largest intervals X at the left and right endpoints are called the ideal steps. They represent the largest portion of the region that can be eliminated at each end during one iteration if we use the boundary-based method.

The program creating the data for Figure 1 used the ideal-step boundary-based interval method, finding the ideal step at the left and right sides of the interval during each iteration. The ideal steps can be found through a binary search or estimated by using an interval Taylor expansion of F(X).

2.2 Algorithm description

The principal difference between an n-dimensional problem and the one-dimensional case is that in order to eliminate a region in an n-dimensional space, an algorithm must find a boundary in I^n where f(x) is guaranteed to be non-zero. The n-dimensional problem takes the form of (1) with an n-dimensional rectangle X^0. We define a face of an n-rectangle by fixing the value of one variable (say, x_j) either at its upper bound or at its lower bound (in both cases, x_j will thus be a constant).
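The face test that drives the n-dimensional method — checking whether 0 ∉ F_i(X_F) on a face — can be sketched for the circle equation g(x, y) = x^2 + 6x + y^2 - 6y + 17 of the two-dimensional example in Section 2.3 below, on its box [-10, 10] × [-10, 10]. The helper names are ours; the term-wise interval evaluation happens to be exact here because g separates into independent quadratics in x and y:

```python
def quad_range(lo, hi, b):
    # Exact range of t^2 + b*t over [lo, hi] (check the vertex t = -b/2).
    vals = [lo * lo + b * lo, hi * hi + b * hi]
    v = -b / 2.0
    if lo <= v <= hi:
        vals.append(v * v + b * v)
    return min(vals), max(vals)

def g_range(xlo, xhi, ylo, yhi):
    # Range of g(x, y) = x^2 + 6x + y^2 - 6y + 17 over a box; the two
    # quadratics share no variable, so the term-wise range is exact.
    qx = quad_range(xlo, xhi, 6.0)
    qy = quad_range(ylo, yhi, -6.0)
    return qx[0] + qy[0] + 17.0, qx[1] + qy[1] + 17.0

def faces(box):
    # The 2n = 4 faces of a 2-D box: one variable frozen at a bound.
    (xlo, xhi), (ylo, yhi) = box
    return [((xlo, xlo), (ylo, yhi)), ((xhi, xhi), (ylo, yhi)),
            ((xlo, xhi), (ylo, ylo)), ((xlo, xhi), (yhi, yhi))]

box = ((-10.0, 10.0), (-10.0, 10.0))
flagged = []
for face in faces(box):
    lo, hi = g_range(face[0][0], face[0][1], face[1][0], face[1][1])
    if not (lo <= 0.0 <= hi):      # 0 excluded from F_i(X_F): face qualifies
        flagged.append(face)
```

All four faces qualify for g (its range on each face stays at or above 48), so boundary elimination can start in both dimensions using the corresponding entries of J(X).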
Let us denote this face by X_F. Thus, for each region X^k, there are 2n faces for which we have to evaluate F_i(X_F), i = 1, ..., n. Note that we do not have to apply all n functions F_i to all the faces. E.g., if for some i the expression for F_i does not contain x_j at all, then we do not need to apply F_i to the corresponding face. Indeed, in this case, the

range F_i(X_F) of F_i for this face coincides with the range F_i(X^k) of F_i for the entire region X^k, and thus (unless the initial problem has no solutions at all) this range does contain 0.

If, after searching over all the faces, we find i such that 0 ∉ F_i(X_F), then we can eliminate a region adjacent to this face by using ∂F_i/∂x_j (i.e., by using the corresponding entry of J(X)). This partial derivative, evaluated over the entire region, can be used to bound the behavior of F(X) in the x_j direction. Thus, in the same manner as in the one-dimensional case, we can eliminate a region adjacent to the boundaries.

As in the one-dimensional case, we can now prove the existence of an ideal step at each face. To find an ideal step, we can either apply a binary search or use an interval Taylor expansion. Since a region is eliminated in only one dimension at a time, we need only a one-dimensional expansion: e.g., if we restrict ourselves to quadratic terms, then we only need to compute ∂²F_i/∂x_j², and the desired estimate for an ideal step follows from solving a quadratic equation.

If 0 ∉ F_i(X_F) at some face X_F and there is no ideal step from this face, this means that the entire domain has no solutions (the proof of this fact is rather simple). This demonstrates that the boundary-based method with the ideal step is inherently a non-existence test.

Algorithm pseudo-code: Start with the n-dimensional vector X^0.

1. Create a stack of vectors denoting each face of X^k, say X_{F_p}, p = 1, ..., 2n.

2. For each entry of the interval Jacobian matrix that is not identically zero, check the corresponding two faces for 0 ∉ F_i(X_{F_p}); if this condition is true, raise a flag and link this face with the corresponding Jacobian entry.
If no faces are flagged during this check, then the region X^k must be subdivided before any region elimination can begin (a prudent choice of subdivision may be all that is necessary, rather than a time-consuming complete bisection in each dimension).

3. At each face x_j = const that was flagged and linked to a Jacobian entry, a region can be eliminated in the j-th dimension using the i-th function F_i, analogously to the one-dimensional case. The ideal step can be found through binary search or estimated using ∂²F_i/∂x_j².

4. Update X^k to take into account the region elimination accomplished at each flagged face, and increment k.

5. Repeat steps 1-4 until a convergence criterion is met.

Steps 2 and 3 can be implemented in a parallel processing environment, since each check is an independent task; similarly, region eliminations at different faces can be done in parallel.

2.3 Two-dimensional example

Consider the following nonlinear system:

    g(x, y) = x^2 + 6x + y^2 - 6y + 17,
    h(x, y) = y^2 - 6y - x^2 - 6x - y sin 2x + 3 sin 2x + (1/4) sin^2 2x.        (6)

All solutions to (6) lie in {(x, y) : -10 ≤ x ≤ 10, -10 ≤ y ≤ 10}. Figure 2 shows the contour lines of g(x, y) = 0 (a circle) and h(x, y) = 0 (a wobbly cross). The dashed rectangles outline the outer convergence of the algorithm from X^0 = ([-10, 10], [-10, 10]) to the minimal rectangular hull containing all four solutions. In this example, the interval Jacobian matrix is singular over most of the region, except for the squares that contain each solution, whose width is approximately 1/5. The next step is to perform one bisection in each dimension to create four new problems. To each of the resulting domains, we then apply the same boundary-based algorithm. Figure 3 shows the inner convergence of the algorithm to each solution.

3 Strategies for using boundary-based methods

When a region is eliminated at a face X_F, the question arises whether a larger region could have been eliminated if we used a

[Figure 2: Contours and outer convergence — contour lines of g = 0 and h = 0 on the box [-10, 10] × [-10, 10], with dashed rectangles showing the outer convergence of the boundary-based projection.]

[Figure 3: Contours and inner convergence — close-up of the solution near (-3, 3), with x ∈ [-4, -2] and y ∈ [2, 4].]

Boundary-Based Interval Newton s Method 97 boundary based interval bisection technique instead. Actually, bisection here is a misnomer for what more accurately should be called boundary-based region elimination using only information from F(X). For our general class of functions, there is no general theory that would recommend the choice of one method over the other for region elimination. However, the boundary based methods can be implemented in an adaptive scheme that dynamically tests for every non-zero entry of J(X), which method will eliminate a larger region per unit of work in each dimension with respect to each non-zero entry of J(X). For example, let s consider the case when for a k th iterate of the boundary based Newton method, in some dimension, Xj k = [ X k j,x k j], using F i (X), and F i X j, for one of the faces we eliminate a region γ j. If 0 F(X1 k,xk 2,...,γ j,...,xn k ), this means that in this case, the boundarybased bisection would have eliminated a larger region in one iteration. There is also a simple check that enables us to find out when bisection method is worse: it is sufficient to check for the existence of an ideal step in the region eliminated by bisection method. If there is no ideal step, then the boundary-based interval Newton method would have eliminated a larger region in one iteration. The number of floating point operations for one iteration of Newton s method and a bisection method are known a priori allowing the computer to decide which method is more computationally efficient (both methods use the same amount of computer memory, so when deciding which of the methods to use, we do not have to take memory into consideration). The above tests can be used in a decision strategy that can adaptively choose one of the boundary-based methods. Thus, an algorithm might be using boundary-based Newton s method in one dimension using one F i, and bisection in the same dimension using a different F i. 
There are many possibilities for a decision process based upon local region elimination. Additionally, the boundary-based methods can be adaptively combined with Krawczyk s method using known tests for safe-starting regions.
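The adaptive comparison can be sketched in one dimension on problem (5). This is our sketch with hypothetical helper names: it tests whether interval evaluation can discard the left half [0, 25] outright, versus taking the quadratic ideal-step estimate at the left endpoint (using the exact bound f''(x) = 0.02 - sin x ≥ -0.98):

```python
import math

def f(x):
    # problem (5): f(x) = 0.01 (x - 25)^2 + sin x + 1
    return 0.01 * (x - 25.0) ** 2 + math.sin(x) + 1.0

def sin_range(a, b):
    # Exact range of sin over [a, b]: endpoint values plus any interior
    # critical points pi/2 + k*pi.
    lo = min(math.sin(a), math.sin(b))
    hi = max(math.sin(a), math.sin(b))
    c = math.pi / 2.0 + math.ceil((a - math.pi / 2.0) / math.pi) * math.pi
    while c <= b:
        lo, hi = min(lo, math.sin(c)), max(hi, math.sin(c))
        c += math.pi
    return lo, hi

def f_range(a, b):
    # Term-wise interval evaluation of f over [a, b].
    u, v = a - 25.0, b - 25.0
    sq_lo = 0.0 if u <= 0.0 <= v else min(u * u, v * v)
    sq_hi = max(u * u, v * v)
    s_lo, s_hi = sin_range(a, b)
    return 0.01 * sq_lo + s_lo + 1.0, 0.01 * sq_hi + s_hi + 1.0

def bisection_eliminates(a, b):
    # Boundary-based "bisection": discard [a, b] only if 0 is outside F([a, b]).
    lo, hi = f_range(a, b)
    return lo > 0.0 or hi < 0.0

def ideal_step(a):
    # Quadratic Taylor estimate at the left endpoint, valid for f(a) > 0 and
    # f'' >= -0.98: smallest positive root of f(a) + f'(a) t - 0.49 t^2 = 0.
    fa = f(a)
    da = 0.02 * (a - 25.0) + math.cos(a)
    A, B, C = -0.49, da, fa
    disc = B * B - 4.0 * A * C
    return (-B - math.sqrt(disc)) / (2.0 * A)

half_gain = 25.0 if bisection_eliminates(0.0, 25.0) else 0.0
newton_gain = ideal_step(0.0)
```

Here the bisection-style test fails on [0, 25] (0 lies in the term-wise enclosure, so half_gain is 0), while the ideal-step estimate eliminates roughly [0, 4.39]; under the decision strategy above, the boundary-based Newton step would be chosen for this face.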

4 Advantages and disadvantages

The main advantage is that boundary-based methods do not require the inversion of a Jacobian matrix; they do not even require an approximation to this inverse. Additionally, the methods tend to rely upon bisection far less than Krawczyk's method does; this reduces the storage requirements and the need for rejoining regions that were previously split up. The boundary-based methods have inherent adaptive strategies that allow dynamic choices between Newton's method, bisection, and Krawczyk's method. Finally, there appears to be potential for several levels of parallelism, since region elimination at each face can be considered a separate problem.

The main disadvantage of the boundary-based Newton's method and bisection is the need to find the faces where 0 ∉ F_i(X_F), and the subsequent reliance upon interval bisection when no suitable faces are available.

5 Future research

The boundary-based interval Newton's method needs a generalized convergence theory for the multi-dimensional problem. To make a significant improvement over current methods, the boundary-based techniques must eliminate regions more efficiently than traditional bisection and the Hansen-Sengupta method [3]. If this cannot be proven, then we must at least demonstrate adaptive schemes with the previously mentioned algorithms to augment their performance. Additionally, we need to demonstrate parallelism and quantify its contribution to the rate of convergence. We also need to uncover which classes of nonlinear systems, and possibly sparse unpatterned linear systems, are best suited for the boundary-based methods.

References

[1] Moore, R. E. Methods and applications of interval analysis. SIAM, Philadelphia, 1979.

[2] Alefeld, G. and Herzberger, J. Introduction to interval computations. Academic Press, New York, 1983.

[3] Neumaier, A. Interval methods for systems of equations. Cambridge University Press, Cambridge, 1990.

[4] Rump, S. M. Algorithms for verified inclusions: theory and practice. In: Moore, R. E. (ed.) Reliability in computing: the role of interval methods in scientific computing. Academic Press, New York, 1988.

[5] Moore, R. E. and Jones, S. T. Safe starting regions for iterative methods. SIAM Journal on Numerical Analysis 14 (6) (1977), pp. 1051-1065.

[6] Simcik, L. J. Ideal step interval Newton method. Master's Thesis, Department of Mathematics, University of California, Davis, 1992.

L. Simcik
University of California
Department of Mathematics
Davis, California 95616
USA

P. Linz
University of California
Department of Computer Science
Davis, California 95616
USA