Abbreviated title: Verification for nonsmooth equations


A Verification Method for Solutions of Nonsmooth Equations

Xiaojun Chen
School of Mathematics, University of New South Wales, Sydney, NSW 2052, Australia

February 1995; revised March 1996

Abstract. This paper proposes a verification method for the existence of solutions of nonsmooth equations. We generalize the Krawczyk operator to nonsmooth equations by using the mean-value theorem for nonsmooth functions, and we establish a semi-local convergence theorem for the generalized Newton method for nonsmooth equations. The proposed method is a combination of the generalized Krawczyk operator and the semi-local convergence theorem.

Key words: verification of solutions, nonsmooth equations
AMS(MOS) subject classification: 65H10, 65G10, 90C33

This work is supported by the Australian Research Council.


1 Introduction

Nonsmooth equations provide a unified framework for the study of a number of important problems in numerical analysis and mathematical programming. Although several techniques have been designed for solving nonsmooth equations [3], [4], [6], [7], [11], [12], [13], [14], [20], [21], [23], [24], [25], [26], little has been done to verify the existence of solutions of nonsmooth equations. One likely reason for this omission is the difficulty caused by nondifferentiability. Verification methods for solutions of smooth equations have been studied for several decades. These methods utilize the differential mapping in algorithms and numerical analysis [1], [2], [9], [10], [17], [22], [27], [28]. Although these methods are efficient for smooth equations, they are not applicable to nonsmooth equations.

Example 1.1 Let

    f(x) = ( 0.780  0.563 ) ( x_1 )  −  ( 0.217 )
           ( 0.913  0.659 ) ( x_2 )     ( 0.254 ),

g(x) = (x_1, 1 + x_2)^T, and F(x) = min(f(x), g(x)), where "min" denotes the componentwise minimum operator on a pair of vectors. The system of nonsmooth equations F(x) = 0 is equivalent to the linear complementarity problem [21]:

    f(x) ≥ 0,  g(x) ≥ 0,  f(x)^T g(x) = 0.

The point x* = (1, −1) is the unique solution of F(x) = 0. The function F is nondifferentiable at x*. Using the numbers in f(x), we set x̂ = (0.0, 0.254/0.659) and x̄ = (0.0, 0.217/0.563). It is easy to verify that

    F(x̂) = (−1.517 × 10^{−6}, 0.0),      F(x̄) = (0.0, 1.776 × 10^{−6}),
    ||x̂ − x*||_∞ = 1 + 0.254/0.659,      ||x̄ − x*||_∞ = 1 + 0.217/0.563.

Furthermore, F is not differentiable at x̄. The function f in this example was given by Forsythe (1970) [28].

A common numerical practice is to stop an algorithm for solving F(x) = 0 whenever the norm ||F(x_k)|| is less than a given tolerance. However, as Example 1.1 shows, ||F(x_k)|| ≈ 0 does not guarantee that x_k is a good approximate solution. The goal of this paper is to give a practical verification method for solutions of nonlinear equations

    F(x) = 0,                                                      (1.1)

where the mapping F : R^n → R^n is assumed to be locally Lipschitz continuous.
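The point of Example 1.1 can be checked numerically. The sketch below (in Python, not part of the paper) assumes the reconstructed second component g_2(x) = 1 + x_2, which is what the interval matrix in section 2 and the solution x* = (1, −1) imply; it evaluates F at the two nearly-residual-free points and at x*.

```python
import numpy as np

# Example 1.1: tiny residuals of F do not imply closeness to x*.
# Assumption: g(x) = (x_1, 1 + x_2)^T, reconstructed from the paper's data.
M = np.array([[0.780, 0.563],
              [0.913, 0.659]])
p = np.array([0.217, 0.254])

def F(x):
    # componentwise minimum of f(x) = Mx - p and g(x) = (x_1, 1 + x_2)
    return np.minimum(M @ x - p, np.array([x[0], 1.0 + x[1]]))

x_hat = np.array([0.0, 0.254 / 0.659])
x_bar = np.array([0.0, 0.217 / 0.563])
x_star = np.array([1.0, -1.0])

for x in (x_hat, x_bar):
    res = np.linalg.norm(F(x), np.inf)          # residual ~ 1.5e-6 / 1.8e-6
    dist = np.linalg.norm(x - x_star, np.inf)   # yet distance > 1 from x*
    print(res, dist)
```

Both points have residuals of order 10^{-6} while lying at distance greater than 1 from x* in the infinity norm, which is exactly the failure of the "small residual" stopping rule discussed above.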

If F is differentiable, a popular method for solving (1.1) is the Newton method

    x_{k+1} = x_k − F'(x_k)^{−1} F(x_k).                           (1.2)

Several verification methods for solutions of smooth equations are based on the Newton method (1.2). Very recently, Alefeld, Gienger and Potra [1] presented a new stopping criterion for the Newton method (1.2) by combining the properties of the Krawczyk operator with a corollary of the Newton–Kantorovich theorem. When this criterion is satisfied, they used the last three Newton iterates to compute an interval vector that is very likely to contain a solution of (1.1).

In the nonsmooth case, F'(x_k) may not exist. The generalized Newton method uses generalized Jacobians of F to play the role of F' in the Newton method (1.2). By Rademacher's theorem, the local Lipschitz continuity of F implies that F is almost everywhere differentiable. Let D_F be the set where F is differentiable, and let

    ∂_B F(x) = { lim_{x_i → x, x_i ∈ D_F} F'(x_i) }.               (1.3)

The generalized Jacobian of F at x ∈ R^n in the sense of Clarke [8] is equal to the convex hull

    ∂F(x) = co ∂_B F(x),                                           (1.4)

which is a nonempty convex compact set. Note that the local Lipschitz continuity of F implies that F is F(réchet)-differentiable at x if and only if it is G(âteaux)-differentiable at x [8]. Therefore there is no difference between F-differentiability and G-differentiability in this paper. We say that F is BD-regular at x if all V ∈ ∂_B F(x) are nonsingular, and that F is BD-regular in a set D if F is BD-regular at every point x ∈ D [21], [23], [25]. Here BD stands for the B-subdifferential defined in (1.3).

This paper generalizes the technique in [1] to a BD-regular nonsmooth operator F in a domain and gives a stopping criterion for the generalized Newton method

    x_{k+1} = x_k − V_k^{−1} F(x_k),                               (1.5)

where V_k ∈ ∂_B F(x_k). Calculating an appropriate element V_k ∈ ∂_B F(x_k) is based on the definition of the function F; for instance, see Example 5.1 and Example 5.2 in section 5.
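As a concrete illustration (not from the paper), the generalized Newton method (1.5) can be run on Example 1.1, choosing V_k row by row from the gradient of whichever branch of the min is active, in the spirit of rule (5.1) in section 5; g(x) = (x_1, 1 + x_2)^T is again the reconstructed assumption.

```python
import numpy as np

# Generalized Newton method (1.5) on Example 1.1, a minimal sketch.
M = np.array([[0.780, 0.563], [0.913, 0.659]])
p = np.array([0.217, 0.254])
N = np.eye(2)                    # Jacobian of g(x) = (x_1, 1 + x_2)^T
q = np.array([0.0, 1.0])

f = lambda x: M @ x - p
g = lambda x: N @ x + q
F = lambda x: np.minimum(f(x), g(x))

def newton_step(x):
    # row i of V_k: g'_i if f_i(x) >= g_i(x), f'_i otherwise (cf. (5.1))
    V = np.where((f(x) >= g(x))[:, None], N, M)
    return x - np.linalg.solve(V, F(x))

x = np.array([0.0, 0.217 / 0.563])   # start at x_bar of Example 1.1
for _ in range(5):
    x = newton_step(x)
```

Starting from x̄, the iterates reach the solution x* = (1, −1) after two or three steps, in agreement with the behaviour reported in Example 5.1.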
The verification method in [1] is based on the Krawczyk operator and the Newton–Kantorovich theorem, which requires that F be differentiable with a Lipschitz continuous derivative. In this paper, we generalize the Krawczyk operator to nonsmooth equations and establish a semi-local convergence theorem for the generalized Newton method (1.5). Our verification method is based on the generalized Krawczyk operator and the semi-local convergence theorem.

The remainder of the paper is organized as follows. In section 2 we discuss the generalized Krawczyk operator for nonsmooth equations. In section 3 we give a semi-local convergence theorem for the method (1.5). In section 4 we propose a verification method for solutions of nonsmooth equations. In section 5 we report numerical experiments.

S(x_0, r) denotes the open ball with center x_0 ∈ R^n and radius r > 0, and S̄(x_0, r) denotes the closure of S(x_0, r). Interval vectors are denoted by [x], [y], ..., and interval matrices by [L], [M], .... The diameter of an interval [x] = [x̲, x̄] is d([x]) = x̄ − x̲, and the midpoint of [x] is m([x]) = (x̲ + x̄)/2.

2 A generalized Krawczyk operator

Let F be differentiable in an open domain D ⊂ R^n and let [x] be an interval vector in D. The Krawczyk operator is defined by

    K([x], x, A) = x − A^{−1} F(x) + (I − A^{−1} F'([x]))([x] − x),      (2.1)

where x is a vector in [x], A is an n×n nonsingular matrix, and F'([x]) is an interval arithmetic evaluation of the derivative. If K([x], x, A) ⊆ [x], then F has a zero x* in K([x], x, A).

To generalize the Krawczyk operator to nonsmooth equations, we use the following mean-value theorem.

Theorem 2.1 [8] Let F be Lipschitz continuous on an open convex set D in R^n, and let x and y be points in D. Then

    F(y) − F(x) ∈ ∂F(x̄y)(y − x),

where x̄y is the line segment between x and y, and

    ∂F(x̄y) = co{ ∂F(u) : u ∈ x̄y }.

Let

    ∂F([x]) = co{ ∂F(x) : x ∈ [x] }.

Then Theorem 2.1 implies

    F(y) − F(x) ∈ ∂F([x])(y − x)   for x, y ∈ [x].      (2.2)

Let [L_[x]] be an interval matrix such that ∂F([x]) ⊆ [L_[x]]. Then for any x, y ∈ [x] ⊂ R^n there holds

    F(x) − F(y) ∈ [L_[x]](x − y).      (2.3)

Let A be an n×n nonsingular matrix and let x be a vector in [x]. We call the mapping

    B([x], x, A) = x − A^{−1} F(x) + (I − A^{−1}[L_[x]])([x] − x)      (2.4)

a generalized Krawczyk operator.

One will naturally ask how to calculate the interval matrix [L_[x]]. If F is continuously differentiable, [L_[x]] can be defined by the interval arithmetic evaluation of the derivative F'([x]). In the nonsmooth case, calculating [L_[x]] is based on the
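Before turning to the nonsmooth case, the classical operator (2.1) is easy to demonstrate in one dimension. The sketch below (an illustration, not from the paper) applies the Krawczyk test to F(x) = x^2 − 2 on [x] = [1.3, 1.5], representing intervals as (lo, hi) pairs.

```python
# 1-D Krawczyk test (2.1) for F(x) = x^2 - 2 on [x] = [1.3, 1.5].
def imul(a, b):
    # product of two intervals given as (lo, hi) pairs
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def krawczyk(lo, hi, F, dF_interval, A):
    m = 0.5 * (lo + hi)
    c = m - F(m) / A                       # center: m - A^{-1} F(m)
    dlo, dhi = dF_interval(lo, hi)
    s = imul((1 - dhi / A, 1 - dlo / A),   # 1 - A^{-1} F'([x])
             (lo - m, hi - m))             # [x] - m
    return (c + s[0], c + s[1])

K = krawczyk(1.3, 1.5,
             lambda x: x * x - 2.0,
             lambda lo, hi: (2.0 * lo, 2.0 * hi),  # F'([x]) = 2 [x]
             2.8)                                  # A = F'(1.4)
# K is contained in [1.3, 1.5], so a zero of F (here sqrt(2)) lies in K
```

Here K ≈ [1.4071, 1.4214] ⊆ [1.3, 1.5], so the test certifies that √2 lies in K; the generalized operator (2.4) replaces F'([x]) by the interval matrix [L_[x]] enclosing the generalized Jacobians.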

generalized Jacobian ∂F(x). For instance, we consider the linear complementarity problem

    Mx + p ≥ 0,  Nx + q ≥ 0,  (Mx + p)^T (Nx + q) = 0,      (2.5)

where M, N ∈ R^{n×n} and p, q ∈ R^n. The linear complementarity problem (2.5) is equivalent to the nonsmooth equations

    F(x) = min(Mx + p, Nx + q) = 0.      (2.6)

Nonsmoothness occurs when there is an i, 1 ≤ i ≤ n, such that (Mx + p)_i = (Nx + q)_i and M_i ≠ N_i. Here M_i and N_i are the ith rows of M and N, respectively. By the definition of the generalized Jacobian (1.3)-(1.4),

    ∂F(x)_i = { M_i           if (Mx + p)_i < (Nx + q)_i,
              { N_i           if (Mx + p)_i > (Nx + q)_i,
              { [l_i, l̄_i]    if (Mx + p)_i = (Nx + q)_i,

where

    l_i = (min{M_{i,1}, N_{i,1}}, ..., min{M_{i,n}, N_{i,n}}),
    l̄_i = (max{M_{i,1}, N_{i,1}}, ..., max{M_{i,n}, N_{i,n}}).

Hence the interval matrix [L_[x]] for the function F in (2.6) can be defined by

    [L_[x]] = [l, l̄],  l_{i,j} = min{M_{i,j}, N_{i,j}},  l̄_{i,j} = max{M_{i,j}, N_{i,j}}.

Example 1.1 is a linear complementarity problem, and the interval matrix for that concrete example has the form

    [L_[x]] = ( [0.780, 1]   [0, 0.563] )
              ( [0, 0.913]   [0.659, 1] ).

Notice that the interval matrices satisfying (2.3) are not unique. The optimal interval matrices for the generalized Krawczyk operator have the smallest diameter d([L_[x]]).

Now we give an interval test for solutions of nonsmooth equations.

Theorem 2.2 Assume that F is Lipschitz continuous in an open domain D.
1. If B([x], x, A) ⊆ [x] ⊂ D, then there is a solution of (1.1) in B([x], x, A).
2. If B([x], x, A) ∩ [x] = ∅, then there is no solution of (1.1) in [x].

Proof: The application of Theorem 2.1 yields that for any y ∈ [x],

    y − A^{−1} F(y) = x − A^{−1} F(x) + y − x − A^{−1}(F(y) − F(x))
                    ∈ x − A^{−1} F(x) + y − x − A^{−1} ∂F(x̄y)(y − x)
                    ⊆ x − A^{−1} F(x) + (I − A^{−1}[L_[x]])(y − x)
                    ⊆ x − A^{−1} F(x) + (I − A^{−1}[L_[x]])([x] − x)
                    = B([x], x, A).      (2.7)

Since B([x], x, A) ⊆ [x], Brouwer's fixed point theorem [19] implies that there is a solution x* of (1.1) in [x]. From (2.7), x* = x* − A^{−1} F(x*) ∈ B([x], x, A). Thus Statement 1 holds. Statement 2 is a direct corollary of Statement 1.
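For the min-function (2.6), building [L_[x]] is a purely entrywise operation; the sketch below (an illustration, not from the paper) reproduces the interval matrix stated above for Example 1.1, where N is the identity under the reconstructed g(x) = x + (0, 1)^T.

```python
import numpy as np

# [L_[x]] for F(x) = min(Mx + p, Nx + q): entrywise min/max of M and N.
def interval_matrix(M, N):
    return np.minimum(M, N), np.maximum(M, N)

# Data of Example 1.1; N = I since g(x) = x + (0, 1)^T (assumption).
M = np.array([[0.780, 0.563], [0.913, 0.659]])
N = np.eye(2)
lo, hi = interval_matrix(M, N)
print(lo)   # entrywise lower bounds of [L_[x]]
print(hi)   # entrywise upper bounds of [L_[x]]
```

The result matches the interval matrix displayed above: [0.780, 1] and [0, 0.563] in the first row, [0, 0.913] and [0.659, 1] in the second. Note that this [L_[x]] is independent of the box [x], as the generalized Jacobian of a piecewise linear map takes values among finitely many matrices.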

3 The semi-local convergence theorem

In this section we give a semi-local convergence theorem for the generalized Newton method (1.5). The theorem gives sufficient conditions for the convergence of the method (1.5) starting at x_0; moreover, it provides an error estimate.

Theorem 3.1 Let F be BD-regular in a ball S(x_0, r). Assume that there exists a constant α ∈ (0, 1) such that for any x, y ∈ S(x_0, r), V_x ∈ ∂_B F(x) and V_y ∈ ∂_B F(y),

    ||V_y^{−1}(F(y) − F(x) − V_x(y − x))|| ≤ α ||x − y||.      (3.1)

Let V_0 ∈ ∂_B F(x_0) and η_0 = ||V_0^{−1} F(x_0)||. If

    η_0 ≤ (1 − α) r,

then the sequence {x_k} generated by the method (1.5) with starting point x_0 is well-defined, remains in S(x_0, η_0/(1−α)) and converges to a solution x* ∈ S̄(x_0, η_0/(1−α)). Furthermore, we have the error estimate

    ||x_k − x*|| ≤ (α/(1−α)) ||x_k − x_{k−1}||,  k = 1, 2, ....      (3.2)

Proof: Let r̄ = η_0/(1−α). First we show that the sequence {x_k} lies in S(x_0, r̄). From

    ||x_1 − x_0|| = ||V_0^{−1} F(x_0)|| = η_0 < η_0/(1−α) = r̄,

we have x_1 ∈ S(x_0, r̄). Suppose that x_0, x_1, ..., x_k ∈ S(x_0, r̄). Since F(x_{k−1}) + V_{k−1}(x_k − x_{k−1}) = 0 by (1.5), condition (3.1) gives

    ||x_{k+1} − x_k|| = ||V_k^{−1} F(x_k)||
                      = ||V_k^{−1}(F(x_k) − F(x_{k−1}) − V_{k−1}(x_k − x_{k−1}))||
                      ≤ α ||x_k − x_{k−1}|| ≤ ... ≤ α^k ||x_1 − x_0|| = α^k η_0.

Hence

    ||x_{k+1} − x_0|| ≤ Σ_{i=0}^{k} ||x_{i+1} − x_i|| ≤ η_0 Σ_{i=0}^{k} α^i < η_0/(1−α) = r̄,

which implies x_{k+1} ∈ S(x_0, r̄). Therefore, the sequence {x_k} lies in S(x_0, r̄). For any two integers k ≥ 0 and p > 0, we have

    ||x_{k+p+1} − x_k|| ≤ Σ_{i=k}^{k+p} ||x_{i+1} − x_i|| ≤ η_0 Σ_{i=k}^{k+p} α^i ≤ α^k r̄.

This shows that {x_k} is a Cauchy sequence. Hence the generalized Newton method (1.5) converges to a point x* ∈ S̄(x_0, r̄). Since F is Lipschitz continuous in S̄(x_0, r̄), ||V_k|| is uniformly bounded. Hence

    ||F(x*)|| = lim_{k→∞} ||F(x_k)|| ≤ lim_{k→∞} ||V_k|| ||x_{k+1} − x_k|| = 0,

and thus F(x*) = 0.

The error estimate (3.2) follows from

    ||x_{k+p+1} − x_k|| ≤ Σ_{i=k}^{k+p} ||x_{i+1} − x_i|| ≤ Σ_{i=0}^{p} α^{i+1} ||x_k − x_{k−1}|| ≤ (α/(1−α)) ||x_k − x_{k−1}||

by letting p → ∞.

Corollary 3.2 Suppose that the conditions of Theorem 3.1 hold. Assume, further, that for any x, y ∈ S(x_0, r) with F(x) = F(y), there are a V_x ∈ ∂_B F(x) and a V_y ∈ ∂_B F(y) such that

    min(||V_y^{−1} V_x||, ||V_x^{−1} V_y||) < 1/α.      (3.3)

Then S̄(x_0, r̄) contains only one solution x* of (1.1).

Proof: Suppose that there are two solutions x*, y* ∈ S̄(x_0, r̄), and suppose that V_x ∈ ∂_B F(x*) and V_y ∈ ∂_B F(y*) satisfy (3.3). Then B = V_y^{−1} V_x is nonsingular with B^{−1} = V_x^{−1} V_y. Furthermore, since F(x*) = F(y*) = 0, the following inequalities hold:

    ||B(y* − x*)|| = ||V_y^{−1}(F(y*) − F(x*) − V_x(y* − x*))|| ≤ α ||y* − x*||,      (3.4)
    ||B^{−1}(x* − y*)|| = ||V_x^{−1}(F(x*) − F(y*) − V_y(x* − y*))|| ≤ α ||x* − y*||.      (3.5)

By (3.4)-(3.5), we have

    ||x* − y*|| = ||B B^{−1}(x* − y*)|| ≤ ||B|| ||B^{−1}(x* − y*)|| ≤ α ||B|| ||x* − y*||

and

    ||x* − y*|| = ||B^{−1} B(x* − y*)|| ≤ ||B^{−1}|| ||B(x* − y*)|| ≤ α ||B^{−1}|| ||x* − y*||.

This implies

    ||x* − y*|| ≤ α min(||B||, ||B^{−1}||) ||x* − y*|| < ||x* − y*||

whenever x* ≠ y*. Hence ||x* − y*|| = 0, i.e., x* = y*, and x* is the unique solution of (1.1) in S̄(x_0, r̄).

We say that F is semismooth at x if

    lim_{V ∈ ∂F(x + th), t↓0} { V h }

exists for any h ∈ R^n. If F is semismooth at x, then

    ||F(x + h) − F(x) − V h|| = o(||h||),

where V ∈ ∂_B F(x + h). If F is BD-regular at x, then there are a neighborhood N_x of x and a constant c such that F is BD-regular in N_x and, for any y ∈ N_x, sup_{V ∈ ∂_B F(y)} ||V^{−1}|| ≤ c. Furthermore, if F is semismooth and BD-regular at x*, then the generalized Newton method (1.5) converges superlinearly to a solution x* of (1.1) [23], [25]. Semismoothness was originally introduced by Mifflin [16]. There are a number of functions which are semismooth but not differentiable, e.g. piecewise smooth functions [21], [23], [25]. If F is semismooth in a neighborhood N_{x*} of x* and F is BD-regular at x*, then for any α ∈ (0, 1) there exists a ball S(x*, r) ⊆ N_{x*} such that the conditions of Theorem 3.1 hold.

4 A verification method

Alefeld, Gienger and Potra [1] established the following theorem on the Krawczyk operator.

Theorem 4.1 [1] Assume that the mapping F : D ⊂ R^n → R^n is differentiable and the derivative has an interval arithmetic evaluation F'([x]) for all [x] ⊆ D such that

    ||d(F'([x]))||_∞ ≤ γ ||d([x])||_∞,  [x] ⊆ D,      (4.1)

for some γ ≥ 0. If A ∈ F'([x]), then the inequality

    ||d(K([x], x, A))||_∞ ≤ β γ ||d([x])||_∞^2

holds with β = ||A^{−1}||_∞.

The verification method proposed in [1] is based on Theorem 4.1 and the Newton–Kantorovich theorem. It is well known that the Newton–Kantorovich theorem assumes that F is differentiable in D and that

    ||F'(x) − F'(y)|| ≤ γ ||x − y||   for x, y ∈ D      (4.2)

for some γ > 0. Note that conditions (4.1) and (4.2) cannot simply be generalized to the case where F is not differentiable in D.

Proposition 4.2 Assume that F is Lipschitz continuous in D. If

    ||d([L_[x]])||_∞ ≤ γ ||d([x])||_∞      (4.3)

for all [x] ⊆ D, then F is continuously differentiable in D.

Proof: For any point x ∈ D, [x] = [x, x] = x is a degenerate interval. Since d([x]) = 0, (4.3) implies d([L_[x]]) = 0. By the definition of [L_[x]], the generalized Jacobian ∂F(x) reduces to a singleton. From a result in [8], F is then continuously differentiable in D and F'(x) ∈ ∂F(x). Similarly, the assumption

    ||∂F(x) − ∂F(y)|| ≤ γ ||x − y||   for x, y ∈ D

implies that F is continuously differentiable in D.

The verification method proposed in this paper is based on the generalized Krawczyk operator (2.4) and Theorem 3.1. Assume that the generalized Newton method (1.5) converges to x* and that F is semismooth in a neighborhood N_{x*} of x*. Then the method (1.5) is superlinearly convergent and, for any α ∈ (0, 1), there is an integer k_0 > 0 such that for any k ≥ k_0,

    ||V_{k+1}^{−1}(F(x_{k+1}) − F(x_k) − V_k(x_{k+1} − x_k))|| ≤ α ||x_{k+1} − x_k||

and

    ||x_{k+1} − x*|| ≤ (α/(1−α)) ||x_{k+1} − x_k|| ≤ ||x_{k+1} − x_k||.

Therefore, we consider the interval

    [x] = { x : ||x − x_{k+1}||_∞ ≤ δ_k } = x_{k+1} + [−δ_k e, δ_k e],

where δ_k = ||x_{k+1} − x_k||_∞ and e = (1, 1, ..., 1)^T. The diameter of [x] is d([x]) = 2 δ_k e and the midpoint of [x] is m([x]) = x_{k+1}. Using [x], x_{k+1} and an n×n nonsingular matrix A, we construct a generalized Krawczyk operator

    B([x], x_{k+1}, A) = x_{k+1} − A^{−1} F(x_{k+1}) + (I − A^{−1}[L_[x]])([x] − x_{k+1}).

Then the diameter and the midpoint of B([x], x_{k+1}, A) have the form

    d(B([x], x_{k+1}, A)) = d((I − A^{−1}[L_[x]])([x] − x_{k+1})),
    m(B([x], x_{k+1}, A)) = x_{k+1} − A^{−1} F(x_{k+1}).

Noticing that ([x] + [y])[z] ⊆ [x][z] + [y][z] and [x] − x_{k+1} = [−δ_k e, δ_k e], we calculate

    (I − A^{−1}[L_[x]])([x] − x_{k+1}) = ( Σ_{j=1}^{n} (I − A^{−1}[L_[x]])_{1j}, ..., Σ_{j=1}^{n} (I − A^{−1}[L_[x]])_{nj} )^T [−δ_k, δ_k].

Theorem 4.3 Assume that F is Lipschitz continuous. Then

    B([x], x_{k+1}, A) ⊆ [x]      (4.4)

if and only if

    ||A^{−1} F(x_{k+1})||_∞ ≤ (1 − ||(I − A^{−1}[L_[x]])e||_∞) δ_k.      (4.5)

Proof: First we show that (4.4) is equivalent to

    ||A^{−1} F(x_{k+1})||_∞ + (1/2) ||d((I − A^{−1}[L_[x]])([x] − x_{k+1}))||_∞ ≤ δ_k.      (4.6)

Let p(x) = x − A^{−1} F(x); then p(x_{k+1}) = m(B([x], x_{k+1}, A)). If (4.4) holds, then

    ||A^{−1} F(x_{k+1})||_∞ = ||m([x]) − p(x_{k+1})||_∞ ≤ (1/2)(||d([x])||_∞ − ||d(B([x], x_{k+1}, A))||_∞),

and thus (4.6) holds. If (4.6) holds, then for any x ∈ B([x], x_{k+1}, A),

    ||x − x_{k+1}||_∞ ≤ ||x − p(x_{k+1})||_∞ + ||p(x_{k+1}) − x_{k+1}||_∞
                      ≤ (1/2) ||d(B([x], x_{k+1}, A))||_∞ + ||A^{−1} F(x_{k+1})||_∞ ≤ δ_k,

and thus (4.4) holds. Now we show that (4.5) and (4.6) are equivalent. As [x] − x_{k+1} is a symmetric interval, Theorem 10 in Chapter 2 of [2] implies

    ||d((I − A^{−1}[L_[x]])([x] − x_{k+1}))||_∞
        = ||d(( Σ_{j=1}^{n} (I − A^{−1}[L_[x]])_{1j}, ..., Σ_{j=1}^{n} (I − A^{−1}[L_[x]])_{nj} )^T [−δ_k, δ_k])||_∞
        = max_i | Σ_{j=1}^{n} (I − A^{−1}[L_[x]])_{ij} | · 2 δ_k
        = 2 δ_k ||(I − A^{−1}[L_[x]])e||_∞.

Hence the three inequalities (4.4)-(4.6) are equivalent.

Now we are ready to propose the verification method.

Algorithm 4.4 Let ε > 0 be a given tolerance. Let x_k be a candidate for a solution of (1.1), and define z = x_k − V_k^{−1} F(x_k) and δ = ||z − x_k||_∞. Choose an n×n nonsingular matrix A. Define [x] = z + [−δe, δe], e = (1, 1, ..., 1)^T, and calculate [L_[x]]. If

    ||A^{−1} F(z)||_∞ ≤ ε δ      (4.7)

and

    ||(I − A^{−1}[L_[x]])e||_∞ ≤ 1 − ε,      (4.8)

then (1.1) has a solution x* ∈ B([x], z, A).

The candidate x_k can be obtained by an appropriate method for solving nonsmooth equations [3], [6], [7], [11], [12], [20], [21], [22], [24], [25], [26]. The inequality (4.7) is a stopping criterion for an algorithm for solving the nonsmooth equations (1.1). We cannot guarantee that (4.7) alone implies that the interval [x] contains a solution x* of (1.1); however, it is reasonable to expect that [x] is very likely to contain a solution x*. Furthermore, the inequality (4.7) can be checked easily.
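The two tests of Algorithm 4.4 are cheap to evaluate. The sketch below (an illustration, not from the paper) runs the algorithm with A = I on Example 1.1, starting the Newton step from the point x̂; g(x) = x + (0, 1)^T is the reconstructed assumption used throughout these sketches.

```python
import numpy as np

# Algorithm 4.4 with A = I on Example 1.1 (a minimal sketch).
M = np.array([[0.780, 0.563], [0.913, 0.659]])
p = np.array([0.217, 0.254])
N = np.eye(2)
q = np.array([0.0, 1.0])
F = lambda x: np.minimum(M @ x - p, N @ x + q)

def verify(x_k, V_k, eps=1e-7):
    # one generalized Newton step gives the candidate z and radius delta
    z = x_k - np.linalg.solve(V_k, F(x_k))
    delta = np.linalg.norm(z - x_k, np.inf)
    # [L_[x]] = [min(M,N), max(M,N)]; row sums of I - [L] as intervals
    lo, hi = np.minimum(M, N), np.maximum(M, N)
    slo = (np.eye(2) - hi) @ np.ones(2)
    shi = (np.eye(2) - lo) @ np.ones(2)
    sigma = np.max(np.maximum(np.abs(slo), np.abs(shi)))  # ||(I-[L])e||_inf
    rho = np.linalg.norm(F(z), np.inf) / delta            # test (4.7)
    return z, delta, (rho <= eps) and (sigma <= 1 - eps)  # tests (4.7)-(4.8)

x_k = np.array([0.0, 0.254 / 0.659])
V_k = M   # near x_k both components of the min are attained by f
z, delta, ok = verify(x_k, V_k)
```

With these data the step lands on z ≈ (1, −1), the quotient in (4.7) is of roundoff size, and ||(I − [L_[x]])e||_∞ ≈ 0.913 < 1, so the existence of a solution in the box z + [−δe, δe] is certified.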

5 Numerical experiments

First we illustrate Algorithm 4.4 by Example 1.1. Next we test our method on a test problem from [15]. As we will see in the numerical examples, the stopping criterion (4.7) works well in practice. The numerical results were obtained using Matlab 4.2c on a Sun 2000 workstation. The rounding error in Matlab is about 10^{−16}. To give safe error bounds, condition (4.8) was checked in the form ||(I − A^{−1}[L_[x]])e||_∞ < 1 − ε − (rounding error).

Example 5.1 To illustrate Algorithm 4.4, we consider Example 1.1. Let x_0 = x̄ = (0.0, 0.217/0.563) and ε = 10^{−7}. We use the generalized Newton method (1.5) to solve this problem and choose A = I. The numerical results are as follows:

    ||A^{−1} F(x_1)||_∞ / ||x_1 − x_0||_∞ = 2.… × 10^{−1},
    ||A^{−1} F(x_2)||_∞ / ||x_2 − x_1||_∞ = 4.… × 10^{−17}.

Let z = x_2 = (1.…, −1.…)^T and go to step 1 of Algorithm 4.4. Then δ = ||x_2 − x_1||_∞ = 1.3854, [x] = z + [−δe, δe],

    [L_[x]] = ( [0.780, 1]   [0, 0.563] )
              ( [0, 0.913]   [0.659, 1] ),

    (I − A^{−1}[L_[x]])e = ( 1 − [0.780, 1.563] ) = ( [−0.563, 0.220] )
                           ( 1 − [0.659, 1.913] )   ( [−0.913, 0.341] ),

and ||(I − A^{−1}[L_[x]])e||_∞ = 0.913 ≤ 1 − ε. From

    ||A^{−1} F(x_2)||_∞ / ||x_2 − x_1||_∞ + ||(I − A^{−1}[L_[x]])e||_∞ ≤ 1 − 10^{−2},

we know that there is a solution x* of (1.1) in B([x], z, A) ⊆ [x].

We can make d(B([x], z, A)) smaller by choosing an appropriate matrix A. For example, choosing A = diag(1/0.8, 1/0.7) gives

    (I − A^{−1}[L_[x]])e = ( 1 − 0.8 [0.780, 1.563] ) = ( [−0.2504, 0.3760] )
                           ( 1 − 0.7 [0.659, 1.913] )   ( [−0.3391, 0.5387] )

and

    B([x], z, A) ⊆ z + δ ( [−0.3760, 0.3760] )
                         ( [−0.5387, 0.5387] ).

Example 5.2 Extended Powell badly scaled function [15]. Recently, Luksan [15] published a collection of 17 test problems for the iterative solution of smooth equations. From among these we choose the extended Powell badly scaled function, which is defined by φ(x) = (φ_1(x), ..., φ_n(x))^T with

    φ_i(x) = 10000 x_i x_{i+1} − 1                       if i is odd,
    φ_i(x) = exp(−x_{i−1}) + exp(−x_i) − 1.0001          if i is even,

where n is an even number. Let x* = (0, 1, 0, 1, ...)^T ∈ R^n, f(x) = φ(x) − φ(x*) and g(x) = x. We define

    F(x) = min(f(x), x).

The point x* is a solution of the system F(x) = 0. Since f_i(x*) = g_i(x*) and f'_i(x*) ≠ g'_i(x*) for all odd i in {1, 2, ..., n}, F is not differentiable at x*. We chose V_k = ((V_k)_1, ..., (V_k)_n)^T with

    (V_k)_i = g'_i(x_k)   if f_i(x_k) ≥ g_i(x_k),
              f'_i(x_k)   otherwise,   i = 1, ..., n.      (5.1)

Then V_k ∈ ∂_B F(x_k) by the definition of ∂_B F(x_k). The interval matrix is defined by [L_[x]] = [l, l̄], where

    l_{i,j} = min{ f'_{i,j}(x), g'_{i,j}(x) : x ∈ [x] },
    l̄_{i,j} = max{ f'_{i,j}(x), g'_{i,j}(x) : x ∈ [x] }.

We used the generalized Newton method (1.5) to solve this problem, defining V_k by (5.1). We chose A = diag(10^5, 10^2, 10^5, 10^2, ...) and ε = 10^{−7}. The initial point x_0 ∈ (0, 1)^n was randomly generated. The example is characterized by the parameters listed in Table 1, and Table 2 gives some typical results for various values of n. From ρ_k + σ_k < 1 − 10^{−5}, we know that there is a solution in [x] = x_{k+1} + [−δ_k e, δ_k e], where e = (1, 1, ..., 1)^T.

    n       number of unknowns and equations
    k + 1   number of iterations until (4.7) is fulfilled with z = x_{k+1}
    δ_k     ||x_{k+1} − x_k||_∞
    ρ_k     ||A^{−1} F(x_{k+1})||_∞ / δ_k
    σ_k     ||(I − A^{−1}[L_[x]])e||_∞

    Table 1: Parameters in Table 2

Acknowledgements The author thanks T. Yamamoto and two referees for their valuable comments.
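A small instance of Example 5.2 can be reproduced directly. The sketch below (an illustration, not from the paper) sets up F for n = 2 and runs the generalized Newton method (1.5) with V_k chosen by rule (5.1); unlike the paper's experiments it uses a fixed perturbation of x* rather than a random starting point.

```python
import numpy as np

# Example 5.2 for n = 2: F(x) = min(f(x), x) with f(x) = phi(x) - phi(x*).
n = 2
x_star = np.array([0.0, 1.0])

def phi(x):
    return np.array([10000.0 * x[0] * x[1] - 1.0,
                     np.exp(-x[0]) + np.exp(-x[1]) - 1.0001])

def phi_jac(x):
    return np.array([[10000.0 * x[1], 10000.0 * x[0]],
                     [-np.exp(-x[0]), -np.exp(-x[1])]])

f = lambda x: phi(x) - phi(x_star)
F = lambda x: np.minimum(f(x), x)

def newton_step(x):
    # rule (5.1): row i is e_i if f_i(x) >= x_i, and f'_i(x) otherwise
    V = np.where((f(x) >= x)[:, None], np.eye(n), phi_jac(x))
    return x - np.linalg.solve(V, F(x))

x = x_star + np.array([0.01, -0.01])   # fixed perturbation of x*
for _ in range(8):
    x = newton_step(x)
```

By construction f(x*) = 0 and x* ≥ 0, so F(x*) = 0 exactly, and the iterates return to x* in a handful of steps, consistent with the small iteration counts reported in Table 2.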

    n    k+1    δ_k             ρ_k              σ_k
    …    …      …× 10^{−5}      4.… × 10^{−8}    7.… × 10^{−1}
    …    …      …× 10^{−5}      7.… × 10^{−8}    6.… × 10^{−1}
    …    …      …× 10^{−5}      1.… × 10^{−7}    9.… × 10^{−1}
    …    …      …× 10^{−5}      1.… × 10^{−7}    9.… × 10^{−1}

    Table 2: Numerical results for Example 5.2

References

[1] G. Alefeld, A. Gienger and F. Potra, Efficient numerical validation of solutions of nonlinear systems, SIAM J. Numerical Analysis 31 (1994).
[2] G. Alefeld and J. Herzberger, Introduction to Interval Computations, Academic Press, New York and London.
[3] X. Chen, On the convergence of Broyden-like methods for nonlinear equations with nondifferentiable terms, Annals of the Institute of Statistical Mathematics 42 (1990).
[4] X. Chen, Convergence of the BFGS method for LC^1 convex constrained optimization, to appear in: SIAM J. Control & Optimization.
[5] X. Chen and D. Wang, On the optimal properties of the Krawczyk-type interval operator, International J. Computer Mathematics 29 (1989).
[6] X. Chen, M. Z. Nashed and L. Qi, Convergence of Newton's method for singular smooth and nonsmooth equations using outer inverses, to appear in: SIAM J. Optimization.
[7] X. Chen and T. Yamamoto, On the convergence of some quasi-Newton methods for nonlinear equations with nondifferentiable operators, Computing 49 (1992).
[8] F. H. Clarke, Optimization and Nonsmooth Analysis, John Wiley, New York.
[9] A. Frommer and G. Mayer, Safe bounds for the solutions of nonlinear problems using a parallel multisplitting method, Computing 42 (1989).
[10] A. Frommer and G. Mayer, On the R-order of Newton-like methods for enclosing solutions of nonlinear equations, SIAM J. Numerical Analysis 27 (1990).
[11] S. P. Han, J. S. Pang and N. Rangaraj, Globally convergent Newton methods for nonsmooth equations, Mathematics of Operations Research 17 (1992).

[12] C. M. Ip and J. Kyparisis, Local convergence of quasi-Newton methods for B-differentiable equations, Mathematical Programming 56 (1992).
[13] M. Heinkenschloss, C. T. Kelley and H. T. Tran, Fast algorithms for nonsmooth compact fixed point problems, SIAM J. Numerical Analysis 29 (1992).
[14] B. Kummer, Newton's method for non-differentiable functions, in: J. Guddat, B. Bank, H. Hollatz, P. Kall, D. Klatte, B. Kummer, K. Lommatzsch, L. Tammer, M. Vlach and K. Zimmerman, eds., Advances in Mathematical Optimization, Akademie-Verlag, Berlin.
[15] L. Luksan, Inexact trust region method for large sparse systems of nonlinear equations, J. Optimization Theory and Applications 81 (1994).
[16] R. Mifflin, Semismooth and semiconvex functions in constrained optimization, SIAM J. Control & Optimization 15 (1977).
[17] R. E. Moore and L. Qi, A successive interval test for nonlinear systems, SIAM J. Numerical Analysis 19 (1982).
[18] M. Z. Nashed and X. Chen, Convergence of Newton-like methods for singular operator equations using outer inverses, Numerische Mathematik 66 (1993).
[19] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York and London.
[20] J. S. Pang, Newton methods for B-differentiable equations, Mathematics of Operations Research 15 (1990).
[21] J. S. Pang and L. Qi, Nonsmooth equations: motivation and algorithms, SIAM J. Optimization 3 (1993).
[22] L. Qi, A note on the Moore test for nonlinear systems, SIAM J. Numerical Analysis 19 (1982).
[23] L. Qi, Convergence analysis of some algorithms for solving nonsmooth equations, Mathematics of Operations Research 18 (1993).
[24] L. Qi and X. Chen, A globally convergent successive approximation method for nonsmooth equations, SIAM J. Control & Optimization 33 (1995).
[25] L. Qi and J. Sun, A nonsmooth version of Newton's method, Mathematical Programming 58 (1993).
[26] D. Ralph, Global convergence of damped Newton's method for nonsmooth equations via the path search, Mathematics of Operations Research 19 (1994).

[27] T. Yamamoto, A method for finding sharp error bounds for Newton's method under Kantorovich assumptions, Numerische Mathematik 49 (1986).
[28] T. Yamamoto and X. Chen, Validated methods for solving nonlinear systems, Information Processing 31 (1990).


Linear Bilevel Programming With Upper Level Constraints Depending on the Lower Level Solution Linear Bilevel Programming With Upper Level Constraints Depending on the Lower Level Solution Ayalew Getachew Mersha and Stephan Dempe October 17, 2005 Abstract Focus in the paper is on the definition

More information

Lecture 2. Topology of Sets in R n. August 27, 2008

Lecture 2. Topology of Sets in R n. August 27, 2008 Lecture 2 Topology of Sets in R n August 27, 2008 Outline Vectors, Matrices, Norms, Convergence Open and Closed Sets Special Sets: Subspace, Affine Set, Cone, Convex Set Special Convex Sets: Hyperplane,

More information

J. Weston, A. Gammerman, M. Stitson, V. Vapnik, V. Vovk, C. Watkins. Technical Report. February 5, 1998

J. Weston, A. Gammerman, M. Stitson, V. Vapnik, V. Vovk, C. Watkins. Technical Report. February 5, 1998 Density Estimation using Support Vector Machines J. Weston, A. Gammerman, M. Stitson, V. Vapnik, V. Vovk, C. Watkins. Technical Report CSD-TR-97-3 February 5, 998!()+, -./ 3456 Department of Computer Science

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 1 A. d Aspremont. Convex Optimization M2. 1/49 Today Convex optimization: introduction Course organization and other gory details... Convex sets, basic definitions. A. d

More information

SEQUENCES, MATHEMATICAL INDUCTION, AND RECURSION

SEQUENCES, MATHEMATICAL INDUCTION, AND RECURSION CHAPTER 5 SEQUENCES, MATHEMATICAL INDUCTION, AND RECURSION Alessandro Artale UniBZ - http://www.inf.unibz.it/ artale/ SECTION 5.5 Application: Correctness of Algorithms Copyright Cengage Learning. All

More information

Lecture 9 - Matrix Multiplication Equivalences and Spectral Graph Theory 1

Lecture 9 - Matrix Multiplication Equivalences and Spectral Graph Theory 1 CME 305: Discrete Mathematics and Algorithms Instructor: Professor Aaron Sidford (sidford@stanfordedu) February 6, 2018 Lecture 9 - Matrix Multiplication Equivalences and Spectral Graph Theory 1 In the

More information

The Solution Set of Interval Linear Equations Is Homeomorphic to the Unit Cube: An Explicit Construction

The Solution Set of Interval Linear Equations Is Homeomorphic to the Unit Cube: An Explicit Construction The Solution Set of Interval Linear Equations Is Homeomorphic to the Unit Cube: An Explicit Construction J. Rohn Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic rohn@cs.cas.cz

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 6

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 6 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 6 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 19, 2012 Andre Tkacenko

More information

Pairs with no starting point. input with starting point. Pairs with curve piece or startng point. disjoint pairs. small pairs. Resulting.

Pairs with no starting point. input with starting point. Pairs with curve piece or startng point. disjoint pairs. small pairs. Resulting. reliable starting vectors for curve calculation. So far we did not try any algorithm to handle these patch pairs; we just kept their diameter " smaller than the display precision and treated these pairs

More information

Lecture 19: Convex Non-Smooth Optimization. April 2, 2007

Lecture 19: Convex Non-Smooth Optimization. April 2, 2007 : Convex Non-Smooth Optimization April 2, 2007 Outline Lecture 19 Convex non-smooth problems Examples Subgradients and subdifferentials Subgradient properties Operations with subgradients and subdifferentials

More information

Chapter 6. Curves and Surfaces. 6.1 Graphs as Surfaces

Chapter 6. Curves and Surfaces. 6.1 Graphs as Surfaces Chapter 6 Curves and Surfaces In Chapter 2 a plane is defined as the zero set of a linear function in R 3. It is expected a surface is the zero set of a differentiable function in R n. To motivate, graphs

More information

Polynomials tend to oscillate (wiggle) a lot, even when our true function does not.

Polynomials tend to oscillate (wiggle) a lot, even when our true function does not. AMSC/CMSC 460 Computational Methods, Fall 2007 UNIT 2: Spline Approximations Dianne P O Leary c 2001, 2002, 2007 Piecewise polynomial interpolation Piecewise polynomial interpolation Read: Chapter 3 Skip:

More information

Chapter 7. Nearest Point Problems on Simplicial Cones 315 where M is a positive denite symmetric matrix of order n. Let F be a nonsingular matrix such

Chapter 7. Nearest Point Problems on Simplicial Cones 315 where M is a positive denite symmetric matrix of order n. Let F be a nonsingular matrix such Chapter 7 NEAREST POINT PROBLEMS ON SIMPLICIAL CONES Let ; = fb. 1 ::: B. n g be a given linearly independent set of column vectors in R n, and let b 2 R n be another given column vector. Let B = (B. 1

More information

Convergence of C 2 Deficient Quartic Spline Interpolation

Convergence of C 2 Deficient Quartic Spline Interpolation Advances in Computational Sciences and Technology ISSN 0973-6107 Volume 10, Number 4 (2017) pp. 519-527 Research India Publications http://www.ripublication.com Convergence of C 2 Deficient Quartic Spline

More information

arxiv: v3 [math.oc] 3 Nov 2016

arxiv: v3 [math.oc] 3 Nov 2016 BILEVEL POLYNOMIAL PROGRAMS AND SEMIDEFINITE RELAXATION METHODS arxiv:1508.06985v3 [math.oc] 3 Nov 2016 JIAWANG NIE, LI WANG, AND JANE J. YE Abstract. A bilevel program is an optimization problem whose

More information

Quasilinear First-Order PDEs

Quasilinear First-Order PDEs MODULE 2: FIRST-ORDER PARTIAL DIFFERENTIAL EQUATIONS 16 Lecture 3 Quasilinear First-Order PDEs A first order quasilinear PDE is of the form a(x, y, z) + b(x, y, z) x y = c(x, y, z). (1) Such equations

More information

Generalized Nash Equilibrium Problem: existence, uniqueness and

Generalized Nash Equilibrium Problem: existence, uniqueness and Generalized Nash Equilibrium Problem: existence, uniqueness and reformulations Univ. de Perpignan, France CIMPA-UNESCO school, Delhi November 25 - December 6, 2013 Outline of the 7 lectures Generalized

More information

Boundary-Based Interval Newton s Method. Интервальный метод Ньютона, основанный на границе

Boundary-Based Interval Newton s Method. Интервальный метод Ньютона, основанный на границе Interval Computations No 4, 1993 Boundary-Based Interval Newton s Method L. Simcik and P. Linz The boundary based method for approximating solutions to nonlinear systems of equations has a number of advantages

More information

Projection of a Smooth Space Curve:

Projection of a Smooth Space Curve: Projection of a Smooth Space Curve: Numeric and Certified Topology Computation Marc Pouget Joint work with Rémi Imbach Guillaume Moroz 1 Projection and apparent contour 3D curve = intersection of 2 implicit

More information

DM545 Linear and Integer Programming. Lecture 2. The Simplex Method. Marco Chiarandini

DM545 Linear and Integer Programming. Lecture 2. The Simplex Method. Marco Chiarandini DM545 Linear and Integer Programming Lecture 2 The Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1. 2. 3. 4. Standard Form Basic Feasible Solutions

More information

All 0-1 Polytopes are. Abstract. We study the facial structure of two important permutation polytopes

All 0-1 Polytopes are. Abstract. We study the facial structure of two important permutation polytopes All 0-1 Polytopes are Traveling Salesman Polytopes L.J. Billera and A. Sarangarajan y Abstract We study the facial structure of two important permutation polytopes in R n2, the Birkho or assignment polytope

More information

= w. w u. u ; u + w. x x. z z. y y. v + w. . Remark. The formula stated above is very important in the theory of. surface integral.

= w. w u. u ; u + w. x x. z z. y y. v + w. . Remark. The formula stated above is very important in the theory of. surface integral. 1 Chain rules 2 Directional derivative 3 Gradient Vector Field 4 Most Rapid Increase 5 Implicit Function Theorem, Implicit Differentiation 6 Lagrange Multiplier 7 Second Derivative Test Theorem Suppose

More information

An Introduction to Numerical Analysis

An Introduction to Numerical Analysis Weimin Han AMCS & Dept of Math University of Iowa MATH:38 Example 1 Question: What is the area of the region between y = e x 2 and the x-axis for x 1? Answer: Z 1 e x 2 dx = : 1.9.8.7.6.5.4.3.2.1 1.5.5

More information

Sets. De Morgan s laws. Mappings. Definition. Definition

Sets. De Morgan s laws. Mappings. Definition. Definition Sets Let X and Y be two sets. Then the set A set is a collection of elements. Two sets are equal if they contain exactly the same elements. A is a subset of B (A B) if all the elements of A also belong

More information

MATH3016: OPTIMIZATION

MATH3016: OPTIMIZATION MATH3016: OPTIMIZATION Lecturer: Dr Huifu Xu School of Mathematics University of Southampton Highfield SO17 1BJ Southampton Email: h.xu@soton.ac.uk 1 Introduction What is optimization? Optimization is

More information

arxiv: v1 [math.na] 20 Jun 2014

arxiv: v1 [math.na] 20 Jun 2014 Iterative methods for the inclusion of the inverse matrix Marko D Petković University of Niš, Faculty of Science and Mathematics Višegradska 33, 18000 Niš, Serbia arxiv:14065343v1 [mathna 20 Jun 2014 Miodrag

More information

Bounds on the signed domination number of a graph.

Bounds on the signed domination number of a graph. Bounds on the signed domination number of a graph. Ruth Haas and Thomas B. Wexler September 7, 00 Abstract Let G = (V, E) be a simple graph on vertex set V and define a function f : V {, }. The function

More information

THE GROWTH OF LIMITS OF VERTEX REPLACEMENT RULES

THE GROWTH OF LIMITS OF VERTEX REPLACEMENT RULES THE GROWTH OF LIMITS OF VERTEX REPLACEMENT RULES JOSEPH PREVITE, MICHELLE PREVITE, AND MARY VANDERSCHOOT Abstract. In this paper, we give conditions to distinguish whether a vertex replacement rule given

More information

arxiv: v1 [math.co] 24 Aug 2009

arxiv: v1 [math.co] 24 Aug 2009 SMOOTH FANO POLYTOPES ARISING FROM FINITE PARTIALLY ORDERED SETS arxiv:0908.3404v1 [math.co] 24 Aug 2009 TAKAYUKI HIBI AND AKIHIRO HIGASHITANI Abstract. Gorenstein Fano polytopes arising from finite partially

More information

Definition. Given a (v,k,λ)- BIBD, (X,B), a set of disjoint blocks of B which partition X is called a parallel class.

Definition. Given a (v,k,λ)- BIBD, (X,B), a set of disjoint blocks of B which partition X is called a parallel class. Resolvable BIBDs Definition Given a (v,k,λ)- BIBD, (X,B), a set of disjoint blocks of B which partition X is called a parallel class. A partition of B into parallel classes (there must be r of them) is

More information

Chapter 15 Vector Calculus

Chapter 15 Vector Calculus Chapter 15 Vector Calculus 151 Vector Fields 152 Line Integrals 153 Fundamental Theorem and Independence of Path 153 Conservative Fields and Potential Functions 154 Green s Theorem 155 urface Integrals

More information

Block-based Thiele-like blending rational interpolation

Block-based Thiele-like blending rational interpolation Journal of Computational and Applied Mathematics 195 (2006) 312 325 www.elsevier.com/locate/cam Block-based Thiele-like blending rational interpolation Qian-Jin Zhao a, Jieqing Tan b, a School of Computer

More information

SEQUENCES, MATHEMATICAL INDUCTION, AND RECURSION

SEQUENCES, MATHEMATICAL INDUCTION, AND RECURSION CHAPTER 5 SEQUENCES, MATHEMATICAL INDUCTION, AND RECURSION Copyright Cengage Learning. All rights reserved. SECTION 5.5 Application: Correctness of Algorithms Copyright Cengage Learning. All rights reserved.

More information

Convex sets and convex functions

Convex sets and convex functions Convex sets and convex functions Convex optimization problems Convex sets and their examples Separating and supporting hyperplanes Projections on convex sets Convex functions, conjugate functions ECE 602,

More information

Aspects of Convex, Nonconvex, and Geometric Optimization (Lecture 1) Suvrit Sra Massachusetts Institute of Technology

Aspects of Convex, Nonconvex, and Geometric Optimization (Lecture 1) Suvrit Sra Massachusetts Institute of Technology Aspects of Convex, Nonconvex, and Geometric Optimization (Lecture 1) Suvrit Sra Massachusetts Institute of Technology Hausdorff Institute for Mathematics (HIM) Trimester: Mathematics of Signal Processing

More information

if for every induced subgraph H of G the chromatic number of H is equal to the largest size of a clique in H. The triangulated graphs constitute a wid

if for every induced subgraph H of G the chromatic number of H is equal to the largest size of a clique in H. The triangulated graphs constitute a wid Slightly Triangulated Graphs Are Perfect Frederic Maire e-mail : frm@ccr.jussieu.fr Case 189 Equipe Combinatoire Universite Paris 6, France December 21, 1995 Abstract A graph is triangulated if it has

More information

Signed domination numbers of a graph and its complement

Signed domination numbers of a graph and its complement Discrete Mathematics 283 (2004) 87 92 www.elsevier.com/locate/disc Signed domination numbers of a graph and its complement Ruth Haas a, Thomas B. Wexler b a Department of Mathematics, Smith College, Northampton,

More information

Convexity: an introduction

Convexity: an introduction Convexity: an introduction Geir Dahl CMA, Dept. of Mathematics and Dept. of Informatics University of Oslo 1 / 74 1. Introduction 1. Introduction what is convexity where does it arise main concepts and

More information

Convex sets and convex functions

Convex sets and convex functions Convex sets and convex functions Convex optimization problems Convex sets and their examples Separating and supporting hyperplanes Projections on convex sets Convex functions, conjugate functions ECE 602,

More information

Section 5 Convex Optimisation 1. W. Dai (IC) EE4.66 Data Proc. Convex Optimisation page 5-1

Section 5 Convex Optimisation 1. W. Dai (IC) EE4.66 Data Proc. Convex Optimisation page 5-1 Section 5 Convex Optimisation 1 W. Dai (IC) EE4.66 Data Proc. Convex Optimisation 1 2018 page 5-1 Convex Combination Denition 5.1 A convex combination is a linear combination of points where all coecients

More information

Two-phase matrix splitting methods for asymmetric and symmetric LCP

Two-phase matrix splitting methods for asymmetric and symmetric LCP Two-phase matrix splitting methods for asymmetric and symmetric LCP Daniel P. Robinson Department of Applied Mathematics and Statistics Johns Hopkins University Joint work with Feng, Nocedal, and Pang

More information

Research Interests Optimization:

Research Interests Optimization: Mitchell: Research interests 1 Research Interests Optimization: looking for the best solution from among a number of candidates. Prototypical optimization problem: min f(x) subject to g(x) 0 x X IR n Here,

More information

Advanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs

Advanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs Advanced Operations Research Techniques IE316 Quiz 1 Review Dr. Ted Ralphs IE316 Quiz 1 Review 1 Reading for The Quiz Material covered in detail in lecture. 1.1, 1.4, 2.1-2.6, 3.1-3.3, 3.5 Background material

More information

The Divergence Theorem

The Divergence Theorem The Divergence Theorem MATH 311, Calculus III J. Robert Buchanan Department of Mathematics Summer 2011 Green s Theorem Revisited Green s Theorem: M(x, y) dx + N(x, y) dy = C R ( N x M ) da y y x Green

More information

Optimality certificates for convex minimization and Helly numbers

Optimality certificates for convex minimization and Helly numbers Optimality certificates for convex minimization and Helly numbers Amitabh Basu Michele Conforti Gérard Cornuéjols Robert Weismantel Stefan Weltge May 10, 2017 Abstract We consider the problem of minimizing

More information

Finding a winning strategy in variations of Kayles

Finding a winning strategy in variations of Kayles Finding a winning strategy in variations of Kayles Simon Prins ICA-3582809 Utrecht University, The Netherlands July 15, 2015 Abstract Kayles is a two player game played on a graph. The game can be dened

More information

Solution Methodologies for. the Smallest Enclosing Circle Problem

Solution Methodologies for. the Smallest Enclosing Circle Problem Solution Methodologies for the Smallest Enclosing Circle Problem Sheng Xu 1 Robert M. Freund 2 Jie Sun 3 Tribute. We would like to dedicate this paper to Elijah Polak. Professor Polak has made substantial

More information

MTAEA Convexity and Quasiconvexity

MTAEA Convexity and Quasiconvexity School of Economics, Australian National University February 19, 2010 Convex Combinations and Convex Sets. Definition. Given any finite collection of points x 1,..., x m R n, a point z R n is said to be

More information

Global Minimization via Piecewise-Linear Underestimation

Global Minimization via Piecewise-Linear Underestimation Journal of Global Optimization,, 1 9 (2004) c 2004 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. Global Minimization via Piecewise-Linear Underestimation O. L. MANGASARIAN olvi@cs.wisc.edu

More information

H = {(1,0,0,...),(0,1,0,0,...),(0,0,1,0,0,...),...}.

H = {(1,0,0,...),(0,1,0,0,...),(0,0,1,0,0,...),...}. II.4. Compactness 1 II.4. Compactness Note. Conway states on page 20 that the concept of compactness is an extension of benefits of finiteness to infinite sets. I often state this idea as: Compact sets

More information

A new mini-max, constrained optimization method for solving worst case problems

A new mini-max, constrained optimization method for solving worst case problems Carnegie Mellon University Research Showcase @ CMU Department of Electrical and Computer Engineering Carnegie Institute of Technology 1979 A new mini-max, constrained optimization method for solving worst

More information

Lecture 4: Convexity

Lecture 4: Convexity 10-725: Convex Optimization Fall 2013 Lecture 4: Convexity Lecturer: Barnabás Póczos Scribes: Jessica Chemali, David Fouhey, Yuxiong Wang Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:

More information

2. Mathematical Notation Our mathematical notation follows Dijkstra [5]. Quantication over a dummy variable x is written (Q x : R:x : P:x). Q is the q

2. Mathematical Notation Our mathematical notation follows Dijkstra [5]. Quantication over a dummy variable x is written (Q x : R:x : P:x). Q is the q Parallel Processing Letters c World Scientic Publishing Company ON THE SPACE-TIME MAPPING OF WHILE-LOOPS MARTIN GRIEBL and CHRISTIAN LENGAUER Fakultat fur Mathematik und Informatik Universitat Passau D{94030

More information

Revisiting the Upper Bounding Process in a Safe Branch and Bound Algorithm

Revisiting the Upper Bounding Process in a Safe Branch and Bound Algorithm Revisiting the Upper Bounding Process in a Safe Branch and Bound Algorithm Alexandre Goldsztejn 1, Yahia Lebbah 2,3, Claude Michel 3, and Michel Rueher 3 1 CNRS / Université de Nantes 2, rue de la Houssinière,

More information

Column-Action Methods in Image Reconstruction

Column-Action Methods in Image Reconstruction Column-Action Methods in Image Reconstruction Per Christian Hansen joint work with Tommy Elfving Touraj Nikazad Overview of Talk Part 1: the classical row-action method = ART The advantage of algebraic

More information

On Unbounded Tolerable Solution Sets

On Unbounded Tolerable Solution Sets Reliable Computing (2005) 11: 425 432 DOI: 10.1007/s11155-005-0049-9 c Springer 2005 On Unbounded Tolerable Solution Sets IRENE A. SHARAYA Institute of Computational Technologies, 6, Acad. Lavrentiev av.,

More information

Open and Closed Sets

Open and Closed Sets Open and Closed Sets Definition: A subset S of a metric space (X, d) is open if it contains an open ball about each of its points i.e., if x S : ɛ > 0 : B(x, ɛ) S. (1) Theorem: (O1) and X are open sets.

More information

Constrained and Unconstrained Optimization

Constrained and Unconstrained Optimization Constrained and Unconstrained Optimization Carlos Hurtado Department of Economics University of Illinois at Urbana-Champaign hrtdmrt2@illinois.edu Oct 10th, 2017 C. Hurtado (UIUC - Economics) Numerical

More information

Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization. Author: Martin Jaggi Presenter: Zhongxing Peng

Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization. Author: Martin Jaggi Presenter: Zhongxing Peng Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization Author: Martin Jaggi Presenter: Zhongxing Peng Outline 1. Theoretical Results 2. Applications Outline 1. Theoretical Results 2. Applications

More information

Chapter 3 Numerical Methods

Chapter 3 Numerical Methods Chapter 3 Numerical Methods Part 1 3.1 Linearization and Optimization of Functions of Vectors 1 Problem Notation 2 Outline 3.1.1 Linearization 3.1.2 Optimization of Objective Functions 3.1.3 Constrained

More information

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 2. Convex Optimization

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 2. Convex Optimization Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 2 Convex Optimization Shiqian Ma, MAT-258A: Numerical Optimization 2 2.1. Convex Optimization General optimization problem: min f 0 (x) s.t., f i

More information

On Soft Topological Linear Spaces

On Soft Topological Linear Spaces Republic of Iraq Ministry of Higher Education and Scientific Research University of AL-Qadisiyah College of Computer Science and Formation Technology Department of Mathematics On Soft Topological Linear

More information

(1) Given the following system of linear equations, which depends on a parameter a R, 3x y + 5z = 2 4x + y + (a 2 14)z = a + 2

(1) Given the following system of linear equations, which depends on a parameter a R, 3x y + 5z = 2 4x + y + (a 2 14)z = a + 2 (1 Given the following system of linear equations, which depends on a parameter a R, x + 2y 3z = 4 3x y + 5z = 2 4x + y + (a 2 14z = a + 2 (a Classify the system of equations depending on the values of

More information

On Fuzzy Topological Spaces Involving Boolean Algebraic Structures

On Fuzzy Topological Spaces Involving Boolean Algebraic Structures Journal of mathematics and computer Science 15 (2015) 252-260 On Fuzzy Topological Spaces Involving Boolean Algebraic Structures P.K. Sharma Post Graduate Department of Mathematics, D.A.V. College, Jalandhar

More information

Convex Optimization MLSS 2015

Convex Optimization MLSS 2015 Convex Optimization MLSS 2015 Constantine Caramanis The University of Texas at Austin The Optimization Problem minimize : f (x) subject to : x X. The Optimization Problem minimize : f (x) subject to :

More information

The Fibonacci hypercube

The Fibonacci hypercube AUSTRALASIAN JOURNAL OF COMBINATORICS Volume 40 (2008), Pages 187 196 The Fibonacci hypercube Fred J. Rispoli Department of Mathematics and Computer Science Dowling College, Oakdale, NY 11769 U.S.A. Steven

More information

Module 4 : Solving Linear Algebraic Equations Section 11 Appendix C: Steepest Descent / Gradient Search Method

Module 4 : Solving Linear Algebraic Equations Section 11 Appendix C: Steepest Descent / Gradient Search Method Module 4 : Solving Linear Algebraic Equations Section 11 Appendix C: Steepest Descent / Gradient Search Method 11 Appendix C: Steepest Descent / Gradient Search Method In the module on Problem Discretization

More information

not made or distributed for profit or commercial advantage and that copies

not made or distributed for profit or commercial advantage and that copies Copyright 2014, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are

More information

Math 5593 Linear Programming Lecture Notes

Math 5593 Linear Programming Lecture Notes Math 5593 Linear Programming Lecture Notes Unit II: Theory & Foundations (Convex Analysis) University of Colorado Denver, Fall 2013 Topics 1 Convex Sets 1 1.1 Basic Properties (Luenberger-Ye Appendix B.1).........................

More information

Convexity and Optimization

Convexity and Optimization Convexity and Optimization Richard Lusby Department of Management Engineering Technical University of Denmark Today s Material Extrema Convex Function Convex Sets Other Convexity Concepts Unconstrained

More information

Some fixed fuzzy point results using Hausdorff metric in fuzzy metric spaces

Some fixed fuzzy point results using Hausdorff metric in fuzzy metric spaces Annals of Fuzzy Mathematics and Informatics Volume 13, No 5, (May 017), pp 641 650 ISSN: 093 9310 (print version) ISSN: 87 635 (electronic version) http://wwwafmiorkr @FMI c Kyung Moon Sa Co http://wwwkyungmooncom

More information

Math 302 Introduction to Proofs via Number Theory. Robert Jewett (with small modifications by B. Ćurgus)

Math 302 Introduction to Proofs via Number Theory. Robert Jewett (with small modifications by B. Ćurgus) Math 30 Introduction to Proofs via Number Theory Robert Jewett (with small modifications by B. Ćurgus) March 30, 009 Contents 1 The Integers 3 1.1 Axioms of Z...................................... 3 1.

More information

Journal of mathematics and computer science 13 (2014),

Journal of mathematics and computer science 13 (2014), Journal of mathematics and computer science 13 (2014), 231-237 Interval Interpolation by Newton's Divided Differences Ali Salimi Shamloo Parisa Hajagharezalou Department of Mathematics, Shabestar Branch,

More information

On Rainbow Cycles in Edge Colored Complete Graphs. S. Akbari, O. Etesami, H. Mahini, M. Mahmoody. Abstract

On Rainbow Cycles in Edge Colored Complete Graphs. S. Akbari, O. Etesami, H. Mahini, M. Mahmoody. Abstract On Rainbow Cycles in Edge Colored Complete Graphs S. Akbari, O. Etesami, H. Mahini, M. Mahmoody Abstract In this paper we consider optimal edge colored complete graphs. We show that in any optimal edge

More information

Locally convex topological vector spaces

Locally convex topological vector spaces Chapter 4 Locally convex topological vector spaces 4.1 Definition by neighbourhoods Let us start this section by briefly recalling some basic properties of convex subsets of a vector space over K (where

More information

Lecture 2 September 3

Lecture 2 September 3 EE 381V: Large Scale Optimization Fall 2012 Lecture 2 September 3 Lecturer: Caramanis & Sanghavi Scribe: Hongbo Si, Qiaoyang Ye 2.1 Overview of the last Lecture The focus of the last lecture was to give

More information

Lecture: Convex Sets

Lecture: Convex Sets /24 Lecture: Convex Sets http://bicmr.pku.edu.cn/~wenzw/opt-27-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/24 affine and convex sets some

More information