Mini-Max Type Robust Optimal Design Combined with Function Regularization


6th World Congress of Structural and Multidisciplinary Optimization, Rio de Janeiro, 30 May - 03 June 2005, Brazil

Noriyasu Hirokawa 1, Kikuo Fujita 2 and Fam Chiou Tzi 3
1 Department of Mechanical Engineering and Biomimetics, Kinki University, Wakayama, Japan, hirokawa@waka.kindai.ac.jp
2 Department of Mechanical Engineering, Osaka University, Osaka, Japan, fujita@mech.eng.osaka-u.ac.jp
3 Department of Mechanical Engineering, Osaka University, Osaka, Japan

1. Abstract
This paper proposes a mini-max type robust optimal design and its enhancement by function regularization. A mini-max type robust optimal design seeks the optimal solution based on the bounding points of the objective function and constraints within a distribution region. In order to obtain the bounding points strictly and efficiently, the distribution region, which generally forms an oblique ellipsoid in the design space, is diagonalized and isoparameterized into a hyper sphere, and the functions are approximated with quadratic response surfaces. Further, the accuracy of the bounding points is enhanced with a function regularization, which transforms function values at sample points so as to construct more precise approximations. The effectiveness is ascertained through applications to two example problems.

2. Keywords: Robust Optimal Design, Intermediate Model, Mini-Max Type Formulation, Quadratic Response Surface, Function Regularization

3. Introduction
Robust optimal design seeks the optimal solution considering the robustness of a design against manufacturing error, environmental change, wear-out and so forth. It optimizes a solution by using an intermediate model which approximates the objective function and constraints within a distribution region of the design variables and design parameters, and evaluates a solution with the approximation. Various types of intermediate models have been proposed for robust optimal design, e.g., worst-case analysis (WCA) (e.g. [1]), statistic analysis (SA) (e.g. [1]), design variable hyper sphere (DVHS) [2], most probable point (MPP) [3], a mini-max type robust optimal design [4] and so forth. These models assume a distribution region such as a hyper rectangular parallelepiped or a hyper ellipsoid, approximate a function within it with a first-order or second-order Taylor series, a quadratic polynomial, etc., and evaluate the feasibility, optimality and sensitivity of a solution with a worst case, statistical features, etc. They have developed from relatively simple models toward more complicated ones in pursuit of accurate solutions.

A mini-max type robust optimal design [4] evaluates a solution directly with the bounding points of the functions within a distribution region in order to strictly guarantee the robustness of a design. To obtain the bounding points strictly and efficiently, it estimates them by transforming the distribution region into a hyper sphere and approximating the functions with quadratic polynomials. Further, in order to enhance the quality of the optimal design, this paper proposes a function regularization for constructing more precise approximations. This paper describes its concept and algorithm, and shows applications to two numerical example problems to ascertain its effectiveness.

4. Robust optimal design and its development
4.1. Definition of optimal design problem
Before discussing robust optimal design, this paper assumes the following formulation of the nominal optimal design problem:

    find            x = [ x_1, x_2, \dots, x_{n_x} ]^T
    that minimizes  f(x, p)
    subject to      g_k(x, p) \le 0   (k = 1, 2, \dots, n_g)                              (1)

The nominal optimal design seeks the best values of the design variables x that minimize the objective function f(x, p) within the feasible region assigned by the constraints g_k(x, p) under the given design parameters p = [ p_1, p_2, \dots, p_{n_p} ]^T. For convenience, the notation v = [ x^T, p^T ]^T = [ v_1, v_2, \dots, v_j, \dots, v_{n_v} ]^T is introduced, where n_v = n_x + n_p, v_j = x_j for 1 \le j \le n_x and v_j = p_{j - n_x} for n_x + 1 \le j \le n_x + n_p.
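Concretely, the nominal problem (1) can be set up with an off-the-shelf solver. A minimal sketch in Python, assuming an illustrative objective and constraint; the function bodies, parameter values and starting point below are placeholders, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def f(x, p):  # illustrative objective f(x, p), not the paper's
    return (x[0] - p[0])**2 + (x[1] - p[1])**2

def g(x, p):  # illustrative constraint; feasible when g(x, p) <= 0
    return x[0] + x[1] - p[2]

p = np.array([1.0, 2.0, 5.0])                       # given design parameters
cons = NonlinearConstraint(lambda x: g(x, p), -np.inf, 0.0)
res = minimize(lambda x: f(x, p), x0=np.zeros(2),
               method="SLSQP", constraints=[cons])
print(res.x)                                        # nominal optimum x*
```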

Figure 1: Development of intermediate models for robust optimal design. (Horizontal axis: assumed distribution region — hyper rectangular parallelepiped, hyper ellipsoid, oblique hyper ellipsoid, i.e., from variance to variance + covariance. Vertical axis: evaluation — weighted-sum vs. worst-case — and approximation — linear vs. quadratic. Entries: WCA, SA, Mini-Max Type, DVHS, MPP, etc.)

4.2. Intermediate models in robust optimal design
A robust optimal design optimizes a solution by evaluating its optimality, feasibility and sensitivity based on the variation of the functions within a distribution region. While this natively involves integration and probability calculation over the design space, these are substituted with an intermediate model. Early intermediate models assume independent variation of the variables and approximate the functions with linear expressions. Worst-case analysis (WCA) and statistic analysis (SA) [1] are typical ones. The former assumes a distribution region shaped as a hyper rectangular parallelepiped with vertices at ( v_{o1} \pm \Delta v_1, v_{o2} \pm \Delta v_2, \dots, v_{o n_v} \pm \Delta v_{n_v} )^T, where v_{oj} is the average of v_j and \Delta v_j is its deviation, and approximates a constraint with a linear expression. It evaluates the feasibility with

    g_i(v_o) + \sum_{j=1}^{n_v} \left| \partial g_i / \partial v_j \right| \Delta v_j \le 0.

The latter assumes a distribution region shaped as a hyper ellipsoid with center v_o = ( v_{o1}, v_{o2}, \dots, v_{o n_v} )^T and radius \kappa \sigma_j along each axis, where \kappa is a parameter for the size of the variation and \sigma_j is the standard deviation of v_j. It approximates a constraint with a linear expression and evaluates the feasibility with

    g_i(v_o) + \kappa \sigma_{g_i} \le 0,

where \sigma_{g_i} is the standard deviation of g_i(v) under the linear approximation. More precise intermediate models were later proposed by introducing correlative relationships among the variables or by approximating the functions with quadratic expressions. Figure 1 shows the development of intermediate models for robust optimal design from the viewpoints of the assumed distribution region (horizontal axis), the evaluation of robustness (right part of the vertical axis) and the approximation of functions (left part of the vertical axis). In the figure, covariance among variables is considered as well as their variances, which corresponds to the movement from "variance" to "variance + covariance" on the horizontal axis. The worst case tends to be used rather than a weighted sum for evaluating the robustness of a solution strictly, which corresponds to "weighted-sum" and "worst-case" on the vertical axis. And a quadratic polynomial is used rather than a linear expression for approximating a function with high fidelity, which corresponds to the movement from "linear" to "quadratic" on the vertical axis. A sketch of the two linear feasibility tests follows.
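Both tests can be written down in a few lines. A minimal sketch, assuming a finite-difference gradient in place of exact derivatives; g, v0, dv, sigma and kappa carry illustrative names and values, not the paper's:

```python
import numpy as np

def grad(g, v0, eps=1e-6):
    """Forward-difference gradient of g at v0 (stand-in for exact derivatives)."""
    g0, n = g(v0), len(v0)
    return np.array([(g(v0 + eps * np.eye(n)[j]) - g0) / eps for j in range(n)])

def wca_margin(g, v0, dv):
    """Worst-case analysis: g(v0) + sum_j |dg/dv_j| dv_j; feasible if <= 0."""
    return g(v0) + np.abs(grad(g, v0)) @ dv

def sa_margin(g, v0, sigma, kappa):
    """Statistic analysis: g(v0) + kappa * sigma_g, with sigma_g from the
    linear approximation and independent variables."""
    sigma_g = np.sqrt(((grad(g, v0) * sigma) ** 2).sum())
    return g(v0) + kappa * sigma_g

g = lambda v: v[0] ** 2 + v[1] - 3.0          # illustrative constraint
v0 = np.array([1.0, 1.0])
dv, sigma = np.array([0.1, 0.1]), np.array([0.05, 0.05])
print(wca_margin(g, v0, dv), sa_margin(g, v0, sigma, kappa=3.0))
```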

4.3. Mini-max type robust optimal design
In order to evaluate the robustness of a solution strictly, the intermediate model proposed in this paper combines the developments in the three directions: the assumption of the distribution region, the approximation of the functions and the evaluation of a solution. Namely, it evaluates the robustness of a solution with the bounding points of the objective function and constraints, obtained by using their quadratic approximations and considering the covariance among the variables. This means that the proposed intermediate model is located in the upper-right part of Fig. 1. The proposed method formulates the robust optimal design as the following mini-max type optimization problem:

    find            x
    that minimizes  \max_{v \in R(v_o, \Sigma, \alpha)} f(v)                               (2)
    subject to      \max_{v \in R(v_o, \Sigma, \alpha)} g_k(v) \le 0   (k = 1, 2, \dots, n_g)

where R(v_o, \Sigma, \alpha) is the distribution region of v. In the formulation, v varies with the mean vector v_o = [ Exp[v_1], Exp[v_2], \dots, Exp[v_{n_v}] ]^T, where Exp[v_j] is the expectation of v_j, and the variance-covariance matrix

    \Sigma = Exp[ (v - v_o)(v - v_o)^T ]
           = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1 n_v} \\ \sigma_{21} & \sigma_{22} & \cdots & \sigma_{2 n_v} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n_v 1} & \sigma_{n_v 2} & \cdots & \sigma_{n_v n_v} \end{bmatrix},

where \sigma_{ij} is the covariance between v_i and v_j. The distribution region generally forms an n_v-dimensional ellipsoid, which is formulated as follows:

    ( v - v_o )^T \Sigma^{-1} ( v - v_o ) \le \chi^2(n_v, \alpha)                          (3)

where \chi^2(n_v, \alpha) is the value of the chi-square distribution with n_v degrees of freedom that leaves probability \alpha in the upper tail. The mini-max type robust optimal design problem includes a bounding point search problem for each function. It is formulated as a maximization problem as follows:

    find            v
    that maximizes  h(v)                                                                   (4)
    subject to      v \in R(v_o, \Sigma, \alpha)

where h(\cdot) is a general expression for the objective function f(\cdot) or a constraint g_k(\cdot). It is indispensable to obtain the bounding point efficiently and strictly in order to obtain the robust optimal solution. Therefore, the proposed method transforms the distribution region into a hyper sphere and approximates each function with a quadratic response surface.
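Membership in the distribution region R(v_o, \Sigma, \alpha) of Eq. (3) reduces to one quadratic form compared against a chi-square quantile. A minimal sketch, assuming SciPy's chi2.ppf(1 - alpha, n_v) realizes "probability \alpha in the upper tail"; the covariance entries below are illustrative:

```python
import numpy as np
from scipy.stats import chi2

def in_distribution_region(v, v_o, Sigma, alpha):
    """True if (v - v_o)^T Sigma^{-1} (v - v_o) <= chi^2(n_v, alpha),
    where chi^2(n_v, alpha) leaves probability alpha in the upper tail."""
    d = v - v_o
    q = d @ np.linalg.solve(Sigma, d)        # quadratic form, no explicit inverse
    return q <= chi2.ppf(1.0 - alpha, df=len(v_o))

v_o = np.array([1.0, 2.0])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])             # illustrative covariance matrix
print(in_distribution_region(np.array([1.2, 2.1]), v_o, Sigma, alpha=0.05))
```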

4.4. Enhancement of mini-max type robust optimal design with function regularization
Since the fidelity of the obtained solution depends on the fidelity of the approximated functions in robust optimal design, this paper introduces a function regularization to obtain the bounding points strictly even for a highly nonlinear function. Figure 2 shows the framework of a robust optimal design with function regularization in comparison with other optimization schemes. A nominal optimal design evaluates a solution with a function value, as shown in Fig. 2 (a). An ordinary robust optimal design evaluates a solution with an approximated function constructed from a set of sample points, as shown in Fig. 2 (b). A function regularization transforms the function values at the sample points through a regularization filter h_\zeta so that the transformed values h_\zeta(h) can be used to build a precise approximation \hat{h}_\zeta(v), as shown in Fig. 2 (c).

Figure 2: Framework for evaluating a solution in optimal design — (a) nominal optimal design, (b) robust optimal design, (c) robust optimal design with function regularization.

Figure 3 illustrates the effect of function regularization for a quadratic response surface approximation. In Fig. 3 (a) and (b), the horizontal axis is a variable v and the vertical one is a function value. Figure 3 (a) shows the original function and its quadratic approximation constructed from a set of sample points. When the original function is highly nonlinear, the approximation differs from the original function. This situation causes an inaccurate robust optimal solution, especially if the approximated bounding point of a constraint is far from the true one. On the other hand, Fig. 3 (b) shows the original function, its regularized counterpart h_\zeta(h) and the quadratic approximation \hat{h}_\zeta(v) of the regularized function. \hat{h}_\zeta(v) has high fidelity, and the approximated bounding point obtained by using \hat{h}_\zeta(v) is expected to be closer to the true one.

Figure 3: Effect of function regularization — (a) intermediate model of the original function: the original function, its quadratic approximation, the boundary of the feasible region, the sample points, and the approximated and true bounding points; (b) intermediate model of the regularized function: the regularized function h_\zeta(h), its quadratic approximation \hat{h}_\zeta(v), the regularized sample points, and the approximated and true bounding points.

5. Algorithm and implementation of mini-max type robust optimal design with function regularization
5.1. Outline of the algorithm
The concept of the proposed robust optimal design is itemized as follows:
(i) In order to evaluate the feasibility and optimality of a solution precisely, the distribution region, which generally forms an oblique ellipsoid, is transformed into a hyper sphere, and each function is approximated with a quadratic response surface.
(ii) A regularization filter transforms the function values at the sample points so that the approximated function fits the sample points.
(iii) A regularization filter must be monotonically increasing against the function value within an assumed region, called the regularization region, so that a one-to-one correspondence is guaranteed.
(iv) The regularization region should be narrow in the later stages of the optimization in order to obtain the optimal solution precisely, and wide in the earlier stages so as to construct an effective filter that remains valid while searching a wide region.

Figure 4 shows the whole algorithm of the proposed method. First, the algorithm initializes a regularization region whose center point is at the initial solution and constructs a regularization filter for each function based on the function values at sample points (from (a) to (d) in Fig. 4). Then, it optimizes a solution by iteratively updating a tentative solution based on the function values at the bounding points, which are obtained with quadratic polynomials built from the regularized function values at the sample points (from (e) to (l) in Fig. 4). This procedure is repeated a predefined number of times while reducing the size of the regularization region.

Figure 4: Algorithm of mini-max type robust optimal design with function regularization. Outer loop — build regularization filters over the regularization region: (a) build regularization filters, (b) initialize or shrink the regularization region, (c) execute sampling over the regularization region, (d) determine the filter parameters. Inner loop — optimize the critical optimality (the feasibility of the distribution region and the inferior extreme of the nominal objective within it) over the design space: (e) generate a tentative design, (f) test the tentative design, (g) regularize a region, (h) build quadratic polynomial approximations; search the extremes of the functions within the distribution region: (i) calculate its optimality, (j) discriminate the type of an extreme in the distribution region, (k) calculate an inner extreme analytically, (l) search a boundary extreme numerically; repeat until converged, then shrink the regularization region until the terminal condition is satisfied.

5.2. Representation of distribution region with hyper sphere
For solving the bounding point search problem strictly and efficiently, the distribution region is transformed into a hyper sphere by the technique of the design variable hyper sphere (DVHS) [2, 4]. First, Eq. (3) is reformed by a diagonalization operation as follows:

    v'^T \Sigma'^{-1} v' \le \chi^2(n_v, \alpha)                                           (5)

where v' = T_1 ( v - v_o ) and \Sigma' = diag\{ \lambda_1, \dots, \lambda_{n_v} \}, by using T_1 = [ e_1, e_2, \dots, e_{n_v} ]^T, in which e_j is a unit eigenvector of the matrix \Sigma and \lambda_j is the corresponding eigenvalue. Then, Eq. (5) is reformed by an isoparameterization operation:

    v''^T v'' \le \chi^2(n_v, \alpha)                                                      (6)

where v'' is determined by v'' = T_2 v', using T_2 = diag( 1/\sqrt{\lambda_1}, \dots, 1/\sqrt{\lambda_{n_v}} ). These equations transform the distribution region from an oblique hyper ellipsoid (Fig. 5 (a)) to an orthogonal hyper ellipsoid whose half axes are \sqrt{\lambda_j \chi^2(n_v, \alpha)} (Fig. 5 (b)), and then to a hyper sphere of radius \sqrt{\chi^2(n_v, \alpha)} (Fig. 5 (c)).

Figure 5: Coordinate transformation on the design space — (a) oblique hyper ellipsoid ( v - v_o )^T \Sigma^{-1} ( v - v_o ) \le \chi^2(n_v, \alpha); diagonalization v' = T_1 ( v - v_o ) yields (b) the orthogonal hyper ellipsoid v'^T \Sigma'^{-1} v' \le \chi^2(n_v, \alpha); isoparameterization v'' = T_2 v' yields (c) the hyper sphere v''^T v'' \le \chi^2(n_v, \alpha).

5.3. Quadratic function approximation
The quadratic polynomial approximation of each function is generated by the least square regression method around v''_o. The method approximates each function so that the approximation exactly matches the original function at v''_o, which corresponds to v_o. Since engineering optimal design problems tend to have the optimal solution on the boundary of the feasible region, a set of sample points is arranged isoparametrically on the boundary of the hyper sphere in order to approximate with high fidelity near the boundary. Finally, the bounding point search problem is formulated as follows:

    find            v''
    that maximizes  h(v'') = h(v_o) + \sum_{i=1}^{n_v} \beta_i v''_i + \sum_{i=1}^{n_v} \sum_{j=i}^{n_v} \beta_{ij} v''_i v''_j          (7)
    subject to      v''^T v'' \le \chi^2(n_v, \alpha)

where \beta_i and \beta_{ij} are unknown coefficients.
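The whitening of Sec. 5.2 and the boundary-sampled least-squares fit of Sec. 5.3 can be sketched as follows, under stated assumptions: NumPy's eigendecomposition stands in for T_1 and T_2, the boundary samples are drawn uniformly on the sphere rather than by the paper's isoparametric arrangement, and the test function is illustrative:

```python
import numpy as np
from scipy.stats import chi2

def whiten(Sigma):
    """Build T = T2 @ T1 so that v'' = T (v - v_o) maps the ellipsoid
    (v - v_o)^T Sigma^{-1} (v - v_o) <= r^2 onto the sphere ||v''|| <= r."""
    lam, E = np.linalg.eigh(Sigma)               # columns of E are eigenvectors e_j
    return np.diag(1.0 / np.sqrt(lam)) @ E.T     # T2 (isoparameterize) @ T1 (diagonalize)

def fit_quadratic(h, v_o, T, r, m=60, rng=np.random.default_rng(0)):
    """Least-squares quadratic response surface in v''-coordinates,
    exact at the center and fitted to samples on the sphere boundary."""
    n = len(v_o)
    U = rng.normal(size=(m, n))
    U *= r / np.linalg.norm(U, axis=1, keepdims=True)        # points on ||v''|| = r
    T_inv = np.linalg.inv(T)
    y = np.array([h(v_o + T_inv @ u) for u in U]) - h(v_o)   # offsets from center value
    # design matrix: linear terms v''_i, then products v''_i v''_j with j >= i
    A = np.hstack([U] + [U[:, [i]] * U[:, i:] for i in range(n)])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta                                  # [beta_1..beta_n, beta_11, beta_12, ...]

Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])                 # illustrative covariance matrix
T = whiten(Sigma)
r = np.sqrt(chi2.ppf(1.0 - 0.05, df=2))          # sphere radius for alpha = 0.05
beta = fit_quadratic(lambda v: v[0]**2 + np.sin(v[1]), np.array([1.0, 2.0]), T, r)
print(beta)
```

The fitted coefficients feed directly into the bounding point search of the next subsection.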

5.4. Search of the bounding point of a function
The bounding point search problem of Eq. (7) is a nonlinear constrained optimization problem whose optimum lies either in the interior or on the boundary. The proposed method first discriminates the type of the bounding point, and then obtains it analytically or numerically.

(1) Inner optimum. When the objective function is upward convex, that is, \nabla^2 h(v'') < 0, and the pole of h(v''), v''^* = -[ \nabla^2 h(v''_o) ]^{-1} \nabla h(v''_o), is inside the distribution region, Eq. (7) has the inner optimum v''^*.

(2) Boundary optimum. Otherwise, the bounding point is on the boundary of the hyper sphere. Since several local optima may exist on the boundary, the objective function is converted into the following form so that \hat{h}(v'') is upward convex and has a unique optimum:

    \hat{h}(v'') = h(v'') - \frac{1}{2} s\, v''^T I v''
                 = h(v_o) + \nabla h(v''_o)^T v'' + \frac{1}{2} v''^T ( \nabla^2 h(v''_o) - s I ) v''          (8)

where I is the n_v-dimensional identity matrix. By determining an appropriate s, \hat{h}(v'') can be converted into an upward convex function whose maximum over the hyper sphere lies on its boundary. This is because \frac{1}{2} s\, v''^T I v'' takes the same value everywhere on the boundary of the hyper sphere, so \hat{h}(v'') takes its maximum where h(v'') takes its maximum on the boundary. The condition is summarized by the following equations:

    ( \nabla^2 h(v''_o) - s I ) v'' = - \nabla h(v''_o)                                    (9)
    \| v'' \| = \sqrt{ \chi^2(n_v, \alpha) }                                               (10)
    s > 0                                                                                  (11)
    \nabla^2 h(v''_o) - s I \preceq 0                                                      (12)

where \| \cdot \| is the Euclidean norm. Eqs. (9) through (11) mean that the stationary point is on the boundary of the hyper sphere, and Eq. (12) means that the matrix \nabla^2 h(v''_o) - s I is negative semidefinite so that \hat{h}(v'') is upward convex.
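A sketch of the two-branch search, assuming the gradient g0 and Hessian H of the fitted quadratic at the center are available (e.g., assembled from the \beta coefficients of Eq. (7)). The shift s is found here by bisection on ||v''(s)|| = r, one possible root-finder; the paper does not prescribe this particular one:

```python
import numpy as np

def bounding_point(g0, H, r):
    """Maximize h(v) = h0 + g0.v + 0.5 v.H.v over ||v|| <= r.
    Inner optimum if H is negative definite and the pole lies inside;
    otherwise a boundary optimum via the shift s of Eqs. (9)-(12)."""
    eig_max = np.linalg.eigvalsh(H)[-1]
    if eig_max < 0.0:                                # upward convex quadratic
        v = -np.linalg.solve(H, g0)                  # pole of h
        if np.linalg.norm(v) <= r:
            return v                                 # inner optimum
    I = np.eye(len(g0))
    v_of = lambda s: -np.linalg.solve(H - s * I, g0) # stationary point, Eq. (9)
    lo = max(eig_max, 0.0) + 1e-9                    # enforce s > 0 and H - sI <= 0
    hi = lo + 1.0
    while np.linalg.norm(v_of(hi)) > r:              # ||v(s)|| decreases as s grows
        hi *= 2.0
    for _ in range(100):                             # bisection on ||v(s)|| = r, Eq. (10)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.linalg.norm(v_of(mid)) > r else (lo, mid)
    return v_of(hi)          # the degenerate "hard case" of trust-region
                             # theory is ignored in this sketch

H = np.array([[2.0, 0.3],
              [0.3, -1.0]])                          # indefinite Hessian: boundary case
print(bounding_point(np.array([1.0, -0.5]), H, r=2.0))
```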

5.5. Adjustment of regularization region
For defining a regularization filter, the regularization region is set as a hyper rectangular parallelepiped whose center point is at the tentative solution v. The size of the region is reduced over K iterations. At the k-th iteration (k = 1, 2, \dots, K), its upper and lower bounds are set to v_i \pm C^{(k)} \kappa \sigma_i (i = 1, 2, \dots, n_v), where \sigma_i is the standard deviation of v_i, \kappa is the factor for the size of the distribution region and C^{(k)} is the factor for the size of the regularization region. C^{(k)} is reduced by multiplying it by the factor C^{(k+1)} / C^{(k)} = ( 1 / C^{(0)} )^{1/K} at every iteration; for instance, with C^{(0)} = 3.0 and K = 5, the region shrinks by a factor of 3.0^{-1/5} \approx 0.80 per iteration. At the final, K-th iteration, C^{(K)} = 1 and the regularization region circumscribes the distribution region.

5.6. Construction of regularization filter
A Fourier series is used for building a regularization filter with a small number of parameters. In order to guarantee that the regularization filter is monotonically increasing, it is configured as a superposition of a linear transformation and a Fourier series that includes only sine terms. The regularization filter takes the form

    h^S_\zeta( \alpha_\zeta, h_S(h) ) = h_S(h) + \sum_{n=1}^{N} b_n \sin( n \pi h_S(h) )          (13)

where h_S(h) is a linear function which transforms the minimum h_min and the maximum h_max of h in the regularization region to 0 and 1, respectively, N is the order of the Fourier series, and \alpha_\zeta = [ b_1, b_2, \dots, b_N ]^T are its coefficients. Under this expression, the regularization filter is defined as h_\zeta( \alpha_\zeta, h ) = h_S^{-1}( h^S_\zeta( \alpha_\zeta, h_S(h) ) ). In order to construct a quadratic polynomial \hat{h}_\zeta( \alpha_\zeta, v ) which has high fidelity against the regularized function, \alpha_\zeta is determined by solving the following mathematical programming problem:

    find            \alpha_\zeta = [ b_1, b_2, \dots, b_N ]^T
    that minimizes  E_\zeta = \frac{1}{n_\zeta} \sum_{k=1}^{n_\zeta} \left[ h_S^{-1}( h^S_\zeta( \alpha_\zeta, h_S( h(v_k) ) ) ) - \hat{h}_\zeta( \alpha_\zeta, v_k ) \right]^2          (14)
    subject to      \partial h^S_\zeta( \alpha_\zeta, h_S(h) ) / \partial h_S(h) \ge 0  over  0 \le h_S(h) \le 1

where n_\zeta is the number of sample points located randomly within the regularization region. The constraint means that the regularization filter must be monotonically increasing over the range 0 \le h_S(h) \le 1; it is replaced with an algebraic form in the computation. Since it is difficult to obtain accurate h_max and h_min values in the regularization region, they are substituted with h_max = h^s_max + \Delta h ( h^s_max - h^s_min ) and h_min = h^s_min - \Delta h ( h^s_max - h^s_min ), where h^s_max and h^s_min are the maximum and minimum function values over all sample points, respectively, and \Delta h (0 < \Delta h < 0.5) is the margin against them.

Further, in the case that h is a constraint, the regularization filter is compensated so that the boundary of the constraint, namely the points at which the function value is 0, does not move before and after regularization. The regularization filter for a constraint is redefined as follows:

    h_\zeta( \alpha_\zeta, h_0, h ) = h_S^{-1}( h^S_\zeta( \alpha_\zeta, h_S(h) ) ) - h_0          (15)

where h_0 is the parameter for the compensation. It is determined as follows: (i) if h^s_min \le 0 \le h^s_max, h_0 is determined so that the points at which the function takes 0 match before and after regularization; (ii) if h^s_min > 0, h_0 is determined so that the points at which the function takes h^s_min match before and after regularization; (iii) otherwise, namely h^s_max < 0, h_0 is determined so that the points at which the function takes h^s_max match before and after regularization.
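A sketch of the filter of Eq. (13), assuming h_S is the affine map onto [0, 1] built from the margined h_min and h_max, and with a grid check standing in for the algebraic monotonicity constraint of Eq. (14); the coefficient vector b below is illustrative rather than fitted by problem (14):

```python
import numpy as np

def make_filter(h_min, h_max, b):
    """Regularization filter of Eq. (13): affine map h_S onto [0, 1], a
    sine-only Fourier series on top, mapped back through h_S^{-1}."""
    h_S = lambda h: (h - h_min) / (h_max - h_min)
    h_S_inv = lambda t: h_min + t * (h_max - h_min)
    n = np.arange(1, len(b) + 1)

    def fourier(t):                          # h_zeta^S(t) = t + sum_n b_n sin(n pi t)
        t = np.atleast_1d(np.asarray(t, dtype=float))
        return t + np.sin(np.pi * np.outer(t, n)) @ b

    def is_monotone(grid=1001):              # grid check standing in for the
        t = np.linspace(0.0, 1.0, grid)      # algebraic constraint of Eq. (14)
        return bool(np.all(np.diff(fourier(t)) >= 0.0))

    return lambda h: h_S_inv(fourier(h_S(h))), is_monotone

# margins of Sec. 5.6: widen the sampled range [hs_min, hs_max] by dh per side
hs_min, hs_max, dh = -1.0, 3.0, 0.33         # dh = 0.33 as in Sec. 6.2
h_min = hs_min - dh * (hs_max - hs_min)
h_max = hs_max + dh * (hs_max - hs_min)
filt, is_monotone = make_filter(h_min, h_max, b=np.array([0.10, -0.05]))  # N = 2
print(is_monotone(), filt(0.5))
```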

6. Numerical examples
6.1. Two-dimensional optimization problem
The proposed mini-max type formulation for robust optimal design is applied to a two-dimensional optimization problem and compared with conventional robust optimal designs in order to show its effectiveness visually. The optimization problem is formulated as follows:

    find            x = [ x_1, x_2 ]^T
    that minimizes  f(x) = x_1^3 \sin( x_1 + 4 ) + 10 x_1^2 + \cdots + 5 x_1 x_2 + 2 x_2^2 + \cdots
    subject to      g_1(x) = x_1^2 + \cdots - x_1 \sin x_1 + x_2 + \cdots \le 0            (16)
                    g_2(x) = \log( 0.1 x_1 + \cdots ) + x_2 e^{x_1 + 3 x_2} + \cdots \le 0

The standard deviations of the design variables are \sigma = [ 0.13, 0.13 ]^T, with a given variance-covariance matrix \Sigma. Figure 6 shows the contour plots of the objective function and the constraints.

Figure 6: Contour plot of the two-dimensional example problem, showing the contours of f(x), g_1(x) and g_2(x), the feasible region, the nominal optimal design, the distribution region with the inferior extreme of the nominal objective, the bounding points of g_1 \le 0 and g_2 \le 0, and the robust optimal solution by the proposed formulation.

Table 1: Comparison of robust optimality formulations

                                                    Case 1  Case 2  Case 3  Case 4  Case 5  Nominal optimization
  Method of feasibility analysis                     SQ      SL      SQ      SL      SQ      -
  Method of optimality & sensitivity analysis        SQ      SQ      WS      WS      SQ      -
  Method for building quadratic polynomials          R       R       R       R       D       -
  Optimized design variables x_1^*, x_2^*            ...     ...     ...     ...     ...     ...
  Extremes of functions within the distribution region:
    values in formulated model  g_1(x^*), g_2(x^*), f(x^*)   ...     ...     ...     ...     ...     ...
    values in real behavior     g_1(x^*), g_2(x^*), f(x^*)   ...     ...     ...     ...     ...     ...

  Abbr.: SQ: strict analysis with quadratic polynomial approximation; SL: strict analysis with linear approximation; WS: weighted-sum of mean value and standard deviation of the nominal objective; R: least-square regression as response surface; D: direct differentiation at the point.

Table 1 shows the optimization results. In the table, Case 1 is the proposed method; Case 2 evaluates the feasibility with the bounding point under linear approximations of the constraints (SL); Case 3 evaluates the optimality and sensitivity by the weighted sum of the mean and standard deviation of the objective function (WS); Case 4 is a combination of Case 2 and Case 3; and Case 5 is similar to Case 1 but approximates the functions with second-order Taylor series using direct differentiation at the solution instead of response surfaces.

The difference in the values of g_1(x^*) and g_2(x^*) in real behavior between Case 1 and Case 2, and between Case 3 and Case 4, indicates that strict analysis with quadratic polynomial approximation (SQ) is superior to strict analysis with linear approximation (SL) for feasibility analysis. Since this difference is caused by the nonlinearity of g_2(x) around g_2(x) = 0, observed in Fig. 6, this result highlights the advantage of the proposed formulation in cases in which the constraints are nonlinear. The difference in the values of f(x^*) in real behavior between Case 1 and Case 3, and between Case 2 and Case 4, indicates that strict analysis with quadratic polynomial approximation (SQ) is superior to the conventional weighted sum of the mean value and standard deviation (WS) for optimality and sensitivity analysis. Since this difference is caused by the nonlinearity of f(x), this result highlights the advantage of the proposed formulation in cases in which the nominal objective is nonlinear. The comparison between Case 1 and Case 5 indicates that a local approximation fitted over the distribution region performs better than exact derivatives taken at the center point for robust design optimization.

This is obviously because the critical points for strict robust optimality lie on the boundary of the distribution region rather than at its center point. In summary, the numerical results on the two-dimensional example validate the effectiveness of the proposed mini-max type formulation in seeking the strictly robust optimal design.

6.2. Two-dimensional optimization problem with highly nonlinear functions
The proposed mini-max type robust optimal design with function regularization is applied to the two-dimensional highly nonlinear optimization problem formulated below:

    find            x = [ x_1, x_2 ]^T
    that minimizes  f(x)
    subject to      g_1(x) = \frac{40}{( x_1 - 3 )^2 + ( 3 - x_2 )^2} - \frac{50}{( x_1 + \cdots )^2 + ( x_2 - 5 )^4} \le 0          (17)
                    g_2(x) = \log( 0.1 x_1 + \cdots ) + x_2 e^{x_1 - x_2 - 4} + \cdots \le 0

The standard deviations of the design variables are \sigma = [ 0.13, 0.13 ]^T, with a given variance-covariance matrix \Sigma. Figure 7 shows the contour plots of the objective function and the constraints; g_2(x) is highly nonlinear, as shown in the figure.

Figure 7: Contour plots of the two-dimensional highly nonlinear example problem, showing the contours of f(x), g_1(x) and g_2(x), the feasible region, the distribution region, the bounding points of g_1(x) \le 0 and g_2(x) \le 0, and the robust optimal solution.

In the execution, the number of iterations K = 5, the initial factor for the regularization region C^{(0)} = 3.0, the order of the Fourier series in the regularization filter N = 2, the number of sample points n_\zeta = 400, and the margin \Delta h = 0.33 for compensating h_max and h_min by h^s_max and h^s_min are used.

Table 2 shows the comparison of the optimization results to ascertain the effectiveness of the function regularization. In the table, the second and third columns are the results obtained without function regularization, while the fourth and fifth columns are those obtained with function regularization. The second and fourth columns are the function values at the optimum calculated with the quadratic approximations, while the third and fifth columns are the function values at the optimum calculated with the exact functions. The result shows that the highly nonlinear constraint g_2(v) is less violated at the optimal solution with function regularization than without it.

Table 2: Comparison of robust optimal design with/without function regularization

                  Robust optimum without function regularization     Robust optimum with function regularization
                  Values in formulated   Values in real              Values in formulated   Values in real
                  model                  behavior                    model                  behavior
    x_1^*             ...                                                ...
    x_2^*             ...                                                ...
    g_1(x^*)          ...                    ...                         ...                    ...
    g_2(x^*)          ...                    ...                         ...                    ...
    f(x^*)            ...                    ...                         ...                    ...

Figure 8: Approximation of g_2(v) at the final iteration — (a) the original function and (b) the regularized function, each shown with the g_2 = 0 contours of the original or regularized function and of its quadratic approximation, together with the regularization region, the distribution region and the optimal design.

Figure 8 (a) and (b) show the quadratic approximations of g_2(v) and of g_{2\zeta}( \alpha_\zeta, g_{20}, g_2(v) ) in comparison with the original function g_2(v), respectively. The difference between the contours in Fig. 8 (b) is smaller than that in Fig. 8 (a). It is thus ascertained that function regularization enhances the approximation.

7. Concluding remarks
This paper proposed a mini-max type robust optimal design and its enhancement with a function regularization. The proposed method transforms the distribution region into a hyper sphere and approximates the functions with quadratic polynomials. This procedure makes it possible to obtain the bounding points strictly and efficiently. Further, function regularization makes it possible to build a more precise approximation even when a function is highly nonlinear and the form of the approximation differs from that of the original function. This paper introduced the concept of function regularization and its implementation in a mini-max type robust optimal design. Finally, its validity was ascertained through two example problems. However, function regularization requires much computational cost, mainly the sampling cost for building the regularization filters. Reducing this computational cost is our future work; a cumulative function approximation [5] is expected to be of great help in reducing it [6].

8. References
[1] Parkinson, A., Robust Mechanical Design Using Engineering Models, Transactions of the ASME, Journal of Mechanical Design, 1995, 117.
[2] Zhu, J. and Ting, K.-L., Performance Distribution Analysis and Robust Design, Transactions of the ASME, Journal of Mechanical Design, 2001, 123.
[3] Du, X. and Chen, W., Towards a Better Understanding of Modeling Feasibility Robustness in Engineering Design, Transactions of the ASME, Journal of Mechanical Design, 2000, 122.
[4] Hirokawa, N. and Fujita, K., Mini-max Type Formulation of Strict Robust Design Optimization under Correlative Variation, Proceedings of the 2002 ASME Design Engineering Technical Conferences, 2002, Paper Number DETC2002/DAC.
[5] Hirokawa, N., Fujita, K. and Iwase, T., Voronoi Diagram Based Blending of Quadratic Response Surfaces for Cumulative Global Optimization, Proceedings of the 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, 2002, Paper Number AIAA.
[6] Hirokawa, N., Fujita, K. and Ushiro, T., Computation Cost Saving of Robust Optimal Design by Cumulative Function Approximation, Proceedings of the Third China-Japan-Korea Joint Symposium on Optimization of Structural and Mechanical Systems (CJK-OSM3), 2004.


More information

Graphs of Exponential

Graphs of Exponential Graphs of Exponential Functions By: OpenStaxCollege As we discussed in the previous section, exponential functions are used for many realworld applications such as finance, forensics, computer science,

More information

An Iterative Convex Optimization Procedure for Structural System Identification

An Iterative Convex Optimization Procedure for Structural System Identification An Iterative Convex Optimization Procedure for Structural System Identification Dapeng Zhu, Xinjun Dong, Yang Wang 3 School of Civil and Environmental Engineering, Georgia Institute of Technology, 79 Atlantic

More information

Chapter 7: Computation of the Camera Matrix P

Chapter 7: Computation of the Camera Matrix P Chapter 7: Computation of the Camera Matrix P Arco Nederveen Eagle Vision March 18, 2008 Arco Nederveen (Eagle Vision) The Camera Matrix P March 18, 2008 1 / 25 1 Chapter 7: Computation of the camera Matrix

More information

An introduction to interpolation and splines

An introduction to interpolation and splines An introduction to interpolation and splines Kenneth H. Carpenter, EECE KSU November 22, 1999 revised November 20, 2001, April 24, 2002, April 14, 2004 1 Introduction Suppose one wishes to draw a curve

More information

Well Analysis: Program psvm_welllogs

Well Analysis: Program psvm_welllogs Proximal Support Vector Machine Classification on Well Logs Overview Support vector machine (SVM) is a recent supervised machine learning technique that is widely used in text detection, image recognition

More information

Nelder-Mead Enhanced Extreme Learning Machine

Nelder-Mead Enhanced Extreme Learning Machine Philip Reiner, Bogdan M. Wilamowski, "Nelder-Mead Enhanced Extreme Learning Machine", 7-th IEEE Intelligent Engineering Systems Conference, INES 23, Costa Rica, June 9-2., 29, pp. 225-23 Nelder-Mead Enhanced

More information

Module 4. Non-linear machine learning econometrics: Support Vector Machine

Module 4. Non-linear machine learning econometrics: Support Vector Machine Module 4. Non-linear machine learning econometrics: Support Vector Machine THE CONTRACTOR IS ACTING UNDER A FRAMEWORK CONTRACT CONCLUDED WITH THE COMMISSION Introduction When the assumption of linearity

More information

EC422 Mathematical Economics 2

EC422 Mathematical Economics 2 EC422 Mathematical Economics 2 Chaiyuth Punyasavatsut Chaiyuth Punyasavatust 1 Course materials and evaluation Texts: Dixit, A.K ; Sydsaeter et al. Grading: 40,30,30. OK or not. Resources: ftp://econ.tu.ac.th/class/archan/c

More information

1 Methods for Posterior Simulation

1 Methods for Posterior Simulation 1 Methods for Posterior Simulation Let p(θ y) be the posterior. simulation. Koop presents four methods for (posterior) 1. Monte Carlo integration: draw from p(θ y). 2. Gibbs sampler: sequentially drawing

More information

CMPSCI611: The Simplex Algorithm Lecture 24

CMPSCI611: The Simplex Algorithm Lecture 24 CMPSCI611: The Simplex Algorithm Lecture 24 Let s first review the general situation for linear programming problems. Our problem in standard form is to choose a vector x R n, such that x 0 and Ax = b,

More information

PATCH TEST OF HEXAHEDRAL ELEMENT

PATCH TEST OF HEXAHEDRAL ELEMENT Annual Report of ADVENTURE Project ADV-99- (999) PATCH TEST OF HEXAHEDRAL ELEMENT Yoshikazu ISHIHARA * and Hirohisa NOGUCHI * * Mitsubishi Research Institute, Inc. e-mail: y-ishi@mri.co.jp * Department

More information

Tracking Minimum Distances between Curved Objects with Parametric Surfaces in Real Time

Tracking Minimum Distances between Curved Objects with Parametric Surfaces in Real Time Tracking Minimum Distances between Curved Objects with Parametric Surfaces in Real Time Zhihua Zou, Jing Xiao Department of Computer Science University of North Carolina Charlotte zzou28@yahoo.com, xiao@uncc.edu

More information

Locally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling

Locally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling Locally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling Moritz Baecher May 15, 29 1 Introduction Edge-preserving smoothing and super-resolution are classic and important

More information