AN EFFICIENT DUAL ALGORITHM FOR VECTORLESS POWER GRID VERIFICATION UNDER LINEAR CURRENT CONSTRAINTS

BY
XUANXING XIONG

Submitted in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering in the Graduate College of the Illinois Institute of Technology

Approved: Advisor

Chicago, Illinois
May 2010


ACKNOWLEDGMENT

First and foremost, I would like to thank my research advisor, Professor Jia Wang. It is my honor to be his student. He has guided me in studying VLSI design automation algorithms and taught me how to think mathematically. I appreciate all his contributions of time, ideas, and funding that made my graduate study productive. His joy and enthusiasm toward academic research are contagious and motivational for me. I am also thankful for the excellent example he has provided as a successful professor. I would also like to thank my thesis committee, Prof. Jafar Saniie and Prof. Zuyi Li, for their time, interest, and comments. Last, but not least, I would like to thank my wife, Jingjin Wu, for her understanding and love during the past few years. Her support and encouragement were in the end what made this thesis possible.

TABLE OF CONTENTS

ACKNOWLEDGMENT
LIST OF TABLES
LIST OF FIGURES
LIST OF SYMBOLS
ABSTRACT

CHAPTER
1. INTRODUCTION
   1.1 Motivation
   1.2 Contributions
   1.3 Thesis Organization
2. PRELIMINARY
   2.1 Power Grid Model
   2.2 Simulation-based Power Grid Verification
   2.3 Vectorless Power Grid Verification
3. ALGORITHM OVERVIEW
4. PROPOSED APPROACH
   4.1 Simplified Dual Problem
   4.2 Convexity of Simplified Dual Problem
   4.3 Solving Simplified Dual Problem
   4.4 Fast Evaluation of Objective Function and Its Subgradient
   4.5 Computing c_l
   4.6 The DualVD Algorithm
5. EXPERIMENTAL RESULTS
6. CONCLUSION

BIBLIOGRAPHY

LIST OF TABLES

5.1 Runtime comparison of DualVD and DirectVD using the interior point method (4 global constraints)
5.2 Runtime comparison of DualVD and DirectVD using the interior point method (10 global constraints)
5.3 Runtime comparison of DualVD and DirectVD using the simplex method (4 global constraints)
5.4 Runtime comparison of DualVD and DirectVD using the simplex method (10 global constraints)

LIST OF FIGURES

2.1 Part of a typical RLC power grid model
2.2 Part of a typical RC power grid model
4.1 The DualVD algorithm
5.1 Comparison of runtime per node

LIST OF SYMBOLS

n : number of non-VDD nodes in the power grid
m : number of global constraints
A : $n \times n$ matrix equal to $G$ for DC analysis or $G + C/h$ for transient analysis
c_l : $n \times 1$ vector for the $l$th non-VDD node
C : $n \times n$ diagonal capacitance matrix
G : $n \times n$ conductance matrix
i : $n \times 1$ current source vector
i(t) : $n \times 1$ current source vector at time $t$
I_G : $m \times 1$ upper-bound vector of global constraints
I_L : $n \times 1$ upper-bound vector of local constraints
J_k : a group of sets indicating the positions of duplicated columns of $U$, detailed in Section 4.4
Sum_IL : a group of arrays, detailed in Section 4.4
Sum_ILC : a group of arrays, detailed in Section 4.4
S_p : the set of selected points in the $p$th iteration of the cutting-plane method, detailed in Section 4.3
U : $m \times n$ 0/1 matrix indicating the assignment of current sources to global constraints
u_j : $j$th column of $U$
\bar{U} : the set of distinct columns of $U$
\bar{u}_k : the $k$th element of $\bar{U}$
v_l : voltage drop of a non-VDD node (V)
v : $n \times 1$ voltage drop vector
v(t) : $n \times 1$ voltage drop vector at time $t$
Y : the objective variable of the cutting-plane linear programming problem
\beta : $n \times 1$ vector of Lagrange multipliers of the dual problem, detailed in Section 4.1
\gamma : $m \times 1$ vector of Lagrange multipliers of the dual problem, and also the vector of variables of the simplified dual problem, detailed in Section 4.1
\gamma_{cp} : the solution of the simplified dual problem derived by the cutting-plane method, detailed in Section 4.3
\gamma_{S_p} : the minimizer of the linear programming problem in the $p$th iteration of the cutting-plane method, detailed in Section 4.3
\delta_{cp} : user-specified error-tolerance value for the cutting-plane method
\delta_{pcg} : user-specified error-tolerance value for the preconditioned conjugate gradient method

ABSTRACT

Vectorless power grid verification makes it possible to evaluate worst-case voltage drops without enumerating possible current waveforms. Under linear current constraints, the vectorless power grid verification problem can be formulated and solved as a linear programming (LP) problem. However, previous approaches suffer from long runtimes due to the large problem size. In this thesis, we design the DualVD algorithm, which efficiently computes the worst-case voltage drops in an RC power grid. Our algorithm combines a novel dual approach to solve the LP problem with a preconditioned conjugate gradient power grid analyzer. The dual approach exploits the structure of the problem to simplify its dual problem into a convex problem, which is then solved by the cutting-plane method. Experimental results show that our algorithm is extremely efficient: it takes less than an hour to complete the verification of a power grid with more than 50K nodes, and less than 1 second to verify one node in a power grid with more than 500K nodes.

CHAPTER 1
INTRODUCTION

1.1 Motivation

The increasing complexity of modern integrated circuits (ICs) has made power grid verification a challenging task. A robust power grid is essential to guarantee the correct functionality of the circuit at the desired performance. When designing ICs, it is indispensable to verify that the power grid design is safe, which means that the power supply noise at each node is acceptable under all possible runtime situations. There are mainly two sources of power supply noise: IR drop and Ldi/dt noise. In modern designs, the increasing number of transistors and the shrinking interconnect sizes lead to large IR drops, and the high clock frequency results in a substantial amount of Ldi/dt noise. Moreover, as supply voltages are lowered to reduce power consumption while threshold voltages are decreased for better performance, circuits become more vulnerable to power supply noise than ever before. Hence, full-chip power grid verification with high accuracy has become critical.

Today, the power grid is typically verified by simulation. Using the current waveforms of the circuit blocks attached to each node, one can simulate the power grid to evaluate the power supply noise. However, this simulation-based technique has two major drawbacks. First, it is either intractable or computationally prohibitive to enumerate all possible waveform combinations. Second, one cannot perform power grid verification until the current waveforms of the circuit blocks are available, which makes accurate early verification impossible. Therefore, a verification approach that does not depend on simulation is highly desirable.

To address this need, vectorless power grid verification has been proposed [11, 6, 2, 15] to evaluate the worst-case power supply noise without explicitly enumerating all input current waveforms.

In [11, 6, 2], feasible input current waveforms are characterized by linear current constraints that cover all possible input current waveforms, so that the maximum voltage drops in an RC power grid can be obtained by formulating and solving linear programming (LP) problems. In [15], the assumptions are further refined by introducing integer variables to model the uncertain working modes of circuit blocks, and integer linear programming (ILP) problems are formulated and solved. Unfortunately, solving all of these problems is time-consuming or even prohibitive for large grids. [2] takes advantage of grid locality to generate a reduced-size LP problem for each node, and accepts a user-specified overestimation margin to control the precision of the solutions. This approach sacrifices accuracy in order to achieve speedup by reducing the number of variables in the LP problems, so it is not applicable when high precision is desired. Moreover, the speedup is limited, since generating the reduced-size LP problems takes extra runtime.

In summary, vectorless power grid verification under linear current constraints is an important alternative approach for verifying the power grid design. It can be formulated into LP problems, which cannot be solved by current algorithms in a timely manner. Therefore, it is of great interest to develop novel algorithms to solve these LP problems efficiently, so that the vectorless verification of large power grids can become practical.

1.2 Contributions

In this thesis work, we design the DualVD algorithm to solve the vectorless power grid verification problem under linear current constraints for RC power grids. We propose a novel dual approach that solves the associated LP problem efficiently by exploiting its special structure. We simplify the dual problem of the LP problem by eliminating most decision variables and constraints. We show that the simplified dual problem is a convex problem with a small number of decision variables and a simple form of constraints.

We then solve the convex problem by the cutting-plane method and provide means to speed up the evaluation of the objective function and its subgradient. This dual approach is then combined with a preconditioned conjugate gradient power grid analyzer based on [18]. Experimental results show that the proposed dual approach is much faster than solving the LP problems directly, and that our DualVD algorithm is much more efficient than the method in [2].

1.3 Thesis Organization

The rest of this thesis is organized as follows. Background information and related works are introduced in Chapter 2. The algorithmic idea is developed in Chapter 3. The technical details of the proposed algorithm are presented in Chapter 4. After experimental results are shown in Chapter 5, we conclude this thesis and present future work in Chapter 6.

CHAPTER 2
PRELIMINARY

This thesis work focuses on vectorless power grid verification for an RC power grid. To provide a backdrop for the rest of this thesis, in this chapter we first present the power grid models, simulation-based power grid verification techniques, and related works. Then we introduce vectorless power grid verification and its problem formulation.

2.1 Power Grid Model

In the on-chip power grid, there are two kinds of networks: the VDD network and the ground network. These two networks provide the supply voltage and the reference voltage for the circuit, respectively, and both of them must be verified. Their models are the same except for the fact that the VDD network is connected with VDD pads, while the ground network is connected with GND pads. All the power grid verification techniques for the VDD network can also be applied to verify the ground network. Therefore, researchers focus on the VDD network when developing new algorithms, and we also consider the VDD network in this thesis work.

There are mainly two types of power grid models: the RLC power grid and the RC power grid. We first introduce the RLC power grid; the RC power grid is then a simplification of the RLC power grid.

2.1.1 RLC Power Grid. Fig. 2.1 shows a typical RLC power grid model, consisting of VDD pads, non-VDD nodes, resistors, inductors, current sources and capacitors. The VDD pads are supported by ideal voltage sources, while the non-VDD nodes are supported by these VDD pads. The resistors represent the wire resistances, and the inductors represent the inductances of wires or of the grid-package interconnects. Conventionally, there is a resistor on each edge of the power grid, since one cannot neglect the wire resistance of any edge in power grid verification.

The inductances of many edges, however, are negligible, so some edges of the power grid are modeled by a resistor only, while the other edges are modeled by a resistor and an inductor. Note that the edges connected with VDD pads always have an inductor, which represents the inductance of the grid-package interconnects. There is a current source and a capacitor connected to each non-VDD node. The current source represents the current drawn by the circuit block attached to that node, and the capacitor represents the load capacitance of that circuit block.

Figure 2.1. Part of a typical RLC power grid model

The RLC power grid has two types of voltage noise: IR drop and Ldi/dt noise. The IR drops are attributable to the resistivity of wires (including metal rails), while the Ldi/dt noise is due to the inductance of some wires (especially metal rails) and of the grid-package interconnects. Note in particular that the inductive noise can result in either voltage drops or overshoots, which means that the voltage at a node may exceed the supply voltage. Therefore, in order to verify an RLC power grid, one should check the voltage noise in both directions, and guarantee that both the voltage drop and the voltage overshoot of each node are sufficiently small, so that the functionality and performance of the circuit can be ensured.

2.1.2 RC Power Grid. Removing all the inductors from the RLC power grid, we obtain the RC power grid shown in Fig. 2.2. The meaning of the VDD pads, non-VDD nodes, resistors, current sources, and capacitors is exactly the same as in the RLC power grid. Since the RC power grid does not have inductors, the only source of voltage noise is the IR drop. Hence, to verify an RC power grid, one only needs to evaluate the voltage drop at each node.

Figure 2.2. Part of a typical RC power grid model

2.2 Simulation-based Power Grid Verification

Traditionally, the power grid is verified by simulation. Using the current waveforms of circuit blocks, designers simulate the power grid with transient analysis or DC analysis to know the voltage noise at each node. The transient analysis considers the effects of capacitors, inductors, and time-varying current waveforms, while the DC analysis derives the steady-state voltages.

2.2.1 Problem Formulation. Consider the transient analysis of an RC power grid. Let $v(t)$ be the vector of the voltage drops of the non-VDD nodes, $i(t)$ be the vector of all the current sources attached to these non-VDD nodes, $G$ be the conductance matrix, and $C$ be the matrix introduced by the capacitors. According to [11], the following system of equations must hold if a single supply voltage is assumed,

$G v(t) + C \dot{v}(t) = i(t)$.  (2.1)

Note that if we assume that there are $n$ non-VDD nodes, then both $v(t)$ and $i(t)$ are $n \times 1$ vectors, $G$ is an $n \times n$ symmetric M-matrix, and $C$ is an $n \times n$ diagonal matrix. Using a time step $h$, Eq. (2.1) can be rewritten as

$A v(t) = \frac{C}{h} v(t-h) + i(t)$,  (2.2)

where $A = G + \frac{C}{h}$ is also a symmetric M-matrix. For DC analysis, Eq. (2.1) is simplified to

$A v = i$,  (2.3)

where $A = G$, and $v$ and $i$ are the vectors of voltage drops and current sources, respectively. For the RLC power grid, the formulation of the transient analysis is more complicated, since we need to take the inductances into consideration. [9], [5] and [1] are representative studies on handling inductances in power grid verification.

When performing transient analysis, one must simulate the power grid for many time steps in order to derive a reliable voltage for each node. Note that simulating the power grid for one time step requires solving Eq. (2.1) to derive $v(t)$. Similarly, the DC analysis requires solving Eq. (2.3) to derive $v$. Traditional approaches exploit the sparsity and positive definiteness of $G$ to solve these equations. However, as modern power grids contain millions of nodes, there are millions of variables in these equations, so solving Eq. (2.1) or (2.3) is usually prohibitive.
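To make the time-stepping in Eq. (2.2) concrete, the sketch below performs backward-Euler transient analysis by factoring $A$ once and reusing the factorization at every step; this is the straightforward direct approach that, as just noted, becomes impractical for very large grids. It is only an illustration under assumed inputs (a sparse $G$, a sparse diagonal $C$, and a user-supplied current waveform), not the analyzer used later in this thesis.

import numpy as np
import scipy.sparse.linalg as spla

def transient_simulation(G, C, current_at, h, num_steps):
    # Backward-Euler transient analysis of an RC grid, Eq. (2.2):
    #   A v(t) = (C/h) v(t - h) + i(t),  with  A = G + C/h.
    # G: n x n sparse conductance matrix, C: n x n sparse diagonal capacitance
    # matrix, current_at(t): callable returning the n x 1 current vector i(t).
    A = (G + C / h).tocsc()
    solve = spla.factorized(A)          # factor A once, reuse it for every step
    v = np.zeros(A.shape[0])            # assume zero initial voltage drop
    for step in range(1, num_steps + 1):
        t = step * h
        rhs = C.dot(v) / h + current_at(t)
        v = solve(rhs)                  # v(t) = A^{-1} (C v(t-h)/h + i(t))
    return v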

2.2.2 Previous Approaches. Traditionally, linear equations can be solved by Gaussian elimination, or by LU factorization together with a forward solve and a backward solve. However, because of the large problem size in the DC and transient analysis of power grids, simulating practical power grids with these approaches consumes a prohibitive amount of memory and runtime. Hence, these approaches are not applicable for power grid verification. The circuit simulation tool SPICE [23] can be employed to simulate power grids, but it takes an extensive amount of time to simulate practical power grids in SPICE. In order to analyze power grids efficiently, [4] proposes a preconditioned conjugate gradient (PCG) algorithm for solving the linear equations of DC and transient analysis, and this algorithm achieves significant speedup and memory reduction for both DC and transient analysis of the power grid compared to SPICE.

[14] proposes to apply random walks to solve the DC and transient analysis problems of the power grid. This approach is based on the fact that random walks can be used to solve linear equations, as studied in [7] and [24]. Using this method, one does not need to solve the linear equations directly. Instead, a number of random walks are performed on the power grid, and the voltage at each node is then derived from the statistical information of these random walks. This approach achieves significant speedup over the conventional methods. A later work [20] presents an incremental algorithm for power grid analysis based on random walks, and shows that the random-walk-based method can be accelerated further. Unfortunately, the random-walk-based approaches have poor accuracy. Although one can derive a reliable voltage level for each node by performing a sufficiently large number of random walks, there is no guarantee that the derived voltage is exact.

As the challenge in power grid analysis is mainly attributable to the large problem size, some researchers try to simplify the problem by exploiting the structure of the power grid. [13], [12], and [21] propose to employ a multi-grid method for power grid analysis. The original power grid is reduced to a coarsened grid with a much smaller number of nodes. Designers can simulate the coarsened grid to derive the voltages of its nodes, and the solution is then mapped back to the original grid. This approach is capable of reducing the runtime and memory requirements significantly, but it has poor accuracy because the grid reduction introduces errors.

A hierarchical approach for power grid analysis is proposed in [25]. This approach divides the power grid into several partitions and creates a macro-model for each partition. Each macro-model only has a small number of nodes, which correspond to the ports of its partition. At the global scope, the power grid analysis is performed using these macro-models to derive the voltages of the ports. One can then simulate each partition with the derived port voltages to get the voltage of each node. By partitioning the power grid and creating macro-models for each partition, this hierarchical approach breaks the original power grid analysis problem down into a number of small problems, which can be solved efficiently. This hierarchical approach has good accuracy and greatly accelerates power grid analysis, and it is further studied in [19]. Based on this hierarchical approach, [16] and [17] propose a hierarchical random walk approach, which can achieve additional speedup.

Although various approaches have been proposed for power grid verification using DC and transient analysis, the verification of large power grids by simulation is still very time-consuming and even prohibitive. First, as there are millions of nodes in modern designs, it is either intractable or computationally prohibitive to simulate all possible waveforms. Second, the simulation-based approach requires the current waveforms, which are not available until the circuit design is done. However, it is desirable that the power grid design can be verified early in the design flow, so that grid modification is much easier.

Hence, we need a verification approach that does not depend on simulation, i.e., a vectorless approach.

2.3 Vectorless Power Grid Verification

To satisfy this design need, vectorless power grid verification was proposed in [11] and studied in [6, 2]. This problem typically considers an RC power grid, and the verification can be performed using either DC or transient analysis. The objective is to evaluate the worst-case voltage drops.

2.3.1 Current Constraints. According to Eq. (2.1) and (2.2), the amount of voltage drop depends on the waveforms of the current excitations. As it is pessimistic to assume a peak current for each current source and computationally prohibitive to enumerate all possible current waveforms, current constraints are introduced to capture the infinitely many current excitations in an optimization framework. There are two kinds of current constraints: local constraints and global constraints. The local constraints define an upper bound for every current source,

$0 \le i(t) \le I_L, \forall t$,

where $I_L \ge 0$ is an $n \times 1$ upper-bound vector. The global constraints define upper bounds for groups of current sources,

$U i(t) \le I_G, \forall t$.

Here we assume that there are $m$ current source groups with $m$ much smaller than $n$, $U$ is an $m \times n$ 0/1 matrix indicating the assignment of current sources to groups, and $I_G \ge 0$ is an $m \times 1$ upper-bound vector.
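As a small illustration of these definitions, the sketch below builds a hypothetical $U$ from an assumed mapping of current sources to groups and checks whether a given current vector satisfies both the local and the global constraints; the group assignment and all numbers are made-up examples.

import numpy as np

def build_U(group_of_source, m):
    # U is an m x n 0/1 matrix with U[g, j] = 1 iff current source j belongs
    # to group g (a hypothetical assignment, e.g. one group per circuit block).
    n = len(group_of_source)
    U = np.zeros((m, n))
    for j, g in enumerate(group_of_source):
        U[g, j] = 1.0
    return U

def satisfies_constraints(i, I_L, U, I_G):
    # Local constraints 0 <= i <= I_L and global constraints U i <= I_G.
    return bool(np.all(i >= 0) and np.all(i <= I_L) and np.all(U.dot(i) <= I_G))

# Toy example: 6 current sources split into m = 2 groups.
U = build_U([0, 0, 0, 1, 1, 1], m=2)
I_L = np.full(6, 2e-3)                 # 2 mA local bound per source
I_G = np.array([4e-3, 5e-3])           # group bounds tighter than the sum of local bounds
print(satisfies_constraints(np.full(6, 1e-3), I_L, U, I_G))   # True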

In practice, designers can simulate a circuit block to generate these constraints. If the circuit block is not available, or it is too large to simulate, one can refer to previous designs, or rely on design expertise or engineering judgement, to derive reliable current upper bounds as current constraints.

2.3.2 Problem Formulation. As summarized in [2], vectorless power grid verification can be performed under two power grid analysis models, i.e., the DC analysis model and the transient analysis model with a time step $h$, and the key problem can be formulated as the following maxvd-lcc (maximum voltage drop under linear current constraints) problem.

Problem 1 (maxvd-lcc). Assume there are $n$ non-VDD nodes and $m$ global current constraints. Let $A = G$ if the DC analysis model is of concern, or $A = G + \frac{C}{h}$ if the transient analysis model is of concern. Let $I_L \ge 0$ and $I_G \ge 0$ be the upper-bound vectors for the local and global current constraints, respectively. Let $U$ be a 0/1 matrix. Suppose that $v$ and $i$ are the vectors of decision variables and let $v_l$ be the $l$th component of $v$ for $1 \le l \le n$. Solve the following optimization problem for every $1 \le l \le n$,

Maximize $v_l$
s.t. $A v = i$, $0 \le i \le I_L$, $U i \le I_G$.  (2.4)

CHAPTER 3
ALGORITHM OVERVIEW

For every $1 \le l \le n$, the optimization problem in Eq. (2.4) is a linear programming (LP) problem, since both the objective function and the constraints are linear functions of the decision variables. As there are $n$ optimization problems and $n$ is usually large for any practical power grid, these LP problems have to be solved very efficiently.

Intuitively, instead of sending the whole of Eq. (2.4) to an LP solver, one would first decompose the optimization problem, utilizing the fact that $A$ is an M-matrix and thus is invertible. Let $e_l$ be an $n \times 1$ vector of all 0s except for the $l$th component, which is 1. Let $c_l = A^{-1} e_l$, which can be computed by solving $A x = e_l$ using any power grid analysis algorithm. Because $A v = i$ and $A$ is symmetric, we have $v_l = c_l^T i$. Then the optimization problem becomes

Maximize $c_l^T i$
s.t. $0 \le i \le I_L$, $U i \le I_G$,  (3.1)

which is also an LP problem, but with roughly half of the decision variables and constraints in comparison to Eq. (2.4).

The above intuitive idea was partially explored in [2]. As the optimization problem in Eq. (3.1) still involves a large number of decision variables and constraints for any practical power grid, it was proposed in [2] to compute an approximated $c_l$ with only a few non-zero components. Most decision variables and local constraints corresponding to the zero elements in the approximated $c_l$ can then be dropped from Eq. (3.1), so that it can be solved efficiently by any LP solver. However, this approach has two drawbacks. First, the number of non-zero elements depends on the accuracy of the approximation.

If a higher level of accuracy is required, the approximated $c_l$ will contain more non-zero components, and the optimization problem in Eq. (3.1) becomes harder to solve. Second, as indicated by our preliminary experiments, the computation of the approximated $c_l$ consumes a significant amount of running time in comparison to obtaining the exact $c_l$ with efficient power grid analysis algorithms.

Therefore, it is of great interest to solve the optimization problem in Eq. (3.1) efficiently with the exact $c_l$, so that the advances in power grid analysis can be leveraged. Clearly, to achieve such a goal, one must be able to exploit the structures in the problem. Our major contribution in this thesis work is an efficient dual approach that solves Eq. (3.1) by exploiting its special structures. We exploit the special structure of the local constraints to simplify the dual problem of Eq. (3.1) by eliminating most decision variables and constraints. We show that the simplified dual problem is a convex problem with $m$ decision variables and $m$ constraints requiring the decision variables to be non-negative. We propose to solve the convex problem by the cutting-plane method and to speed up the evaluation of the objective function and its subgradient by exploiting the special structure of the global constraints. We design the DualVD algorithm to solve the maxvd-lcc problem efficiently, combining a preconditioned conjugate gradient power grid analyzer based on [18] to obtain $c_l$ with our dual approach to solve Eq. (3.1). The details are presented in the next chapter.
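Before moving on, for reference, the direct route of Eq. (3.1) can be written down with an off-the-shelf LP solver. The sketch below (using scipy.optimize.linprog; the experiments in Chapter 5 use MOSEK instead) assumes $c_l$ has already been computed; this per-node LP with $n$ variables is exactly what the dual approach of the next chapter is designed to avoid solving directly.

import numpy as np
from scipy.optimize import linprog

def max_voltage_drop_direct(c_l, I_L, U, I_G):
    # Solve Eq. (3.1) directly:  maximize c_l^T i
    # subject to 0 <= i <= I_L and U i <= I_G.
    # linprog minimizes, so the objective is negated.
    n = c_l.shape[0]
    res = linprog(c=-c_l,
                  A_ub=U, b_ub=I_G,                          # global constraints
                  bounds=[(0.0, I_L[j]) for j in range(n)],  # local constraints
                  method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    return -res.fun                                          # worst-case drop c_l^T i*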

CHAPTER 4
PROPOSED APPROACH

4.1 Simplified Dual Problem

Recall that $m$ is much smaller than $n$. Most constraints in Eq. (3.1) are simply bounds on the decision variables. To exploit such special structure in the primal problem, a typical approach is to investigate the corresponding dual problem [3]. Let $\beta$ be the vector of Lagrangian multipliers corresponding to the constraints $i \le I_L$ and $\gamma$ be the vector of Lagrangian multipliers corresponding to the constraints $U i \le I_G$. The dual problem of Eq. (3.1) can be formulated as follows,

Minimize $I_L^T \beta + I_G^T \gamma$
s.t. $\beta + U^T \gamma \ge c_l$, $\beta \ge 0$, $\gamma \ge 0$.  (4.1)

As Eq. (3.1) has a feasible solution $i = 0$, the optimal values of Eq. (3.1) and Eq. (4.1) are the same due to the strong duality of LPs. Therefore, one can solve Eq. (4.1) in order to obtain the maximum voltage drop.

Let $\gamma \ge 0$ be an arbitrary $m \times 1$ vector. For $1 \le j \le n$, denote the $j$th column of $U$ by $u_j$ and the $j$th component of $c_l$ by $c_{l,j}$. Let the vector $\beta^* = (\beta^*_1, \beta^*_2, \ldots, \beta^*_n)^T$ where

$\beta^*_j = \max(0, c_{l,j} - u_j^T \gamma), 1 \le j \le n$.

Then obviously $(\beta^*, \gamma)$ is a feasible solution of Eq. (4.1). Furthermore, for any feasible solution $(\beta, \gamma)$ of Eq. (4.1), it must be true that $\beta \ge \beta^*$.

Since $I_L \ge 0$, it follows that

$I_L^T \beta + I_G^T \gamma \ge I_L^T \beta^* + I_G^T \gamma$,

which means that, for Eq. (4.1), the objective value corresponding to a feasible solution $(\beta, \gamma)$ is always no smaller than that of $(\beta^*, \gamma)$. Denote the $j$th component of $I_L$ by $I_{L,j}$. We propose to simplify Eq. (4.1) by eliminating $\beta$ using $\beta^*$, and a new optimization problem is thus formulated as follows,

Minimize $D(\gamma)$ s.t. $\gamma \ge 0$,  (4.2)

where the objective function $D(\gamma)$ is defined as

$D(\gamma) = I_G^T \gamma + \sum_{j=1}^{n} I_{L,j} \max(0, c_{l,j} - u_j^T \gamma)$.

According to the above discussion, the optimization problems in Eq. (3.1), (4.1), and (4.2) have the following property.

Theorem 1. The optimization problems in Eq. (3.1), (4.1), and (4.2) have optimal solutions and their optimal values are the same.

Though the three optimization problems have the same optimal value, the benefit of the formulation in Eq. (4.2), in comparison to the other two, is that the number of decision variables and constraints is much smaller because $m \ll n$, and that the constraints are extremely simple. The concern is the objective function $D(\gamma)$, which seems to be much more complicated than those of the other two because of the max operations involved. However, as we prove in the next section, $D(\gamma)$ is convex with respect to $\gamma$, so Eq. (4.2) can be solved by efficient convex programming techniques.

4.2 Convexity of Simplified Dual Problem

Consider a particular $m \times 1$ vector $\gamma \ge 0$. For $1 \le j \le n$, define

$E_j(\gamma) = 1$ if $c_{l,j} \ge u_j^T \gamma$, and $E_j(\gamma) = 0$ otherwise.

Then clearly

$\max(0, c_{l,j} - u_j^T \gamma) = E_j(\gamma)(c_{l,j} - u_j^T \gamma)$.  (4.3)

Moreover, let $\gamma'$ be an arbitrary $m \times 1$ non-negative vector. As $E_j(\gamma)$ is either 0 or 1, it must be true that

$\max(0, c_{l,j} - u_j^T \gamma') \ge E_j(\gamma)(c_{l,j} - u_j^T \gamma')$.  (4.4)

Recall that $I_{L,j} \ge 0$. According to Eq. (4.3) and (4.4), we have

$D(\gamma') - D(\gamma) = \sum_{j=1}^{n} I_{L,j} \left( \max(0, c_{l,j} - u_j^T \gamma') - \max(0, c_{l,j} - u_j^T \gamma) \right) + I_G^T (\gamma' - \gamma)$
$\ge \sum_{j=1}^{n} I_{L,j} \left( E_j(\gamma)(c_{l,j} - u_j^T \gamma') - E_j(\gamma)(c_{l,j} - u_j^T \gamma) \right) + I_G^T (\gamma' - \gamma)$
$= \left( I_G - \sum_{j=1}^{n} I_{L,j} E_j(\gamma) u_j \right)^T (\gamma' - \gamma)$.

Therefore, the following lemma must hold.

Lemma 1. $D(\gamma)$ is a convex function of $\gamma$. Furthermore, let $g(\gamma)$ be the $m \times 1$ vector defined as

$g(\gamma) = I_G - \sum_{j=1}^{n} I_{L,j} E_j(\gamma) u_j$.

Then $g(\gamma)$ is a subgradient of $D(\gamma)$.

4.3 Solving Simplified Dual Problem

Lemma 1 implies that the optimization problem in Eq. (4.2) is a convex programming problem. Since the subgradient of the objective function can be computed accordingly, we propose to solve Eq. (4.2) by Kelley's cutting-plane method [10].

To apply the cutting-plane method, we shall first limit the search for the optimal solution of Eq. (4.2) to a bounded set. According to Theorem 1, Eq. (4.2) has an optimal solution. Assume that $\gamma^* = (\gamma^*_1, \gamma^*_2, \ldots, \gamma^*_m)^T$ is such an optimal solution. Define

$c_{\max} = \max(0, c_{l,1}, c_{l,2}, \ldots, c_{l,n})$.

If there exists some $k$ such that $1 \le k \le m$ and $\gamma^*_k > c_{\max}$, then consider the vector $\gamma'$ that is obtained from $\gamma^*$ by replacing $\gamma^*_k$ with $c_{\max}$. Obviously, $\gamma^* \ge \gamma' \ge 0$. Moreover, since each component of $u_j$ is either 0 or 1, we have

$\max(0, c_{l,j} - u_j^T \gamma^*) = \max(0, c_{l,j} - u_j^T \gamma'), 1 \le j \le n$.

Recall that $I_G \ge 0$. We thus have $D(\gamma^*) \ge D(\gamma')$, and then it must be true that $D(\gamma^*) = D(\gamma')$. Therefore, one can solve Eq. (4.2) by only examining the feasible solutions in a bounded set, as stated in the following lemma.

Lemma 2. Let $\gamma_{\max}$ be an $m \times 1$ vector with all components equal to $c_{\max}$. Then there is an optimal solution $\gamma^*$ of Eq. (4.2) satisfying $0 \le \gamma^* \le \gamma_{\max}$.
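From the definitions above, $D(\gamma)$ and the subgradient of Lemma 1 can be evaluated directly. The straightforward $O(nm)$ sketch below (with $U$ stored as a dense array and purely illustrative names) is also a useful reference point for the faster scheme of Section 4.4.

import numpy as np

def eval_D_and_g(gamma, c_l, I_L, U, I_G):
    # Evaluate D(gamma) of Eq. (4.2) and the subgradient g(gamma) of Lemma 1.
    # U is kept as a dense m x n array here; the cost is O(nm).
    slack = c_l - U.T.dot(gamma)             # c_{l,j} - u_j^T gamma for every j
    active = slack >= 0                      # E_j(gamma) of Section 4.2
    D = I_G.dot(gamma) + I_L[active].dot(slack[active])
    g = I_G - U[:, active].dot(I_L[active])  # I_G - sum_j I_{L,j} E_j(gamma) u_j
    return D, g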

We now present the cutting-plane method to solve Eq. (4.2). For a finite set $S$ of $m \times 1$ non-negative vectors, define the optimization problem CP($S$) as follows,

Minimize $Y$
s.t. $Y \ge D(\gamma') + (\gamma - \gamma')^T g(\gamma'), \forall \gamma' \in S$, and $0 \le \gamma \le \gamma_{\max}$,  (4.5)

where the decision variables are $Y$ and $\gamma$. It is straightforward that CP($S$) is an LP problem and has an optimal solution. Let $Y_S$ and $\gamma_S$ be such an optimal solution.

The cutting-plane method solves Eq. (4.2) by computing a series of sets $S_0 \subset S_1 \subset S_2 \subset \cdots$ iteratively. Initially, $S_0 = \{\gamma_0\}$, where $\gamma_0$ can be any feasible $m \times 1$ vector satisfying $0 \le \gamma_0 \le \gamma_{\max}$. We use $\gamma_0 = 0$ in our implementation. For $p \ge 0$, the set $S_{p+1}$ is computed as $S_p \cup \{\gamma_{S_p}\}$ after solving the LP problem CP($S_p$). The iterations can be terminated when $\min_{\gamma \in S_{p+1}} D(\gamma)$ is close enough to the optimal value of Eq. (4.2), where the gap between these two can be bounded according to the following lemma implied by the cutting-plane method.

Lemma 3. Let $\gamma^*$ be an optimal solution of Eq. (4.2). Then

$Y_{S_p} \le D(\gamma^*) \le \min_{\gamma \in S_{p+1}} D(\gamma), \forall p \ge 0$.

The convergence of the iterations is guaranteed by the following lemma.

Lemma 4. There exists a $p$ such that $\gamma^* = \gamma_{S_p}$.

Proof. According to the formulation of the subgradient in Lemma 1, there are at most $2^n$ different subgradients of $D(\gamma)$. In other words, $D(\gamma)$ is shaped by at most $2^n$ planes in the $m$-dimensional space. Using the cutting-plane method, we derive one of these planes in each iteration. Due to the intrinsic property of the cutting-plane method, if we derive the same plane twice, then we have already found the optimal solution $\gamma^*$ in that iteration. As there are only a finite number of possible planes, running the iterations indefinitely would produce the same plane twice within at most $2^n + 1$ iterations, which means that $\gamma^*$ is derived after a finite number of iterations. Therefore, there exists a $p$ such that $\gamma^*$ is derived in the $p$th iteration; equivalently, there exists a $p$ such that $\gamma^* = \gamma_{S_p}$.

In our implementation, a user-specified error tolerance $\delta_{cp}$ is used to terminate the iterations when

$\min_{\gamma \in S_{p+1}} D(\gamma) - Y_{S_p} \le \delta_{cp}$.  (4.6)

The vector $\gamma_{cp}$, defined as

$\gamma_{cp} = \operatorname{argmin}_{\gamma \in S_{p+1}} D(\gamma)$,  (4.7)

is reported as the optimal solution of Eq. (4.2). Based on Lemma 3, the gap between $D(\gamma_{cp})$ and $D(\gamma^*)$ is stated in the following lemma.

Lemma 5. $D(\gamma_{cp}) - \delta_{cp} \le D(\gamma^*) \le D(\gamma_{cp})$.

Note that although multiple LP problems are solved in the cutting-plane method for Eq. (4.2), these problems are small in size, so solving them takes much less running time than solving a single but large LP problem such as Eq. (3.1) or Eq. (4.1).
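Combining Eqs. (4.5)-(4.7), one possible organization of the cutting-plane iteration is sketched below. It reuses the eval_D_and_g helper sketched earlier, solves each small LP CP($S_p$) with scipy.optimize.linprog (the thesis implementation uses MOSEK), and applies a slightly conservative form of the gap test in Eq. (4.6); names and structure are illustrative only.

import numpy as np
from scipy.optimize import linprog

def cutting_plane(c_l, I_L, U, I_G, delta_cp=1e-4, max_iter=200):
    # Kelley's cutting-plane method for the simplified dual problem Eq. (4.2).
    m = I_G.shape[0]
    gamma_max = max(0.0, float(np.max(c_l)))     # c_max of Lemma 2
    cuts_A, cuts_b = [], []                      # accumulated cutting planes
    gamma = np.zeros(m)                          # gamma_0 = 0
    best_D, best_gamma = np.inf, gamma.copy()    # tracks min_{gamma in S_p} D(gamma)
    for _ in range(max_iter):
        D, g = eval_D_and_g(gamma, c_l, I_L, U, I_G)
        if D < best_D:
            best_D, best_gamma = D, gamma.copy()
        # cut Y >= D + g^T (gamma' - gamma) rewritten as -Y + g^T gamma' <= g^T gamma - D
        cuts_A.append(np.concatenate(([-1.0], g)))
        cuts_b.append(g.dot(gamma) - D)
        res = linprog(c=np.concatenate(([1.0], np.zeros(m))),   # minimize Y
                      A_ub=np.array(cuts_A), b_ub=np.array(cuts_b),
                      bounds=[(None, None)] + [(0.0, gamma_max)] * m,
                      method="highs")
        Y_Sp, gamma = res.x[0], res.x[1:]        # optimal (Y_{S_p}, gamma_{S_p}) of CP(S_p)
        if best_D - Y_Sp <= delta_cp:            # conservative version of the gap test (4.6)
            break
    return best_D, best_gamma                    # approximately D(gamma_cp), gamma_cp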

4.4 Fast Evaluation of Objective Function and Its Subgradient

For the cutting-plane method introduced in the previous section, although the LP problems CP($S_p$) can be solved efficiently due to their small sizes, they have to be formulated first, and the computational complexity of formulating them is usually overlooked. To be more specific, for $p \ge 1$, formulating CP($S_p$) requires computing $D(\gamma_{S_{p-1}})$ and $g(\gamma_{S_{p-1}})$. A straightforward approach takes $O(nm)$ time to compute $D$ and $g$ according to their definitions. Therefore, as the size of the problem CP($S_p$) is much less than $n$, it can be more expensive to formulate CP($S_p$) than to solve it, which was also confirmed by our preliminary experiments. In this section, we present a novel pre-processing step exploiting the special structure of the global constraints, i.e., the matrix $U$, such that $D$ and $g$ can be evaluated with a time complexity that is usually much smaller than $O(nm)$.

A general observation about the matrix $U$ is that it has many duplicated columns, i.e., the $u_j$'s take only a limited number of distinct values. This observation is due to the fact that when circuit designers specify the global constraints, they tend to specify the constraints according to the circuit hierarchy. Therefore, nodes in the same module are included in the same set of global constraints, corresponding to identical columns in $U$. Let $\bar{U}$ be the set of the distinct columns of $U$, i.e.,

$\bar{U} = \bigcup_{j=1}^{n} \{u_j\}$.

Denote the elements of $\bar{U}$ as $\bar{u}_1, \bar{u}_2, \ldots, \bar{u}_{|\bar{U}|}$. The duplicated columns of $U$ can be identified by introducing $|\bar{U}|$ index sets defined as

$J_k = \{j : u_j = \bar{u}_k\}, 1 \le k \le |\bar{U}|$.

Note that the sets $J_k$ form a partition of $\{1, 2, \ldots, n\}$, i.e.,

$J_k \cap J_{k'} = \emptyset$ for $1 \le k < k' \le |\bar{U}|$, and $\bigcup_{k=1}^{|\bar{U}|} J_k = \{1, 2, \ldots, n\}$.

With a given $c_l$, the elements in each set $J_k$ can be sorted according to the corresponding components of $c_l$. Therefore, without loss of generality, we can assume that

$c_{l,j} \le c_{l,j'}$ for all $j, j' \in J_k$ with $j \le j'$, $1 \le k \le |\bar{U}|$.  (4.8)

For a particular $\gamma$ and $1 \le k \le |\bar{U}|$, Eq. (4.8) implies that $c_{l,j} - u_j^T \gamma = c_{l,j} - \bar{u}_k^T \gamma$ is non-decreasing for $j \in J_k$. So, if the index $j_k$ is defined as

$j_k = \min\{j : c_{l,j} - \bar{u}_k^T \gamma \ge 0, j \in J_k\}$,  (4.9)

then

$c_{l,j} - u_j^T \gamma \ge 0$ for $(j \in J_k) \wedge (j \ge j_k)$, and
$c_{l,j} - u_j^T \gamma < 0$ for $(j \in J_k) \wedge (j < j_k)$,

which can be used to simplify the computation of $D(\gamma)$ as

$D(\gamma) = I_G^T \gamma + \sum_{j=1}^{n} I_{L,j} \max(0, c_{l,j} - u_j^T \gamma)$
$= I_G^T \gamma + \sum_{k=1}^{|\bar{U}|} \sum_{j \in J_k} I_{L,j} \max(0, c_{l,j} - u_j^T \gamma)$
$= I_G^T \gamma + \sum_{k=1}^{|\bar{U}|} \sum_{(j \in J_k) \wedge (j \ge j_k)} I_{L,j} (c_{l,j} - \bar{u}_k^T \gamma)$
$= I_G^T \gamma + \sum_{k=1}^{|\bar{U}|} \left( \sum_{(j \in J_k) \wedge (j \ge j_k)} I_{L,j} c_{l,j} - \bar{u}_k^T \gamma \sum_{(j \in J_k) \wedge (j \ge j_k)} I_{L,j} \right)$.  (4.10)

Define $Sum_{ILC}(k, j)$ and $Sum_{IL}(k, j)$ as

$Sum_{ILC}(k, j) = \sum_{(j' \in J_k) \wedge (j' \ge j)} I_{L,j'} c_{l,j'}$, $j \in J_k$, $1 \le k \le |\bar{U}|$,
$Sum_{IL}(k, j) = \sum_{(j' \in J_k) \wedge (j' \ge j)} I_{L,j'}$, $j \in J_k$, $1 \le k \le |\bar{U}|$.

Then Eq. (4.10) becomes

$D(\gamma) = I_G^T \gamma + \sum_{k=1}^{|\bar{U}|} \left( Sum_{ILC}(k, j_k) - Sum_{IL}(k, j_k) \bar{u}_k^T \gamma \right)$.  (4.11)

Similarly, we have

$g(\gamma) = I_G - \sum_{k=1}^{|\bar{U}|} Sum_{IL}(k, j_k) \bar{u}_k$.  (4.12)

Based on the above discussion, $D(\gamma)$ and $g(\gamma)$ are computed in three steps as follows. First, when $U$ is given in Problem 1, $\bar{U}$ and $J_k$, $1 \le k \le |\bar{U}|$, are computed according to their definitions. Second, when $c_l$ is obtained, the sets $J_k$, $1 \le k \le |\bar{U}|$,

are sorted to make the assumption in Eq. (4.8) valid, and then $Sum_{ILC}(k, j)$ and $Sum_{IL}(k, j)$, $j \in J_k$ and $1 \le k \le |\bar{U}|$, are computed. Third, with a given $\gamma$, the indices $j_k$, $1 \le k \le |\bar{U}|$, are obtained according to their definition by $|\bar{U}|$ binary searches, since each $J_k$ is ordered according to Eq. (4.8), and then $D(\gamma)$ and $g(\gamma)$ are calculated according to Eq. (4.11) and (4.12). Therefore, for each iteration of the cutting-plane method, only the third step is necessary; the first two steps can be pre-computed.

The time complexity of the third step is $O(m|\bar{U}| + \sum_{k=1}^{|\bar{U}|} \log|J_k|)$. Recall that $\sum_{k=1}^{|\bar{U}|} |J_k| = n$ as the sets $J_k$ form a partition of $\{1, 2, \ldots, n\}$. Since

$\sum_{k=1}^{|\bar{U}|} \log|J_k| = \log \prod_{k=1}^{|\bar{U}|} |J_k| \le |\bar{U}| \log \frac{\sum_{k=1}^{|\bar{U}|} |J_k|}{|\bar{U}|} = |\bar{U}| \log \frac{n}{|\bar{U}|}$,

the time complexity of the third step simplifies to $O(|\bar{U}|(m + \log \frac{n}{|\bar{U}|}))$. Obviously, $|\bar{U}| \le n$. Because $|\bar{U}| \log \frac{n}{|\bar{U}|} = O(n)$ for $|\bar{U}| \le n$, the time complexity of the third step is much smaller than the $O(nm)$ complexity of the straightforward computation when $|\bar{U}| \ll n$, and is no worse than $O(nm)$ even when $|\bar{U}|$ approaches $n$. In summary, the time and space complexities associated with the three steps are stated in the following lemma.

Lemma 6. The time complexity of the first step is $O(nm \log|\bar{U}|)$, of the second step is $O(n \log n)$, and of the third step is $O(|\bar{U}|(m + \log \frac{n}{|\bar{U}|}))$. Extra $O(n)$ storage is required to store the pre-computed values.
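The three steps can be sketched as follows: group the identical columns of $U$ once, pre-compute the suffix sums $Sum_{IL}$ and $Sum_{ILC}$ per node after sorting each $J_k$, and evaluate $D(\gamma)$ and $g(\gamma)$ per query by locating $j_k$ with a binary search. The helper names and data layout below are illustrative only.

import numpy as np
from bisect import bisect_left

def group_columns(U):
    # Step 1: group identical columns of U (the index sets J_k).
    groups = {}
    for j in range(U.shape[1]):
        groups.setdefault(tuple(U[:, j]), []).append(j)
    return [(np.array(col), idx) for col, idx in groups.items()]

def precompute(groups, c_l, I_L):
    # Step 2: sort each J_k by c_l (assumption (4.8)) and build the suffix
    # sums Sum_IL(k, .) and Sum_ILC(k, .) of Section 4.4.
    pre = []
    for col, idx in groups:
        idx = sorted(idx, key=lambda j: c_l[j])
        c_sorted = np.array([c_l[j] for j in idx])
        il = np.array([I_L[j] for j in idx])
        suffix_il = np.cumsum(il[::-1])[::-1]
        suffix_ilc = np.cumsum((il * c_sorted)[::-1])[::-1]
        pre.append((col, c_sorted, suffix_il, suffix_ilc))
    return pre

def eval_D_and_g_fast(gamma, pre, I_G):
    # Step 3: evaluate D(gamma) and g(gamma) via Eqs. (4.11)-(4.12),
    # locating j_k in each sorted J_k by binary search.
    D = I_G.dot(gamma)
    g = I_G.astype(float)
    for col, c_sorted, suffix_il, suffix_ilc in pre:
        t = col.dot(gamma)                 # u_k^T gamma
        jk = bisect_left(c_sorted, t)      # first position with c_l >= u_k^T gamma
        if jk < len(c_sorted):
            D += suffix_ilc[jk] - suffix_il[jk] * t
            g = g - suffix_il[jk] * col
    return D, g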

4.5 Computing c_l

The dual approach proposed in the previous sections assumes that $c_l$ is known. In this section, we present a method to compute $c_l$, $1 \le l \le n$. Recall that $c_l = A^{-1} e_l$, and thus it can be computed by solving $A x = e_l$. Therefore, to obtain all $c_l$, $1 \le l \le n$, we need to solve a system of linear equations $n$ times with the same left-hand-side (LHS) matrix $A$ but different right-hand-side (RHS) vectors. As $A$ has the good property of being a symmetric M-matrix, we propose to apply the preconditioned conjugate gradient (PCG) method [8, 4], an iterative method for solving systems of linear equations with a symmetric LHS matrix, due to its fast convergence with a proper preconditioner and its straightforward implementation. For the PCG method, the preconditioner $M$ is a matrix of the same size as $A$ such that $M \approx A$ and $M x = r$ can be easily solved for $x$ given some RHS vector $r$.

To obtain the preconditioner $M$, we apply the stochastic preconditioning technique proposed in [18]. This preconditioning technique was motivated by a random-walk power grid analyzer [17], and has been shown to have high quality, especially for large problem sizes, in comparison to conventional deterministic preconditioning techniques. Though it requires more running time to compute $M$ according to [18] in comparison to conventional techniques, this is not a concern in our setting, since the preconditioning time is amortized when the maximum voltage drops are computed for all $n$ nodes. In the stochastic preconditioning technique, we perform random walks on the power grid to generate a sparse lower triangular matrix $L$ and a diagonal matrix $\Lambda$. The preconditioner $M = L^T \Lambda L$ satisfies $M \approx A$, and the linear equations $M x = r$ can be solved by a backward substitution followed by a forward substitution.

Practically, the PCG method can be terminated when the norm of the residual is small enough. As the residual will not be exactly 0, the computed $c_l$ differs from the exact value, and thus the optimal value of Eq. (4.2) based on the computed $c_l$ differs from the exact maximum voltage drop. Therefore, it is necessary to analyze the error in the optimal value of Eq. (4.2) due to the non-zero residual, so that we can convert a user-specified error tolerance on the optimal value of Eq. (4.2) into a proper termination condition for the PCG method. Details follow.

Let $c_l$ be the exact solution of $A x = e_l$ and $\tilde{c}_l$ be the solution computed by the PCG method. According to Theorem 1, we can consider the optimization problem in Eq. (3.1) instead of Eq. (4.2), as their optimal values are the same. For the optimization problem in Eq. (3.1), let $i^*$ be an optimal solution and $v_l^*$ be the optimal value for the exact $c_l$, and let $\tilde{i}$ be an optimal solution and $\tilde{v}_l$ be the optimal value for the computed $\tilde{c}_l$. Let the residual vector be $r = e_l - A \tilde{c}_l$. It is obvious that

$A^{-1} r = c_l - \tilde{c}_l$, $v_l^* = c_l^T i^* \ge c_l^T \tilde{i}$, $\tilde{v}_l = \tilde{c}_l^T \tilde{i} \ge \tilde{c}_l^T i^*$.

Therefore,

$v_l^* \ge c_l^T \tilde{i} = \tilde{c}_l^T \tilde{i} + r^T A^{-1} \tilde{i} = \tilde{v}_l + r^T A^{-1} \tilde{i}$,  (4.13)

and

$v_l^* = \tilde{c}_l^T i^* + r^T A^{-1} i^* \le \tilde{c}_l^T \tilde{i} + r^T A^{-1} i^* = \tilde{v}_l + r^T A^{-1} i^*$.  (4.14)

On the other hand, since $A$ is an M-matrix, every element of $A^{-1}$ is non-negative. So all the components of $A^{-1} \tilde{i}$ and $A^{-1} i^*$ are non-negative, since $\tilde{i} \ge 0$ and $i^* \ge 0$. Moreover, let $\Delta$ be the maximum component of the solution of the linear equations $A x = I_L$. All the components of $A^{-1} \tilde{i}$ and $A^{-1} i^*$ are then no larger than $\Delta$. Let $r_i$ be the $i$th component of $r$. According to Eq. (4.13) and (4.14), we have

$\tilde{v}_l + \Delta \sum_{i=1}^{n} \min(r_i, 0) \le v_l^* \le \tilde{v}_l + \Delta \sum_{i=1}^{n} \max(r_i, 0)$.
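A minimal sketch of the PCG solve for $c_l$ is given below. The preconditioner is passed in as a generic callable standing in for the stochastic preconditioner $M = L^T \Lambda L$ of [18] (two triangular solves), and the loop terminates on the weighted residual norm formalized in Eq. (4.15) just below; all names are illustrative.

import numpy as np

def pcg_solve(A, b, apply_preconditioner, delta, Delta, max_iter=1000):
    # Preconditioned conjugate gradient for A x = b with symmetric A.
    # apply_preconditioner(r) should return M^{-1} r, e.g. the backward and
    # forward substitutions with the stochastic preconditioner of [18].
    # Terminates when Delta * ||r||_1 <= delta, anticipating Eq. (4.15).
    x = np.zeros_like(b)
    r = b - A.dot(x)
    z = apply_preconditioner(r)
    p = z.copy()
    rz = r.dot(z)
    for _ in range(max_iter):
        if Delta * np.abs(r).sum() <= delta:
            break
        Ap = A.dot(p)
        alpha = rz / p.dot(Ap)
        x += alpha * p
        r -= alpha * Ap
        z = apply_preconditioner(r)
        rz_new = r.dot(z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

def compute_c_l(A, l, apply_preconditioner, delta_pcg, Delta):
    # c_l = A^{-1} e_l (A is symmetric, so this is also the l-th row of A^{-1}).
    e_l = np.zeros(A.shape[0])
    e_l[l] = 1.0
    return pcg_solve(A, e_l, apply_preconditioner, delta_pcg, Delta)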

Based on this bound, if a user-specified error tolerance $\delta_{pcg}$ is used to terminate the PCG method when

$\Delta \|r\|_1 = \Delta \sum_{i=1}^{n} |r_i| \le \delta_{pcg}$,  (4.15)

then the following lemma holds.

Lemma 7. Define $v_l^+ = \Delta \sum_{i=1}^{n} \max(r_i, 0)$ as a correction of $\tilde{v}_l$ based on the residual. We have

$\tilde{v}_l + v_l^+ - \delta_{pcg} \le v_l^* \le \tilde{v}_l + v_l^+$.

4.6 The DualVD Algorithm

Algorithm DualVD
Inputs: $A$, $I_L$, $I_G$, $U$, as specified in Problem 1; $\delta_{pcg}$, $\delta_{cp}$, user-specified error tolerances.
Outputs: the maximum voltage drop at each node.
1  Compute the preconditioner $M$ according to [18]
2  Solve $A x = I_L$ to obtain $\Delta$
3  Compute $\bar{U}$ and $J_k$, $1 \le k \le |\bar{U}|$
4  For $l = 1$ to $n$
5    Apply the PCG method to compute $\tilde{c}_l$ using the termination condition in Eq. (4.15)
6    Sort $J_k$, $1 \le k \le |\bar{U}|$, according to Eq. (4.8)
7    Compute $Sum_{ILC}(k, j)$ and $Sum_{IL}(k, j)$, $j \in J_k$ and $1 \le k \le |\bar{U}|$
8    Apply the cutting-plane method to solve Eq. (4.2) using the termination condition in Eq. (4.6)
9    Report the maximum voltage drop at node $l$ as $D(\gamma_{cp}) + v_l^+$

Figure 4.1. The DualVD algorithm.

Combining the PCG method and the proposed dual approach, we design the DualVD algorithm to solve the maxvd-lcc problem, as shown in Fig. 4.1. In this algorithm, the pre-processing steps on lines 1 to 3 generate the necessary data for the PCG method and the cutting-plane method. The maximum voltage drop at each node is then computed in the body of the loop on line 4. Note that on line 9, we do not report the optimal value of Eq. (4.2) as the maximum voltage drop at a node,

because it could be smaller than the exact maximum voltage drop due to the error introduced by the PCG method. Instead, we report a conservative voltage drop within the user-specified error tolerances, based on the optimal value and a correction derived from the residual of the PCG method. The correctness of the DualVD algorithm is stated in the following theorem, which is implied by Lemma 5, Lemma 7, and the fact that $D(\gamma^*) = \tilde{v}_l$ in these two lemmas.

Theorem 2. For every $1 \le l \le n$,

$D(\gamma_{cp}) + v_l^+ - \delta_{cp} - \delta_{pcg} \le v_l^* \le D(\gamma_{cp}) + v_l^+$.
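For completeness, the following schematic driver strings the earlier sketches together in the order of Figure 4.1; it is an illustrative outline under the same assumptions as before, not the C++ implementation evaluated in Chapter 5.

import numpy as np

def dual_vd(A, I_L, I_G, U, apply_preconditioner, delta_pcg=1e-4, delta_cp=1e-4):
    # Schematic per-node loop following Figure 4.1.
    n = A.shape[0]
    # line 2: solve A x = I_L and take the maximum component as Delta
    Delta = float(pcg_solve(A, I_L, apply_preconditioner, delta_pcg, 1.0).max())
    drops = np.zeros(n)
    for l in range(n):
        c_l = compute_c_l(A, l, apply_preconditioner, delta_pcg, Delta)   # line 5
        D_cp, _ = cutting_plane(c_l, I_L, U, I_G, delta_cp)               # lines 6-8
        e_l = np.zeros(n)
        e_l[l] = 1.0
        r = e_l - A.dot(c_l)                                              # PCG residual
        drops[l] = D_cp + Delta * np.maximum(r, 0.0).sum()                # line 9: D(gamma_cp) + v_l^+
    return drops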

CHAPTER 5
EXPERIMENTAL RESULTS

The DualVD algorithm is implemented in C++ and compiled with GCC version 4.2. The LP problems CP($S$) in the cutting-plane method are solved by MOSEK [22], a general-purpose optimization engine. For comparison, we replace the proposed dual approach in the DualVD algorithm by MOSEK to solve Eq. (3.1) directly, and call this algorithm DirectVD hereafter. All the experiments are performed on a 64-bit Linux machine with a 2.4GHz Intel Q6600 processor and 4GB memory. Note that although the processor has multiple cores, only one core is used for the experiments.

We generate 7 power grid benchmarks randomly using a setting of 4 metal layers, 1.2V VDD, and various C4 bumps, chip sizes, and power consumptions. Both error tolerances $\delta_{cp}$ and $\delta_{pcg}$ are set to $10^{-4}$, i.e., 0.1mV. We experiment with two global constraint settings: one with 4 global constraints and the other with 10. Under these settings, we observe that the PCG method and the cutting-plane method converge in less than 15 and 40 iterations, respectively.

As MOSEK allows choosing between the simplex method and the interior point method to solve LP problems, we experiment with both options, and the runtime comparison of the DualVD and DirectVD algorithms using these two methods is presented in Tables 5.1-5.4. The voltage drops computed by the two algorithms are the same and are thus excluded from the tables. The runtime can be generally partitioned into two parts: the runtime to compute $c_l$ and the runtime to solve the LP problem Eq. (3.1) (or, more precisely, Eq. (4.2) for the DualVD algorithm). Both algorithms have the same runtime for the former, as they share the same code to compute $c_l$. Since the latter is the major contribution of this thesis work, we report its runtime under the columns "LP". Moreover, we report the total runtime under the columns "Total".

The time units are abbreviated such that s, m, h and d denote seconds, minutes, hours, and days, respectively. It can be seen that the proposed dual approach drastically reduces the runtime to solve the LP problem, so that the DualVD algorithm achieves a significant speedup over the DirectVD algorithm using either the simplex method or the interior point method. For all the test cases that we have evaluated, the simplex method is slightly faster than the interior point method. Note that for the test cases which take more than 10 hours, the reported runtime is an estimate extrapolated from the runtime of 1000 randomly chosen nodes.

Table 5.1. Runtime comparison of DualVD and DirectVD using the interior point method (4 global constraints). Columns: number of nodes; DirectVD LP and total runtime; DualVD LP and total runtime; LP and total speedup.

Table 5.2. Runtime comparison of DualVD and DirectVD using the interior point method (10 global constraints). Columns: number of nodes; DirectVD LP and total runtime; DualVD LP and total runtime; LP and total speedup.

Table 5.3. Runtime comparison of DualVD and DirectVD using the simplex method (4 global constraints). Columns: number of nodes; DirectVD LP and total runtime; DualVD LP and total runtime; LP and total speedup.

Table 5.4. Runtime comparison of DualVD and DirectVD using the simplex method (10 global constraints). Columns: number of nodes; DirectVD LP and total runtime; DualVD LP and total runtime; LP and total speedup.

Our experimental results show that the random-walk-based preconditioner computation is fairly efficient, taking from a few seconds to a few minutes as the power grids become larger. However, it can be inferred from these tables that computing $c_l$ by the PCG method may require more than 80% of the runtime of the DualVD algorithm for large benchmarks. We view this observation as an opportunity to further speed up the DualVD algorithm by leveraging more advanced power grid analysis techniques. In addition, according to Eq. (4.15) and Theorem 2, the PCG runtime can be reduced by using a larger $\delta_{pcg}$, which trades solution quality for runtime.

Figure 5.1. Comparison of runtime per node (runtime per node in seconds versus the number of nodes, for the algorithm of [2], DirectVD, and DualVD)

We compare our algorithm (using the simplex method) with the previous work [2] in Fig. 5.1 for the setting of 4 global constraints. Note that this is not a fully fair comparison, since the source code and the benchmarks of [2] are not publicly available. We were able to obtain from the authors the type of processor (AMD Opteron 2218) in the Linux machine used in [2]. We scale their runtime with 5mV accuracy based on SPEC CPU2006 results to obtain their per-node runtime. It is clear that our DualVD algorithm achieves much higher accuracy (since $\delta_{cp} + \delta_{pcg} = 0.2$mV $\ll$ 5mV) with much less runtime. Moreover, our DualVD algorithm exhibits a better scaling trend for large benchmarks, as its runtime per node increases slowly with the number of nodes. In summary, our DualVD algorithm accelerates the solution of the LP problems, reduces the runtime per node significantly, and makes the vectorless verification of large power grids practical.

CHAPTER 6
CONCLUSION

In this thesis, we presented an efficient dual approach to solve the LP problem associated with the vectorless power grid verification problem. Our overall vectorless power grid verification algorithm, DualVD, combined the proposed dual approach with a random-walk-based PCG power grid analyzer. Experimental results confirmed the efficiency of the proposed algorithm. We expect to achieve additional speedup by leveraging more advanced power grid analysis techniques, and by parallel processing, since each node can be verified independently. It is also interesting to study whether the proposed algorithm can be extended to handle inductance in power grids [1].


Mathematical Programming and Research Methods (Part II)

Mathematical Programming and Research Methods (Part II) Mathematical Programming and Research Methods (Part II) 4. Convexity and Optimization Massimiliano Pontil (based on previous lecture by Andreas Argyriou) 1 Today s Plan Convex sets and functions Types

More information

Algorithms for Integer Programming

Algorithms for Integer Programming Algorithms for Integer Programming Laura Galli November 9, 2016 Unlike linear programming problems, integer programming problems are very difficult to solve. In fact, no efficient general algorithm is

More information

Contents. I Basics 1. Copyright by SIAM. Unauthorized reproduction of this article is prohibited.

Contents. I Basics 1. Copyright by SIAM. Unauthorized reproduction of this article is prohibited. page v Preface xiii I Basics 1 1 Optimization Models 3 1.1 Introduction... 3 1.2 Optimization: An Informal Introduction... 4 1.3 Linear Equations... 7 1.4 Linear Optimization... 10 Exercises... 12 1.5

More information

EXERCISES SHORTEST PATHS: APPLICATIONS, OPTIMIZATION, VARIATIONS, AND SOLVING THE CONSTRAINED SHORTEST PATH PROBLEM. 1 Applications and Modelling

EXERCISES SHORTEST PATHS: APPLICATIONS, OPTIMIZATION, VARIATIONS, AND SOLVING THE CONSTRAINED SHORTEST PATH PROBLEM. 1 Applications and Modelling SHORTEST PATHS: APPLICATIONS, OPTIMIZATION, VARIATIONS, AND SOLVING THE CONSTRAINED SHORTEST PATH PROBLEM EXERCISES Prepared by Natashia Boland 1 and Irina Dumitrescu 2 1 Applications and Modelling 1.1

More information

Leveraging Set Relations in Exact Set Similarity Join

Leveraging Set Relations in Exact Set Similarity Join Leveraging Set Relations in Exact Set Similarity Join Xubo Wang, Lu Qin, Xuemin Lin, Ying Zhang, and Lijun Chang University of New South Wales, Australia University of Technology Sydney, Australia {xwang,lxue,ljchang}@cse.unsw.edu.au,

More information

General properties of staircase and convex dual feasible functions

General properties of staircase and convex dual feasible functions General properties of staircase and convex dual feasible functions JÜRGEN RIETZ, CLÁUDIO ALVES, J. M. VALÉRIO de CARVALHO Centro de Investigação Algoritmi da Universidade do Minho, Escola de Engenharia

More information

MVE165/MMG631 Linear and integer optimization with applications Lecture 9 Discrete optimization: theory and algorithms

MVE165/MMG631 Linear and integer optimization with applications Lecture 9 Discrete optimization: theory and algorithms MVE165/MMG631 Linear and integer optimization with applications Lecture 9 Discrete optimization: theory and algorithms Ann-Brith Strömberg 2018 04 24 Lecture 9 Linear and integer optimization with applications

More information

LECTURE 13: SOLUTION METHODS FOR CONSTRAINED OPTIMIZATION. 1. Primal approach 2. Penalty and barrier methods 3. Dual approach 4. Primal-dual approach

LECTURE 13: SOLUTION METHODS FOR CONSTRAINED OPTIMIZATION. 1. Primal approach 2. Penalty and barrier methods 3. Dual approach 4. Primal-dual approach LECTURE 13: SOLUTION METHODS FOR CONSTRAINED OPTIMIZATION 1. Primal approach 2. Penalty and barrier methods 3. Dual approach 4. Primal-dual approach Basic approaches I. Primal Approach - Feasible Direction

More information

FUTURE communication networks are expected to support

FUTURE communication networks are expected to support 1146 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL 13, NO 5, OCTOBER 2005 A Scalable Approach to the Partition of QoS Requirements in Unicast and Multicast Ariel Orda, Senior Member, IEEE, and Alexander Sprintson,

More information

LECTURES 3 and 4: Flows and Matchings

LECTURES 3 and 4: Flows and Matchings LECTURES 3 and 4: Flows and Matchings 1 Max Flow MAX FLOW (SP). Instance: Directed graph N = (V,A), two nodes s,t V, and capacities on the arcs c : A R +. A flow is a set of numbers on the arcs such that

More information

LP-Modelling. dr.ir. C.A.J. Hurkens Technische Universiteit Eindhoven. January 30, 2008

LP-Modelling. dr.ir. C.A.J. Hurkens Technische Universiteit Eindhoven. January 30, 2008 LP-Modelling dr.ir. C.A.J. Hurkens Technische Universiteit Eindhoven January 30, 2008 1 Linear and Integer Programming After a brief check with the backgrounds of the participants it seems that the following

More information

In this lecture, we ll look at applications of duality to three problems:

In this lecture, we ll look at applications of duality to three problems: Lecture 7 Duality Applications (Part II) In this lecture, we ll look at applications of duality to three problems: 1. Finding maximum spanning trees (MST). We know that Kruskal s algorithm finds this,

More information

Advanced Modeling and Simulation Strategies for Power Integrity in High-Speed Designs

Advanced Modeling and Simulation Strategies for Power Integrity in High-Speed Designs Advanced Modeling and Simulation Strategies for Power Integrity in High-Speed Designs Ramachandra Achar Carleton University 5170ME, Dept. of Electronics Ottawa, Ont, Canada K1S 5B6 *Email: achar@doe.carleton.ca;

More information

Section Notes 5. Review of Linear Programming. Applied Math / Engineering Sciences 121. Week of October 15, 2017

Section Notes 5. Review of Linear Programming. Applied Math / Engineering Sciences 121. Week of October 15, 2017 Section Notes 5 Review of Linear Programming Applied Math / Engineering Sciences 121 Week of October 15, 2017 The following list of topics is an overview of the material that was covered in the lectures

More information

Linear Programming. Readings: Read text section 11.6, and sections 1 and 2 of Tom Ferguson s notes (see course homepage).

Linear Programming. Readings: Read text section 11.6, and sections 1 and 2 of Tom Ferguson s notes (see course homepage). Linear Programming Learning Goals. Introduce Linear Programming Problems. Widget Example, Graphical Solution. Basic Theory: Feasible Set, Vertices, Existence of Solutions. Equivalent formulations. Outline

More information

Math 5593 Linear Programming Lecture Notes

Math 5593 Linear Programming Lecture Notes Math 5593 Linear Programming Lecture Notes Unit II: Theory & Foundations (Convex Analysis) University of Colorado Denver, Fall 2013 Topics 1 Convex Sets 1 1.1 Basic Properties (Luenberger-Ye Appendix B.1).........................

More information

Linear Programming. Widget Factory Example. Linear Programming: Standard Form. Widget Factory Example: Continued.

Linear Programming. Widget Factory Example. Linear Programming: Standard Form. Widget Factory Example: Continued. Linear Programming Widget Factory Example Learning Goals. Introduce Linear Programming Problems. Widget Example, Graphical Solution. Basic Theory:, Vertices, Existence of Solutions. Equivalent formulations.

More information

Lecture 10,11: General Matching Polytope, Maximum Flow. 1 Perfect Matching and Matching Polytope on General Graphs

Lecture 10,11: General Matching Polytope, Maximum Flow. 1 Perfect Matching and Matching Polytope on General Graphs CMPUT 675: Topics in Algorithms and Combinatorial Optimization (Fall 2009) Lecture 10,11: General Matching Polytope, Maximum Flow Lecturer: Mohammad R Salavatipour Date: Oct 6 and 8, 2009 Scriber: Mohammad

More information

15.082J and 6.855J. Lagrangian Relaxation 2 Algorithms Application to LPs

15.082J and 6.855J. Lagrangian Relaxation 2 Algorithms Application to LPs 15.082J and 6.855J Lagrangian Relaxation 2 Algorithms Application to LPs 1 The Constrained Shortest Path Problem (1,10) 2 (1,1) 4 (2,3) (1,7) 1 (10,3) (1,2) (10,1) (5,7) 3 (12,3) 5 (2,2) 6 Find the shortest

More information

CSE 417 Network Flows (pt 4) Min Cost Flows

CSE 417 Network Flows (pt 4) Min Cost Flows CSE 417 Network Flows (pt 4) Min Cost Flows Reminders > HW6 is due Monday Review of last three lectures > Defined the maximum flow problem find the feasible flow of maximum value flow is feasible if it

More information

1 Linear programming relaxation

1 Linear programming relaxation Cornell University, Fall 2010 CS 6820: Algorithms Lecture notes: Primal-dual min-cost bipartite matching August 27 30 1 Linear programming relaxation Recall that in the bipartite minimum-cost perfect matching

More information

Introduction to Mathematical Programming IE496. Final Review. Dr. Ted Ralphs

Introduction to Mathematical Programming IE496. Final Review. Dr. Ted Ralphs Introduction to Mathematical Programming IE496 Final Review Dr. Ted Ralphs IE496 Final Review 1 Course Wrap-up: Chapter 2 In the introduction, we discussed the general framework of mathematical modeling

More information

EXTREME POINTS AND AFFINE EQUIVALENCE

EXTREME POINTS AND AFFINE EQUIVALENCE EXTREME POINTS AND AFFINE EQUIVALENCE The purpose of this note is to use the notions of extreme points and affine transformations which are studied in the file affine-convex.pdf to prove that certain standard

More information

Some Advanced Topics in Linear Programming

Some Advanced Topics in Linear Programming Some Advanced Topics in Linear Programming Matthew J. Saltzman July 2, 995 Connections with Algebra and Geometry In this section, we will explore how some of the ideas in linear programming, duality theory,

More information

Approximation Algorithms

Approximation Algorithms Approximation Algorithms Prof. Tapio Elomaa tapio.elomaa@tut.fi Course Basics A 4 credit unit course Part of Theoretical Computer Science courses at the Laboratory of Mathematics There will be 4 hours

More information

1. Introduction. performance of numerical methods. complexity bounds. structural convex optimization. course goals and topics

1. Introduction. performance of numerical methods. complexity bounds. structural convex optimization. course goals and topics 1. Introduction EE 546, Univ of Washington, Spring 2016 performance of numerical methods complexity bounds structural convex optimization course goals and topics 1 1 Some course info Welcome to EE 546!

More information

Optimum Placement of Decoupling Capacitors on Packages and Printed Circuit Boards Under the Guidance of Electromagnetic Field Simulation

Optimum Placement of Decoupling Capacitors on Packages and Printed Circuit Boards Under the Guidance of Electromagnetic Field Simulation Optimum Placement of Decoupling Capacitors on Packages and Printed Circuit Boards Under the Guidance of Electromagnetic Field Simulation Yuzhe Chen, Zhaoqing Chen and Jiayuan Fang Department of Electrical

More information

Convex Optimization MLSS 2015

Convex Optimization MLSS 2015 Convex Optimization MLSS 2015 Constantine Caramanis The University of Texas at Austin The Optimization Problem minimize : f (x) subject to : x X. The Optimization Problem minimize : f (x) subject to :

More information

5. DUAL LP, SOLUTION INTERPRETATION, AND POST-OPTIMALITY

5. DUAL LP, SOLUTION INTERPRETATION, AND POST-OPTIMALITY 5. DUAL LP, SOLUTION INTERPRETATION, AND POST-OPTIMALITY 5.1 DUALITY Associated with every linear programming problem (the primal) is another linear programming problem called its dual. If the primal involves

More information

2 Geometry Solutions

2 Geometry Solutions 2 Geometry Solutions jacques@ucsd.edu Here is give problems and solutions in increasing order of difficulty. 2.1 Easier problems Problem 1. What is the minimum number of hyperplanar slices to make a d-dimensional

More information

Convex Optimization CMU-10725

Convex Optimization CMU-10725 Convex Optimization CMU-10725 Ellipsoid Methods Barnabás Póczos & Ryan Tibshirani Outline Linear programs Simplex algorithm Running time: Polynomial or Exponential? Cutting planes & Ellipsoid methods for

More information

Floorplan and Power/Ground Network Co-Synthesis for Fast Design Convergence

Floorplan and Power/Ground Network Co-Synthesis for Fast Design Convergence Floorplan and Power/Ground Network Co-Synthesis for Fast Design Convergence Chen-Wei Liu 12 and Yao-Wen Chang 2 1 Synopsys Taiwan Limited 2 Department of Electrical Engineering National Taiwan University,

More information

An Improved Measurement Placement Algorithm for Network Observability

An Improved Measurement Placement Algorithm for Network Observability IEEE TRANSACTIONS ON POWER SYSTEMS, VOL. 16, NO. 4, NOVEMBER 2001 819 An Improved Measurement Placement Algorithm for Network Observability Bei Gou and Ali Abur, Senior Member, IEEE Abstract This paper

More information

Lecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize.

Lecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize. Cornell University, Fall 2017 CS 6820: Algorithms Lecture notes on the simplex method September 2017 1 The Simplex Method We will present an algorithm to solve linear programs of the form maximize subject

More information

Probabilistic Graphical Models

Probabilistic Graphical Models School of Computer Science Probabilistic Graphical Models Theory of Variational Inference: Inner and Outer Approximation Eric Xing Lecture 14, February 29, 2016 Reading: W & J Book Chapters Eric Xing @

More information

AM 221: Advanced Optimization Spring 2016

AM 221: Advanced Optimization Spring 2016 AM 221: Advanced Optimization Spring 2016 Prof. Yaron Singer Lecture 2 Wednesday, January 27th 1 Overview In our previous lecture we discussed several applications of optimization, introduced basic terminology,

More information

A Partition-based Algorithm for Power Grid Design using Locality

A Partition-based Algorithm for Power Grid Design using Locality A Partition-based Algorithm for Power Grid Design using Locality 1 Jaskirat Singh Sachin S. Sapatnekar Department of Electrical & Computer Engineering University of Minnesota Minneapolis, MN 55455 jsingh,sachin

More information

Chapter 2 On-Chip Protection Solution for Radio Frequency Integrated Circuits in Standard CMOS Process

Chapter 2 On-Chip Protection Solution for Radio Frequency Integrated Circuits in Standard CMOS Process Chapter 2 On-Chip Protection Solution for Radio Frequency Integrated Circuits in Standard CMOS Process 2.1 Introduction Standard CMOS technologies have been increasingly used in RF IC applications mainly

More information

Conic Optimization via Operator Splitting and Homogeneous Self-Dual Embedding

Conic Optimization via Operator Splitting and Homogeneous Self-Dual Embedding Conic Optimization via Operator Splitting and Homogeneous Self-Dual Embedding B. O Donoghue E. Chu N. Parikh S. Boyd Convex Optimization and Beyond, Edinburgh, 11/6/2104 1 Outline Cone programming Homogeneous

More information

A Course in Machine Learning

A Course in Machine Learning A Course in Machine Learning Hal Daumé III 13 UNSUPERVISED LEARNING If you have access to labeled training data, you know what to do. This is the supervised setting, in which you have a teacher telling

More information

6.854 Advanced Algorithms. Scribes: Jay Kumar Sundararajan. Duality

6.854 Advanced Algorithms. Scribes: Jay Kumar Sundararajan. Duality 6.854 Advanced Algorithms Scribes: Jay Kumar Sundararajan Lecturer: David Karger Duality This lecture covers weak and strong duality, and also explains the rules for finding the dual of a linear program,

More information

Lecture Notes 2: The Simplex Algorithm

Lecture Notes 2: The Simplex Algorithm Algorithmic Methods 25/10/2010 Lecture Notes 2: The Simplex Algorithm Professor: Yossi Azar Scribe:Kiril Solovey 1 Introduction In this lecture we will present the Simplex algorithm, finish some unresolved

More information

ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL

ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL OF THE UNIVERSITY OF MINNESOTA BY BHARAT SIGINAM IN

More information

Research Interests Optimization:

Research Interests Optimization: Mitchell: Research interests 1 Research Interests Optimization: looking for the best solution from among a number of candidates. Prototypical optimization problem: min f(x) subject to g(x) 0 x X IR n Here,

More information

Advanced Surface Based MoM Techniques for Packaging and Interconnect Analysis

Advanced Surface Based MoM Techniques for Packaging and Interconnect Analysis Electrical Interconnect and Packaging Advanced Surface Based MoM Techniques for Packaging and Interconnect Analysis Jason Morsey Barry Rubin, Lijun Jiang, Lon Eisenberg, Alina Deutsch Introduction Fast

More information

A PRIMAL-DUAL EXTERIOR POINT ALGORITHM FOR LINEAR PROGRAMMING PROBLEMS

A PRIMAL-DUAL EXTERIOR POINT ALGORITHM FOR LINEAR PROGRAMMING PROBLEMS Yugoslav Journal of Operations Research Vol 19 (2009), Number 1, 123-132 DOI:10.2298/YUJOR0901123S A PRIMAL-DUAL EXTERIOR POINT ALGORITHM FOR LINEAR PROGRAMMING PROBLEMS Nikolaos SAMARAS Angelo SIFELARAS

More information

Linear Programming Duality and Algorithms

Linear Programming Duality and Algorithms COMPSCI 330: Design and Analysis of Algorithms 4/5/2016 and 4/7/2016 Linear Programming Duality and Algorithms Lecturer: Debmalya Panigrahi Scribe: Tianqi Song 1 Overview In this lecture, we will cover

More information

Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

More information

Symmetric Product Graphs

Symmetric Product Graphs Rochester Institute of Technology RIT Scholar Works Theses Thesis/Dissertation Collections 5-20-2015 Symmetric Product Graphs Evan Witz Follow this and additional works at: http://scholarworks.rit.edu/theses

More information

CMPSCI611: The Simplex Algorithm Lecture 24

CMPSCI611: The Simplex Algorithm Lecture 24 CMPSCI611: The Simplex Algorithm Lecture 24 Let s first review the general situation for linear programming problems. Our problem in standard form is to choose a vector x R n, such that x 0 and Ax = b,

More information

EE5780 Advanced VLSI CAD

EE5780 Advanced VLSI CAD EE5780 Advanced VLSI CAD Lecture 1 Introduction Zhuo Feng 1.1 Prof. Zhuo Feng Office: EERC 513 Phone: 487-3116 Email: zhuofeng@mtu.edu Class Website http://www.ece.mtu.edu/~zhuofeng/ee5780fall2013.html

More information

An Extension of the Multicut L-Shaped Method. INEN Large-Scale Stochastic Optimization Semester project. Svyatoslav Trukhanov

An Extension of the Multicut L-Shaped Method. INEN Large-Scale Stochastic Optimization Semester project. Svyatoslav Trukhanov An Extension of the Multicut L-Shaped Method INEN 698 - Large-Scale Stochastic Optimization Semester project Svyatoslav Trukhanov December 13, 2005 1 Contents 1 Introduction and Literature Review 3 2 Formal

More information

A Short SVM (Support Vector Machine) Tutorial

A Short SVM (Support Vector Machine) Tutorial A Short SVM (Support Vector Machine) Tutorial j.p.lewis CGIT Lab / IMSC U. Southern California version 0.zz dec 004 This tutorial assumes you are familiar with linear algebra and equality-constrained optimization/lagrange

More information

COLUMN GENERATION IN LINEAR PROGRAMMING

COLUMN GENERATION IN LINEAR PROGRAMMING COLUMN GENERATION IN LINEAR PROGRAMMING EXAMPLE: THE CUTTING STOCK PROBLEM A certain material (e.g. lumber) is stocked in lengths of 9, 4, and 6 feet, with respective costs of $5, $9, and $. An order for

More information

Lecture 19: November 5

Lecture 19: November 5 0-725/36-725: Convex Optimization Fall 205 Lecturer: Ryan Tibshirani Lecture 9: November 5 Scribes: Hyun Ah Song Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer: These notes have not

More information

Parallel Auction Algorithm for Linear Assignment Problem

Parallel Auction Algorithm for Linear Assignment Problem Parallel Auction Algorithm for Linear Assignment Problem Xin Jin 1 Introduction The (linear) assignment problem is one of classic combinatorial optimization problems, first appearing in the studies on

More information

The simplex method and the diameter of a 0-1 polytope

The simplex method and the diameter of a 0-1 polytope The simplex method and the diameter of a 0-1 polytope Tomonari Kitahara and Shinji Mizuno May 2012 Abstract We will derive two main results related to the primal simplex method for an LP on a 0-1 polytope.

More information

Full-chip Routing Optimization with RLC Crosstalk Budgeting

Full-chip Routing Optimization with RLC Crosstalk Budgeting 1 Full-chip Routing Optimization with RLC Crosstalk Budgeting Jinjun Xiong, Lei He, Member, IEEE ABSTRACT Existing layout optimization methods for RLC crosstalk reduction assume a set of interconnects

More information

Combinatorial Gems. Po-Shen Loh. June 2009

Combinatorial Gems. Po-Shen Loh. June 2009 Combinatorial Gems Po-Shen Loh June 2009 Although this lecture does not contain many offical Olympiad problems, the arguments which are used are all common elements of Olympiad problem solving. Some of

More information

Lecture #3: PageRank Algorithm The Mathematics of Google Search

Lecture #3: PageRank Algorithm The Mathematics of Google Search Lecture #3: PageRank Algorithm The Mathematics of Google Search We live in a computer era. Internet is part of our everyday lives and information is only a click away. Just open your favorite search engine,

More information

Matrix-free IPM with GPU acceleration

Matrix-free IPM with GPU acceleration Matrix-free IPM with GPU acceleration Julian Hall, Edmund Smith and Jacek Gondzio School of Mathematics University of Edinburgh jajhall@ed.ac.uk 29th June 2011 Linear programming theory Primal-dual pair

More information

GRAPH DECOMPOSITION BASED ON DEGREE CONSTRAINTS. March 3, 2016

GRAPH DECOMPOSITION BASED ON DEGREE CONSTRAINTS. March 3, 2016 GRAPH DECOMPOSITION BASED ON DEGREE CONSTRAINTS ZOÉ HAMEL March 3, 2016 1. Introduction Let G = (V (G), E(G)) be a graph G (loops and multiple edges not allowed) on the set of vertices V (G) and the set

More information

AMS Behavioral Modeling

AMS Behavioral Modeling CHAPTER 3 AMS Behavioral Modeling Ronald S. Vogelsong, Ph.D. Overview Analog designers have for many decades developed their design using a Bottom-Up design flow. First, they would gain the necessary understanding

More information

Lecture 19. Lecturer: Aleksander Mądry Scribes: Chidambaram Annamalai and Carsten Moldenhauer

Lecture 19. Lecturer: Aleksander Mądry Scribes: Chidambaram Annamalai and Carsten Moldenhauer CS-621 Theory Gems November 21, 2012 Lecture 19 Lecturer: Aleksander Mądry Scribes: Chidambaram Annamalai and Carsten Moldenhauer 1 Introduction We continue our exploration of streaming algorithms. First,

More information

Ma/CS 6b Class 26: Art Galleries and Politicians

Ma/CS 6b Class 26: Art Galleries and Politicians Ma/CS 6b Class 26: Art Galleries and Politicians By Adam Sheffer The Art Gallery Problem Problem. We wish to place security cameras at a gallery, such that they cover it completely. Every camera can cover

More information

Fast-Lipschitz Optimization

Fast-Lipschitz Optimization Fast-Lipschitz Optimization DREAM Seminar Series University of California at Berkeley September 11, 2012 Carlo Fischione ACCESS Linnaeus Center, Electrical Engineering KTH Royal Institute of Technology

More information

Goal of the course: The goal is to learn to design and analyze an algorithm. More specifically, you will learn:

Goal of the course: The goal is to learn to design and analyze an algorithm. More specifically, you will learn: CS341 Algorithms 1. Introduction Goal of the course: The goal is to learn to design and analyze an algorithm. More specifically, you will learn: Well-known algorithms; Skills to analyze the correctness

More information

Introduction to Mathematical Programming IE406. Lecture 20. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 20. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 20 Dr. Ted Ralphs IE406 Lecture 20 1 Reading for This Lecture Bertsimas Sections 10.1, 11.4 IE406 Lecture 20 2 Integer Linear Programming An integer

More information