A Decentralized Energy Management System
Ceyhun Eksin, Ali Hooshmand and Ratnesh Sharma
Energy Management Department, NEC Laboratories America, Inc., Cupertino, CA, 95014 USA. {ceksin, ahooshmand, ...}

Abstract—The primary goal of an energy management system (EMS) in power networks is to balance supply and demand in a cost-efficient manner over its operating horizon, given the uncertainties in generation due to renewable generators and in demand. This goal is formulated as the economic dispatch problem (EDP). A centralized energy management system faces scalability issues when new generator or storage units are introduced, and robustness issues due to failures of entities in the grid, including the EMS itself. To alleviate these complexities, a versatile decentralized energy management system (d-EMS) is developed. The d-EMS embeds a decentralized solution to the EDP based on the alternating direction method of multipliers (ADMM) inside a decentralized implementation of receding horizon control. The ADMM-based algorithm solves the EDP over the scheduling horizon, while the receding horizon control allows the system to adapt to changes in forecasts and in the network configuration. Decentralized protocols to handle changes to the communication network of devices are provided. Thanks to the simple initialization of the ADMM algorithm, these device failure and addition protocols entail network information updates only.

I. INTRODUCTION

An EMS controls all the devices in a power network with the goal of cost-effective performance while matching demand at all times. This goal is translated into the economic dispatch problem (EDP) [1]. A centralized management system that solves the EDP requires information from all devices and is prone to catastrophic failures. In addition, a centralized EMS has a scalability issue for any future expansion.
The EMS is interrupted for the integration of any new device in order to incorporate its operation cost and device-specific constraints into the algorithm that the EMS implements. Similarly, when a device is up for maintenance, a complete shutdown is required. That is, any failure in the EMS or in a device in the network forces a system-wide interruption of operation. These issues can be overcome with a decentralized energy management system. We consider a decentralized solution to the economic dispatch problem (d-EDP), namely an ADMM algorithm that operates on the dual of the EDP. The d-EDP derivation entails reformulating the dual of the EDP as a consensus problem on the price of power imbalance; that is, each device keeps a local price for power imbalance but is constrained to agree with its neighbors on this local price (Section III-B). This reformulation admits a decentralized solution using the ADMM algorithm. In the d-EDP, each device synchronously updates its individual power and storage profile variables as well as its local price variable by solving a min-max problem. Devices then exchange their local prices with their neighbors and take an ascent step on the dual variables of the local consensus constraints. It is shown in [2] that the algorithm converges to the optimal solution asymptotically when the original problem is convex with strong duality and the network is connected. We argue that this asymptotic optimality result carries over to the EDP when the network is connected and the cost of each device is convex (Section III-C). Numerical implementations show that convergence to the optimal solution is fast and, furthermore, that a near-feasible solution is reached early (Section IV). We also explore the effects of the communication network topology and show that convergence can degrade with the diameter of the network. The EMS faces uncertainty in supply due to renewables and in demand. As a result, the EDP is solved based on predictions of renewables and demand over the time horizon.
As time progresses, the EMS can correct its predictions and make new predictions for the new horizon based on newly revealed information. To this end, we consider a receding horizon control which compensates for prediction errors by solving the EDP for the whole horizon, applying the first time step of the scheduled optimal actions, and then solving the EDP at the next stage based on updated forecasts. In Section V, we present a communication protocol that allows for a decentralized implementation of the receding horizon control. The d-EDP coupled with the receding horizon control amounts to a fully decentralized EMS (d-EMS). Finally, we consider scalability and robustness. We show that the d-EMS can incorporate new devices that register, and can handle device failures on the fly, via a simple update of network information during the execution of the receding horizon control algorithm (Section VI).

A. Literature Review

Previous efforts to solve the EDP in a decentralized manner can be separated into two categories based on whether or not they are anytime feasible. Anytime-feasible algorithms assume a feasible starting point, and their updates remain feasible, matching supply and
demand at all times [3]–[5]. All of these algorithms are synchronous and gradient-based; that is, the change in power in each update is a linear function of the graph Laplacian multiplied by the gradient. These algorithms require initial feasibility in order to remain feasible. Hence, when the network is time-varying, they require another algorithm that determines a feasible starting solution. In addition, it is not clear how these algorithms can be modified to include storage units. Among them, only [3] can handle changes in the network. The approaches that are not anytime feasible divide into consensus gradient algorithms [6]–[8] and primal-dual subgradient algorithms [9]. The proposed consensus gradient algorithms work for cost functions of quadratic form. Except for [6], they require the consensus iterations to converge at each iteration before updating the power in the next time step. It is not clear how to incorporate storage unit constraints into these consensus-based algorithms. Finally, while the subgradient-based algorithms can handle asynchronous updates, they are known to have slow convergence, which is problematic since the algorithms are not anytime feasible. More recently, an ADMM-based algorithm called proximal message passing was proposed in [10]. The model considered in [10] incorporates optimal power flow (OPF) equations for AC and DC devices. The algorithm operates on the primal OPF problem. This requires that the local power imbalance at each iteration be known by the devices in the same net, where a net is a lossless energy carrier responsible for maintaining power balance among the terminals connected to it. Consequently, the communication network needs to align with the transmission lines, and the approach relies on entities called nets on the transmission lines, e.g., power routers, that transmit power-imbalance information to the devices they connect.
This makes the decentralized solution prone to failures upon the failure of a net device. While the proposed d-EDP is not anytime feasible, we show through numerical experiments that feasibility is achieved fast, i.e., an order of magnitude faster than convergence to optimality. Furthermore, the d-EMS can handle failures and additions by simply restarting the algorithm after updating the total number of devices and each device's neighborhood set. The d-EDP uses a synchronous decentralized consensus-based ADMM algorithm which has been shown to converge to the optimal operating points for convex optimization problems with strong duality [2]. Furthermore, ADMM algorithms have empirically been shown to converge faster than subgradient algorithms [11], [12]. For strongly convex functions, the decentralized consensus ADMM has a linear convergence rate [13], whereas the convergence rate of subgradient algorithms is sublinear. Moreover, the proposed d-EDP incorporates storage units that have a dynamic state of charge. Finally, the communication network does not have to align with the power network, and each node in the network corresponds to a device.

II. ECONOMIC DISPATCH PROBLEM

In energy systems, the EDP considers cost-optimal power dispatch decisions to match the load profile. We use d(h) to denote the predicted demand at time h ∈ H and define the demand profile as d := [d(1), ..., d(H)]. We assume that the load profile d is known or represents the predicted load profile. The energy system is composed of a set of devices N that can generate, store, or both generate and store power. We use G and S to denote the sets of generator and storage units, respectively. The set of all devices in the system is then the union of generator and storage units, N := G ∪ S. A generator unit i ∈ G can inject p_i(h) units of power into the system at time h ∈ H, not to exceed its generation capacity p_i^max(h).
The generation profile p_i := {p_i(h)}_{h∈H} results in a monetary cost C_i(p_i) for the system, where C_i(·) is an increasing function mapping the generation profile to the positive reals R_+. A storage unit i ∈ S can charge/discharge its battery by s_i(h) units of power, not to exceed its maximum charge/discharge amount s_i^max(h). When s_i(h) > 0, we say that the storage unit charges its battery; otherwise, it discharges. The battery's state of charge at time h ∈ H is denoted by q_i(h) and is modeled by the following difference equation,

q_i(h) = q_i(h − 1) + α s_i(h),   (1)

where α is the coefficient converting kW units into Ah. The state of charge cannot exceed q_i^max at any point in time due to the specifications of the battery, that is, q_i(h) ≤ q_i^max for any h ∈ H. The initial state of charge q_i(0) is assumed to be given. We use Ω_Gi for the set of feasibility constraints on the power generation of device i ∈ N, that is, p_i ∈ Ω_Gi. Similarly, we use Ω_Si to denote the set of feasibility constraints on the storage profile of i ∈ N, s_i := {s_i(h)}_{h∈H} ∈ Ω_Si. Given the constraints regarding device specifications, the EDP chooses generation p := {p_i}_{i∈N} and storage s := {s_i}_{i∈N} profiles that match supply and demand while minimizing cost,

min_{p,s} Σ_{i∈N} C_i(p_i)   (2)
s.t. Σ_{i∈N} (p_i − s_i) = d,   (3)
p_i ∈ Ω_Gi, s_i ∈ Ω_Si for all i ∈ N.   (4)

The supply-demand matching constraint (3) couples the decision variables of the devices. We denote the
optimal power and storage profiles of the above problem by p* and s*, respectively. Next, we provide a fully decentralized algorithm based on ADMM that converges to the optimal power generation and storage values.

III. DECENTRALIZED EDP

We first present a general overview of the ADMM algorithm (see [14] for a more detailed explanation) and then restructure the EDP such that the application of the ADMM algorithm yields a decentralized solution.

A. ADMM Algorithm

Define the variables x ∈ X ⊆ R^n and z ∈ Z ⊆ R^m. The generic form of the problems to which the ADMM algorithm provides decentralized solutions contains objective functions f(·): X → R and h(·): Z → R and linear equality constraints,

min_{x∈X, z∈Z} f(x) + h(z)   (5)
s.t. Ax + Bz = c,

where A ∈ R^{k×n}, B ∈ R^{k×m} and c ∈ R^k. Note that the objective of the above optimization problem is separable with respect to its variables while its constraint couples them. The ADMM operates on the augmented Lagrangian defined as

L_ρ(x, z, λ) := f(x) + h(z) + λ^T (Ax + Bz − c) + (ρ/2) ‖Ax + Bz − c‖²,   (6)

where λ ∈ R^k is the price associated with violation of the equality constraint and ρ > 0 is a penalty parameter that penalizes infeasibility in (5). The algorithm consists of a coordinate descent in the primal variables in an alternating manner, followed by an ascent step in the price variable,

x^{t+1} := argmin_x L_ρ(x, z^t, λ^t)   (7)
z^{t+1} := argmin_z L_ρ(x^{t+1}, z, λ^t)   (8)
λ^{t+1} := λ^t + ρ (A x^{t+1} + B z^{t+1} − c).   (9)

The minimization of the augmented Lagrangian with respect to x at iteration t+1 requires the values of the other variables from iteration t, namely the primal variable z^t and the price variable λ^t in (7). The minimization with respect to z at iteration t+1 requires the updated primal variable x^{t+1} and the price variable λ^t in (8).
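As a concrete illustration of the updates (7)-(9), the following sketch runs the three ADMM steps on a hypothetical scalar instance (not from the paper): f(x) = (x − 3)², h(z) = |z|, with constraint x − z = 0, so the z-step reduces to soft-thresholding.

```python
# Hedged sketch: generic two-block ADMM (7)-(9) on a toy scalar problem,
#   min (x - 3)^2 + |z|  s.t.  x - z = 0   (A = 1, B = -1, c = 0).
# The problem instance and parameters are illustrative, not from the paper.

def admm_toy(rho=1.0, iters=300):
    x = z = lam = 0.0
    for _ in range(iters):
        # x-update (7): minimize (x-3)^2 + lam*(x - z) + (rho/2)(x - z)^2
        x = (6.0 - lam + rho * z) / (2.0 + rho)
        # z-update (8): minimize |z| - lam*z + (rho/2)(x - z)^2, i.e. soft-threshold
        v = x + lam / rho
        z = max(abs(v) - 1.0 / rho, 0.0) * (1.0 if v >= 0 else -1.0)
        # dual ascent (9) with step size equal to the penalty parameter rho
        lam += rho * (x - z)
    return x, z, lam

x, z, lam = admm_toy()
```

For this instance the minimizer is x = z = 2.5, where 2(x − 3) + sign(x) = 0, and the iterates approach it geometrically; the dual variable settles at the subgradient of |z| at the optimum.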
The order of the primal updates can be interchanged, that is, we can update z first and then x; however, the variable updated second still requires the updated value of the first. The dual ascent step at iteration t+1 in (9) uses the updated primal variables x^{t+1} and z^{t+1} and ascends with step size equal to the penalty parameter ρ. While the EDP in (2)-(4) belongs to the class of problems that ADMM is designed for, the ADMM algorithm presented above is not a fully decentralized update. This can be seen from the discussion above: each primal variable update requires previously updated primal variables, which means that devices need to receive the most recent updates from all of the devices that updated before them. Furthermore, the dual ascent step (9) requires a centralized coordinator with access to network-wide updated primal variables. In the next section, we introduce a communication network and present a decentralized solution to the EDP utilizing the dual consensus ADMM (DC-ADMM) presented in [2].

B. d-EDP using ADMM

Consider a connected network with a set of nodes corresponding to the devices in the grid N and an edge set E, where the pair of nodes (i, j) belongs to E if i can send information to and receive information from j, i ≠ j, that is, E := {(i, j) : i ≠ j, i ∈ N, j ∈ N}. The neighborhood set of i is the set of agents from which agent i can receive information, N_i := {j ∈ N : (j, i) ∈ E}. We adopt the convention that device i is not a neighbor of itself, that is, i ∉ N_i. We relax the coupling equality constraint (3) of the EDP with the price variable λ to obtain the following Lagrangian,

L({p_i}, {s_i}, λ) = Σ_{i∈N} ( C_i(p_i) + λ^T (p_i − s_i) − λ^T d/N ).
(10)

The dual function for the relaxed EDP is obtained by maximizing the negative of the above Lagrangian with respect to the primal variables, which can be done separately for each device,

g(λ) := Σ_{i∈N} max_{p_i∈Ω_Gi, s_i∈Ω_Si} ( −C_i(p_i) − λ^T (p_i − s_i) + λ^T d/N ).   (11)

We define the local dual function resulting from the maximization of the local variables of i as g_i(λ) := max_{p_i∈Ω_Gi, s_i∈Ω_Si} ( −C_i(p_i) − λ^T (p_i − s_i) ) and rewrite the dual function above as a sum of local dual functions,

g(λ) := Σ_{i∈N} ( g_i(λ) + λ^T d/N ).   (12)

The minimization of (12) with respect to λ yields the optimal price variables, and the optimal primal variables when the original EDP in (2)-(4) has zero duality gap. However, λ is a global variable associated with the equality constraint in (3), and the solution of (12) requires information from all devices. In order to solve the dual problem above in a decentralized manner we introduce
local copies of the price variable; that is, i's local copy of λ is λ_i. Given a connected network, we can then equivalently represent the minimization of (12), min_λ g(λ), in terms of the local copies of the price variable,

min_{λ_1,...,λ_N, {γ_ij}} Σ_{i∈N} ( g_i(λ_i) + λ_i^T d/N )   (13)
s.t. λ_i = γ_ij, λ_j = γ_ij for all j ∈ N_i, i ∈ N,   (14)

where the γ_ij are local auxiliary variables. Note that in a connected network the solution of the above optimization is equivalent to solving min_λ g(λ). Further observe that the above optimization has the form in (5), which implies that we can derive an algorithm using the same arguments as in Section III-A. We first form the augmented Lagrangian for the above problem using the dual variables u_ij and v_ij for the consensus constraints in (14) with penalty constant ρ > 0,

L_ρ(λ_1,...,λ_N, {γ_ij, u_ij, v_ij}) = Σ_{i∈N} ( g_i(λ_i) + λ_i^T d/N ) + Σ_{i∈N} Σ_{j∈N_i} ( u_ij^T (λ_i − γ_ij) + v_ij^T (λ_j − γ_ij) ) + (ρ/2) Σ_{i∈N} Σ_{j∈N_i} ( ‖λ_i − γ_ij‖² + ‖λ_j − γ_ij‖² ).   (15)

Define the set of price variables of all the agents λ := {λ_i}_{i∈N} and the set of all auxiliary variables γ := {γ_ij}_{j∈N_i, i∈N}. When we apply the ADMM steps (7)-(9) to the above augmented Lagrangian, we have the following steps at iteration t:

λ^{t+1} = argmin_λ L_ρ(λ, γ^t, {u_ij^t, v_ij^t})   (16)
γ^{t+1} = argmin_γ L_ρ(λ^{t+1}, γ, {u_ij^t, v_ij^t})   (17)
u_ij^{t+1} = u_ij^t + ρ (λ_i^{t+1} − γ_ij^{t+1})   (18)
v_ij^{t+1} = v_ij^t + ρ (λ_j^{t+1} − γ_ij^{t+1})   (19)

Starting from the auxiliary variable updates (17) and the primal variable updates (16), and using their decomposable structure into Σ_{i∈N} |N_i| and N quadratic subproblems, respectively, [2] argues inductively that the set of updates (16)-(19) simplifies and decouples into the following updates,

y_i^{t+1} = y_i^t + ρ Σ_{j∈N_i} (λ_i^t − λ_j^t)   (20)
λ_i^{t+1} = argmin_{λ_i} g_i(λ_i) + λ_i^T d/N + (y_i^{t+1})^T λ_i + ρ Σ_{j∈N_i} ‖λ_i − (λ_i^t + λ_j^t)/2‖²,   (21)

where we define y_i^t := Σ_{j∈N_i} (u_ij^t + v_ji^t) and the initial dual variables are all zero, u_ij = 0 and v_ij = 0.
As noted in [2], the minimization in (21) is in fact a min-max optimization problem that implicitly includes the maximization over the primal variables p_i and s_i in the dual function g_i(·). When C_i(·) is convex and strong duality holds for the EDP (2)-(4), the min-max problem in (21) admits the following closed-form solution via the minimax theorem in [15],

λ_i^{t+1} = (1/(2|N_i|)) ( Σ_{j∈N_i} (λ_i^t + λ_j^t) + (1/ρ)(p_i^{t+1} − s_i^{t+1} − d/N) − (1/ρ) y_i^{t+1} ),   (22)

(p_i^{t+1}, s_i^{t+1}) = argmin_{p_i∈Ω_Gi, s_i∈Ω_Si} C_i(p_i) + (ρ/(4|N_i|)) ‖ (1/ρ)(p_i − s_i − d/N) − (1/ρ) y_i^{t+1} + Σ_{j∈N_i} (λ_i^t + λ_j^t) ‖².   (23)

Observe that the price variable update in (22) requires the updated primal variables of iteration t+1 from (23). Hence device i first updates the primal variables and then the price variable. The updates for device i are summarized in Algorithm 1, which solves the dual EDP in a fully decentralized manner. The initialization consists of setting the dual variables y_i to zero; the other variables p_i, s_i, λ_i can be set arbitrarily. At iteration t the algorithm starts by sending the local price variable λ_i^t and observing the neighbors' local price variables λ_{N_i}^t := {λ_j^t : j ∈ N_i}. The algorithm is synchronous in that device i requires the prices of all of its neighbors from iteration t. In step 2, device i averages the neighbors' price variables to update its dual variable via (20). Along with the observed price variables λ_{N_i}^t, the dual variable y_i^{t+1} is used to update the primal power and storage variables following (23) in step 3. Finally, in step 4, device i updates its local price variable λ_i^{t+1} using all of the observed and updated variables according to (22). The algorithm continues by moving

Algorithm 1: d-EDP updates at device i
Require: Initialize primal variables p_i, s_i, λ_i and dual variables y_i = 0. Set t = 0.
Require: Determine a stopping condition, e.g., a maximum number of steps T ∈ N_+.
while stopping condition not reached do
  [1] Transmit λ_i^t and receive λ_j^t from all j ∈ N_i.
  [2] Compute y_i^{t+1} using (20).
  [3] Update the primal variables p_i^{t+1}, s_i^{t+1} using (23).
  [4] Compute the price λ_i^{t+1} using (22).
  [5] Set t = t + 1.
end while
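To make steps [1]-[5] concrete, the following toy sketch (a hypothetical setup, not the paper's implementation) runs the d-EDP updates for one time step (H = 1), no storage (s_i = 0) and unconstrained quadratic costs C_i(p) = a_i p², for which the primal step (23) and price step (22) have closed forms.

```python
# Hedged sketch of Algorithm 1 under simplifying assumptions: H = 1, s_i = 0,
# C_i(p) = a_i * p^2 with no box constraints, so every update is closed-form.
# Network, costs, and demand below are illustrative, not from the paper.

def d_edp(a, neighbors, d, rho=1.0, iters=200):
    N = len(a)
    lam = [0.0] * N          # local price copies lambda_i
    y = [0.0] * N            # dual variables of the consensus constraints
    p = [0.0] * N            # power dispatch variables
    for _ in range(iters):
        lam_old = lam[:]     # synchronous updates: all devices use iteration-t prices
        for i in range(N):
            ni = len(neighbors[i])
            # Step 2: dual ascent on the consensus multipliers, eq. (20)
            y[i] += rho * sum(lam_old[i] - lam_old[j] for j in neighbors[i])
            # Step 3: closed-form primal update for quadratic cost, eq. (23)
            S = sum(lam_old[i] + lam_old[j] for j in neighbors[i])
            c = -d / (N * rho) - y[i] / rho + S
            p[i] = -c / (4.0 * a[i] * ni + 1.0 / rho)
            # Step 4: local price update, eq. (22)
            lam[i] = (p[i] / rho + c) / (2.0 * ni)
    return p, lam

# Two identical generators on a two-node network sharing demand d = 4.
p, lam = d_edp(a=[1.0, 1.0], neighbors=[[1], [0]], d=4.0)
```

For this symmetric instance the optimum is p_i = 2 with common marginal price λ_i = −4 (the sign follows the Lagrangian convention in (10)); the iterates contract toward it and supply matches demand in the limit.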
the iteration step forward. The derivation above is worth retracing. The primal EDP in (2)-(4) contains a power balance constraint that couples the variables of all the devices. We therefore consider a decentralized solution operating on the relaxation of the power balance constraint. The relaxed problem in (10), i.e., the dual EDP, entails a global price variable λ associated with the price of power imbalance. We then write an equivalent representation of the dual EDP as a dual consensus EDP in (13)-(14), in which each device carries a local copy of the price. Applying the ADMM algorithm to the dual consensus EDP yields a fully decentralized solution.

C. Convergence Properties of the d-EDP

By the convergence theorem in [2], the iterates of Algorithm 1 asymptotically converge to the optimal variables when the problem is convex, the network (N, E) is connected, and strong duality holds. For the centralized EDP in (2)-(4), we assume that the cost C_i(p_i) is convex in p_i ∈ Ω_Gi and that the network is connected. A sufficient condition for strong duality is Slater's condition, which requires the existence of a strictly feasible point; for a set of linear equality and inequality constraints, this relaxes to the existence of a feasible point. For the EDP in (2)-(4) the constraints are all affine. Furthermore, we assume that the energy system is connected to the grid, which can satisfy demand at all times. This assumption ensures that there exists a feasible point {p_i, s_i}_{i∈N} satisfying the power balance constraint (3) and the device-specific constraints (4). Consequently, the assumptions of the convergence theorem in [2] are all met, and the iterates of Algorithm 1 converge to the optimum, that is, p_i^t → p_i* and s_i^t → s_i*. Moreover, the d-EDP solution yields the optimal local prices along with the optimal power and storage profiles, that is, λ_i^t → λ*.
The local optimal prices can then be used to design smart pricing policies, such as real-time pricing for demand response management [16].

IV. NUMERICAL IMPLEMENTATION OF THE D-EDP

We consider a microgrid with a single battery (B), a diesel generator (DG), a photovoltaic (PV) generator and a connection to the grid (G). Including the grid, the number of devices is N = 4. The operation cycle is 24 hours with hourly scheduling decisions, that is, H = 24. The battery has a storage capacity of q_B^max kWh. We assume that charging or discharging the battery has no cost and that the round-trip efficiency is 100%. The diesel generator has maximum power p_DG^max. The cost of dispatching p_DG(h) at time h is zero when the power dispatched is zero; otherwise it is a linear function of p_DG(h) > 0 with slope a = 0.05 $/kW and a positive y-intercept b, that is, C_DG(p_DG(h)) = b·1(p_DG(h) > 0) + a·p_DG(h), where 1(·) is an indicator function [17]. Note that the cost function is discontinuous when the intercept is strictly positive, b > 0. In this case the problem is formulated as a mixed-integer program, which is not convex, that is, it violates the convergence assumptions in Section III-C. For the numerical analysis we allow the convexity assumption to be violated and set b = 0.05 $. The PV generation profile p_PV^max is determined a priori based on collected data. This profile determines the amount of PV power available for dispatch at each hour and is shown by the dashed line in Fig. 4. There is no cost for using PV power. We assume that the grid can supply all the power that the microgrid may require. The grid electricity cost is set according to a time-of-use rate tariff: the baseline price of 0.03 $/kW jumps to 0.1 $/kW at time h = 8, that is, price_G(h) = 0.03 $/kW for h < 8 and 0.1 $/kW for h ≥ 8. Note that the grid is cheaper than the DG until h = 8, and afterwards it is cheaper provided the power dispatched is less than 1 kW.
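The grid/diesel crossover implied by this tariff can be checked with a short script; note that the price values a = b = 0.05 and the tariff levels are reconstructed from the (partly corrupted) figures in the text above, so treat them as assumptions.

```python
# Hedged sketch: grid vs. diesel dispatch cost under the time-of-use tariff.
# Prices follow the reconstructed values in the text: DG slope a = 0.05 $/kW,
# intercept b = 0.05 $, grid price 0.03 $/kW before h = 8 and 0.1 $/kW after.

A, B = 0.05, 0.05                      # DG cost: C_DG(p) = B*1(p > 0) + A*p

def cost_dg(p):
    return (B if p > 0 else 0.0) + A * p

def cost_grid(p, h):
    price = 0.03 if h < 8 else 0.1     # time-of-use tariff with a jump at h = 8
    return price * p
```

With these numbers, before h = 8 the grid is always cheaper than the DG, and after h = 8 the two costs cross exactly at a dispatch of 1 kW, matching the claim above.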
The demand profile is determined from real load data of a commercial building over the operation cycle and is shown by the solid line in Fig. 4. All devices know their individual specifications and the forecasted demand profile. The d-EMS has been implemented in the Java Agent Development Framework (JADE), a platform for decentralized applications in Java [18]. We test the convergence properties of the algorithm on four generic networks, namely the line, star, ring and fully connected networks depicted in Fig. 1. Fig. 2 shows the total cost convergence for each communication network setup. The results show that the line network is the slowest to converge and the fully connected network is the fastest. This indicates that the convergence rate degrades with increasing network diameter. All network structures converge in less than one thousand iterations to a near-optimal cost computed by solving the centralized EDP. We further observe in Fig. 3 that near-feasibility is achieved at least an order of magnitude earlier than convergence to the optimal value, in less than 50 iterations. The computation time of 150 iterations is less than one minute. While the network structure influences the convergence rate, a decentralized solution becomes more computationally effective than a centralized solution as the number of devices in the network grows; that is, a d-EMS scales better than a centralized solver [10]. Fig. 4 shows the optimal power and storage profiles of the devices. In the first seven hours the source of the total dispatch is the grid. Note that the grid power exceeds the demand in this period because the extra power is used to charge the battery. The reason for charging the battery is the increase in the grid price at hour eight. Note that the demand between the hours h = 8 and h = 10 cannot
fully be matched by the available PV generation p_PV^max, indicated by the dashed line. During these hours, PV and stored power are used together to meet the demand. After hour h = 11 the available PV power is used to charge the battery. The power stored between hours h = 12 and h = 19 is used to meet demand in the final hours, h = 20 to h = 24, together with DG power. Note here that even though DG power is more expensive than grid power

Fig. 1. Generic line, star and ring networks of four devices. Each device can only communicate with its neighbors.

Fig. 2. Convergence to the optimal cost for the generic networks. Devices apply the d-EDP in Algorithm 1. The line network converges to the optimal value the slowest, while the fully connected network is the fastest to converge. All networks converge within 1000 iterations.

Fig. 3. Evolution of the supply-demand infeasibility gap. The x-axis is the iteration number t and the y-axis is Σ_h |Σ_{i∈N} (p_i^t(h) − s_i^t(h)) − d(h)|. All network types achieve feasibility within 50 iterations, which implies that feasibility is achieved an order of magnitude faster than convergence to optimality.

Fig. 4. Optimal power and storage profiles for the device network in Fig. 1. The solid line represents the demand profile. The dashed line is the PV availability over the horizon, p_PV^max. The power dispatch bars are color-coded according to the devices (G, PV, DG and B) as shown in the legend. In the first 7 time slots, grid power is used to charge the battery. The stored power is used for matching demand together with the available PV power in times h = {8, 9}, when the grid price is increased. Between time slots 12 ≤ h ≤ 18 the PV power is used to supply demand and to store power in the battery. The stored power is then discharged when PV power is not sufficient to meet the demand after h = 20. The diesel generator dispatches power after time h = 21.
for the amounts required at times h = 20 to h = 24, the algorithm uses DG power to balance power. This is due to the fact that the storage capacity of the battery device is not enough to meet the demand from h = 20 to h = H. This implies that the algorithm converged to a suboptimal point, which can happen when the problem is not convex.

Fig. 5. Receding horizon control for the operating horizon H. At time h = 0 the system solves the EDP in (2)-(4) for the optimal power and storage schedules p_i* and s_i*. The devices then apply the first element of the optimal schedule, p_i*(1) and s_i*(1). In the next time step the devices plan for the next H time steps, and the process continues.

The optimal schedule in Fig. 4 is obtained based on the load profile d and the PV generation p_PV^max forecasted at hour h = 0. As time progresses, these forecasts might be updated for the remaining horizon. Furthermore, there might be changes to the device network as new devices are added or devices leave due to failure or maintenance. We would like the d-EMS to be adaptable to changes in predictions and robust to changes in the device network. In order to incorporate prediction changes, we use the receding horizon control method, which allows for forecast updates at each step of the planning horizon. We further propose a communication protocol in the next section that allows for a decentralized implementation of the receding horizon. In Section VI, we provide a decentralized protocol that allows the system to handle device failures and additions on the fly.

V. DECENTRALIZED RECEDING HORIZON CONTROL

Given a time horizon H, a centralized receding horizon control for the EDP solves the optimization in (2)-(4) for the
whole horizon to obtain the power and storage profiles, but only applies the first element of the optimal profile at step h = 0. Before h = 1, the EDP is solved for hours h = 1 to H + 1, the scheduled element for h = 1 is applied, and the horizon is shifted forward by one step again. This process is shown schematically in Fig. 5. To implement a decentralized receding horizon control, we use the d-EDP algorithm at each hour, and we additionally require a communication protocol that synchronizes the devices for starting the updates of the next time step. Note that the updates in the d-EDP require only local forecasts and device specifications; hence each device can update its forecasts locally. We assume the demand profile is forecasted by a load device and communicated to all devices. The proposed communication protocol for device i is detailed in Algorithm 2. Each device starts the algorithm by updating its local forecasts for the operating horizon. Then each device sends a request to start planning for the horizon in step 2 and waits to receive requests from all of its neighbors in step 3. When all neighboring devices have sent their requests, device i starts the d-EDP (Algorithm 1) in step 4. Once Algorithm 1 is complete, device i applies the first element of the scheduled power and storage profiles in step 5. When the power is dispatched, the device advances its time in step 6 and goes back to step 1 to start planning for the current time horizon.

Algorithm 2: Decentralized receding horizon at device i
Require: Initialize time h = 0.
Require: Initial demand forecast profile d and local forecasts, if applicable.
loop
  [1] Update local forecasts for h to H + h.
  [2] Request neighbors to start planning.
  [3] Wait until all neighbors have also sent their requests.
  [4] Run the d-EDP (Algorithm 1).
  [5] Apply the first elements p_i(h) and s_i(h).
  [6] Advance time, h = h + 1.
end loop

VI. ADAPTABILITY TO NETWORK CHANGES

Changes to the network can occur when a device leaves or a new device joins.
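Stepping back to Algorithm 2, its control structure (ignoring the synchronization messaging of steps [2]-[3]) can be sketched as a plain receding horizon loop; the `forecast` model and the single-generator `solve` stub below are hypothetical stand-ins for the d-EDP, not part of the paper.

```python
# Hedged sketch of the receding horizon loop in Algorithm 2, with stubs:
# `forecast` returns predicted demand for the next H hours, and `solve`
# stands in for the d-EDP (here a single generator that follows the forecast).

def receding_horizon(realized_demand, H, forecast, solve):
    applied = []                      # dispatch actually executed, hour by hour
    for h in range(len(realized_demand)):
        d_hat = forecast(h, H)        # step [1]: update forecasts for h..h+H
        schedule = solve(d_hat)       # step [4]: plan over the whole horizon
        applied.append(schedule[0])   # step [5]: apply only the first element
    return applied                    # step [6]: time advances, horizon recedes

demand = [3.0, 4.0, 2.0, 5.0]
forecast = lambda h, H: (demand + demand)[h:h + H]   # toy periodic forecast
solve = lambda d_hat: list(d_hat)                    # toy dispatch = forecast
dispatch = receding_horizon(demand, H=3, forecast=forecast, solve=solve)
```

With a perfect forecast, the applied first elements reproduce the realized demand hour by hour; with imperfect forecasts, replanning at every step is what absorbs the prediction errors.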
A device removal implies that the node associated with the device and all edges to and from that node are removed. When a device is added, we assume that the new device connects to at least one existing device. The communication protocols that handle removal and addition differ, as the removal of a node can be more critical than an addition. When a device is removed, all devices interrupt their operations and update their network information, that is, their neighborhood list N_i and the total number of devices N. They then restart Algorithm 2 from step 1. When a device is added, the devices update their network information at the beginning of the next time step, that is, after advancing time in step 6. This means that the new device is admitted to the system in the next time step. The new device updates its network information upon connection to the network, starts from step 1, and waits at step 3 for its neighbors to send requests. For a centralized solution to the EDP, a device removal or addition implies reconfiguring the optimization problem by removing or adding variables, constraints and cost functions. Reformulating the optimization problem can be cumbersome when the network changes frequently. In contrast, the steps of the decentralized receding horizon method in Algorithm 2 require only that each device knows its neighbor list and the total number of devices in the system.

Fig. 6. A six-device communication network composed of two storage devices, two diesel generators, one PV and one grid. There are two scenarios for the evolution of the communication network: in the first, the communication network remains the same; in the second, the storage device B0 is removed from the network at h = 20.

A. Device failure numerical example

We consider a microgrid containing two batteries (B0, B1), two diesel generators (DG0, DG1), one PV generator and the grid (G); see Fig. 6. The specifications of each device and the load profile are the same as in Section IV.
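The removal bookkeeping described above touches only the neighbor lists and the device count. A minimal sketch (the adjacency layout and the edge set below are hypothetical, since Fig. 6's exact topology is not recoverable here) might look like:

```python
# Hedged sketch of the network-information update on device removal (Sec. VI).
# The dict-of-neighbor-sets representation and the edges are assumptions; the
# protocol itself only needs each device's neighbor list N_i and the count N.

def remove_device(neighbors, failed):
    """Drop `failed` and every edge to/from it; return the updated network."""
    updated = {i: nbrs - {failed} for i, nbrs in neighbors.items() if i != failed}
    return updated, len(updated)       # new neighbor lists and new total N

# Illustrative six-device network: after B0 fails, every remaining device
# updates N_i and N, then restarts Algorithm 2 from step 1.
net = {
    "G": {"PV", "DG0"}, "PV": {"G", "B0"}, "DG0": {"G", "B0", "B1"},
    "B0": {"PV", "DG0", "B1"}, "B1": {"B0", "DG0", "DG1"}, "DG1": {"B1"},
}
net2, N2 = remove_device(net, "B0")
```

Note that the illustrative edge set is chosen so the residual network stays connected, which Section III-C requires for convergence of the restarted d-EDP.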
We compare the results of the d-EMS between two scenarios. In the first scenario, all of the devices run without any failures. In the second, storage device B0 fails during step [4] of Algorithm 2 at time h = 20. Fig. 7(a) depicts the final generation and storage profiles of the no-failure scenario, whereas Fig. 7(b) shows the profiles of the scenario with failure. We observe that the system adapts to the failure by increasing the state of charge levels of the remaining battery. Note that when there is no failure, the generation in 1 ≤ h ≤ 19 is used to charge batteries, which are then discharged at times 20 ≤ h ≤ 24 to match demand. Observe that when there is a failure, the storage capacity of the remaining battery is not sufficient to match all of the demand for 20 ≤ h ≤ 24. As a result, the system uses the diesel generators to match demand.
Fig. 7. Optimal power and storage profiles (power in kWh against time in hours) for the device network in Fig. 6 under the two scenarios, with traces for G, DG0 + DG1, B1, B0, and the demand. In (a), the network remains the same over the horizon. In (b), the storage device B0 fails and the system operates with a single storage device. Compared to (a), in (b) the available storage capacity q_B1^max limits the system's ability to balance demand for 20 ≤ h ≤ 24 using solely stored renewable power. As a result, diesel generator power is needed for balancing power.

VII. CONCLUSION

We proposed a decentralized energy management system (d-EMS) that solves an economic dispatch problem with generator and storage units over the planning horizon in a fully decentralized manner, using an ADMM algorithm in which each device iteratively makes decisions based on its specifications, the demand profile prediction, and its neighbors' price variables. Based on existing results, we argued that the proposed algorithm converges to the optimal solution under the assumptions that the network is connected and each device has a convex cost function. We provided numerical experiments that associate the convergence rate with the structure of the network. The proposed d-EMS further implements a receding horizon control via a communication protocol that allows for updates to the demand profile forecasts and to the device specifications of the EDP as time progresses. Furthermore, we incorporated a protocol into the d-EMS that handles changes to the device network on the fly. We showed that the d-EDP algorithm only requires the list of neighboring devices and the total number of devices to reinitialize with respect to changes in the device network. Finally, we provided a numerical example where a storage device fails and the system adapts by utilizing the remaining storage device for the remainder of the horizon.

REFERENCES

[1] A. Hooshmand, B. Asghari, and R. K. Sharma.
Experimental demonstration of a tiered power management system for economic operation of grid-tied microgrids. IEEE Trans. Sustainable Energy, 5(4), Oct. 2014.
[2] T. Chang, M. Hong, and X. Wang. Multi-agent distributed optimization via inexact consensus ADMM. arXiv preprint, 2014.
[3] A. Cherukuri, S. Martinez, and J. Cortes. Distributed, anytime optimization in power-generator networks for economic dispatch. In American Control Conference (ACC), June 2014.
[4] L. Xiao and S. Boyd. Optimal scaling of a gradient method for distributed resource allocation. Journal of Optimization Theory and Applications, 129(3), 2006.
[5] A. Simonetto, T. Keviczky, and M. Johansson. A regularized saddle-point algorithm for networked optimization with resource allocation constraints. In IEEE 51st Annual Conference on Decision and Control (CDC), December 2012.
[6] S. Kar and G. Hug. Distributed robust economic dispatch in power systems: A consensus + innovations approach. In IEEE Power and Energy Society General Meeting, pages 1-8, July 2012.
[7] Z. Zhang and M. Y. Chow. Incremental cost consensus algorithm in a smart grid environment. In IEEE Power and Energy Society General Meeting, pages 1-6, July 2011.
[8] V. Loia and A. Vaccaro. Decentralized economic dispatch in smart grids by self-organizing dynamic agents. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 44(4), 2014.
[9] M. Zhu and S. Martinez. On distributed convex optimization under inequality and equality constraints. IEEE Transactions on Automatic Control, 57(1), 2012.
[10] M. Kraning, E. Chu, J. Lavaei, and S. Boyd. Dynamic network energy management via proximal message passing. Foundations and Trends in Optimization, 1(2), 2013.
[11] I. Schizas, A. Ribeiro, and G. Giannakis. Consensus in ad hoc WSNs with noisy links - Part I: Distributed estimation of deterministic signals. IEEE Trans. Signal Process., 56(1), January 2008.
[12] G. Mateos, J. A. Bazerque, and G. B. Giannakis.
Distributed sparse linear regression. IEEE Trans. Signal Process., 58(10), 2010.
[13] W. Shi, Q. Ling, K. Yuan, and G. Wu. On the linear convergence of the ADMM in decentralized consensus optimization. IEEE Trans. Signal Process., 62(7), 2014.
[14] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1), 2011.
[15] D. P. Bertsekas, A. Nedic, and A. E. Ozdaglar. Convex Analysis and Optimization. Athena Scientific, Cambridge, Massachusetts, 2007.
[16] C. Eksin, H. Deliç, and A. Ribeiro. Distributed demand side management for heterogeneous rational consumers in smart grids with renewable sources. In Proc. Int. Conf. Acoustics Speech Signal Process., Florence, Italy, May 2014.
[17] A. Hooshmand, B. Asghari, and R. Sharma. Efficiency-driven control of dispatchable sources and storage units in hybrid energy systems. In American Control Conference (ACC), June 2014.
[18] F. L. Bellifemine, G. Caire, and D. Greenwood. Developing Multi-Agent Systems with JADE. John Wiley & Sons, 2007.
More informationISM206 Lecture, April 26, 2005 Optimization of Nonlinear Objectives, with Non-Linear Constraints
ISM206 Lecture, April 26, 2005 Optimization of Nonlinear Objectives, with Non-Linear Constraints Instructor: Kevin Ross Scribe: Pritam Roy May 0, 2005 Outline of topics for the lecture We will discuss
More information