
Clustering in Distributed Incremental Estimation in Wireless Sensor Networks

Sung-Hyun Son, Mung Chiang, Sanjeev R. Kulkarni, Stuart C. Schwartz

Abstract: Energy efficiency, low latency, high estimation accuracy, and fast convergence are important goals in distributed incremental estimation algorithms for sensor networks. One approach that adds flexibility in achieving these goals is clustering. In this paper, the framework of distributed incremental estimation is extended by allowing clustering amongst the nodes. Among the observations made is that a scaling law emerges: the estimation accuracy improves proportionally with the number of clusters. The distributed parameter estimation problem is posed as a convex optimization problem involving a social cost function and data from the sensor nodes. An in-cluster algorithm is then derived using the incremental subgradient method. Sensors in each cluster successively update a cluster parameter estimate based on local data, which is then passed on to a fusion center for further processing. We prove convergence results for the distributed in-cluster algorithm, and provide simulations that demonstrate the benefits of clustering for least squares and robust estimation in sensor networks.

Index Terms: Distributed estimation, optimization, incremental subgradient method, clustering, wireless sensor networks.

This research was supported in part by the ONR under grant N , the Army Research Office under grant DAAD , Draper Laboratory under IR&D 600 grant DL-H-54663, and the National Science Foundation under grant CCR . Portions of this work were presented at the 2005 IEEE International Conference on Wireless Networks, Communications, and Mobile Computing, Maui, HI, USA, June 13-17, 2005. The authors are with the Department of Electrical Engineering, Princeton University, Princeton, NJ, USA. {sungson, chiangm, kulkarni, stuart}@princeton.edu.

I. INTRODUCTION

A wireless sensor network (WSN) is comprised of a fusion center and a set of geographically distributed sensor nodes. The fusion center provides a central point in the network to consolidate the sensor data, while the sensor nodes collect information about a state of nature. In our scenario, the fundamental objective of a WSN is to reconstruct that state of nature, e.g., to estimate a parameter, given the sensor observations. Depending on the application and the resources of the WSN, many possible algorithms exist that solve this parameter estimation problem.

One failsafe approach that accomplishes this objective is the centralized approach, in which all sensor nodes send their observations to the fusion center and the fusion center computes the parameter estimate. The centralized scheme allows the most information to be present when making the inference. However, its main drawback is the drain on the energy resources of each sensor in the WSN [1]. In an energy-constrained WSN, the energy expenditure of transmitting all the observations to the fusion center might be too costly, making the method highly energy inefficient. In our application, the purpose of a WSN is to make an inference, not to collect all the sensor observations.

Another approach avoids the fusion center altogether and allows the sensors to collaboratively make the inference. This approach is referred to as the distributed in-network scheme, recently proposed by Rabbat and Nowak [2]. First, consider a path that passes through all the sensor nodes and visits each node only once. The path hops from one neighbor node to another until all the sensor nodes are covered. Instead of passing the data along the sensor node path, a parameter estimate is passed from node to node. As the parameter estimate passes through each node, the node updates the estimate using its own local observations. The distributed in-network approach significantly reduces the transmission energy required by the network. However, it has drawbacks in terms of latency, accuracy, and convergence. While the centralized approach takes one iteration to have access to all the data, the distributed approach takes n iterations to see all the data captured by the network, where n is the number of sensors. Also, the parameter estimate of the distributed in-network algorithm is less accurate than the parameter estimate of the centralized algorithm.
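To make the in-network scheme concrete, here is a minimal Python sketch of one cycle along the node path. This is our illustration, not the authors' implementation; the least-squares local gradient and the function names are assumptions.

```python
import numpy as np

def local_subgradient(theta, x_i):
    # Least-squares local cost f_i(x_i, theta) = mean((x_i - theta)^2);
    # its derivative with respect to theta is 2 * mean(theta - x_i).
    return 2.0 * np.mean(theta - x_i)

def in_network_pass(theta, data, alpha):
    """One cycle of the distributed in-network scheme: the estimate
    visits every sensor once and is nudged by each local subgradient."""
    n = len(data)
    for x_i in data:                      # hop from neighbor to neighbor
        theta = theta - (alpha / n) * local_subgradient(theta, x_i)
    return theta

# Toy run: 20 sensors, 10 noisy measurements each, true parameter 5.0.
rng = np.random.default_rng(0)
data = [5.0 + rng.normal(size=10) for _ in range(20)]
theta = 0.0
for _ in range(50):                       # repeated cycles over the path
    theta = in_network_pass(theta, data, alpha=0.4)
print(theta)                              # approaches the least-squares estimate
```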

In terms of the number of iterations, the distributed in-network scheme also converges more slowly than the centralized scheme. The distributed in-network scheme remedies the issue of energy inefficiency, but suffers in terms of these other performance parameters.

In this paper, we consider a hybrid of the two aforementioned approaches. While the former relies heavily on the fusion center and the latter eliminates the fusion center altogether, we allow the fusion center to interact minimally with the sensor nodes. We formulate a distributed in-cluster approach in which the nodes are clustered and there exists a path within each cluster that passes through each node only once, as shown in Fig. 1. While the precise mathematical formulation of the algorithm is stated in Section IV, roughly speaking, each cluster operates similarly to the distributed in-network scheme. Within each cluster, every sensor node updates the cluster's parameter estimate based on its own local observations; hence, each cluster has its own parameter estimate. The sensor node that initiates the algorithm is designated to be the cluster head. After completion of all the iterations within each cluster, the parameter estimate of each cluster is passed to the fusion center and averaged. The fusion center then announces the average parameter value back to the cluster heads to start another set of cluster iterations if necessary.

The purpose of clustering is to address the inherent inflexibility of both the centralized and the in-network algorithms. For example, if the WSN application calls for the most accurate estimate regardless of the communication costs, then the centralized algorithm suffices. If the WSN demands the most energy-efficient algorithm irrespective of the other performance parameters like latency, accuracy, and convergence speed, then the distributed in-network algorithm is most suitable. However, given a WSN application with specific accuracy demands or energy constraints, are we able to develop an algorithm tailored to those desired performance levels? The distributed in-cluster algorithm makes this feasible, since the number of clusters, or equivalently the size of the clusters, adds another dimension to the algorithm development process: we can adjust the cluster size to accommodate the WSN requirements.

Throughout the rest of the paper, we compare the distributed in-cluster, distributed in-network, and centralized algorithms using the following criteria:

- Energy efficiency
- Latency
- Estimation accuracy
- Flexible tradeoff among accuracy, robustness, and speed of convergence

We show that the proposed distributed in-cluster algorithm adds a flexible tradeoff among all the aforementioned criteria. Specifically, due to clustering, we are able to control the estimation accuracy, since the residual error scales as a function of the number of clusters. The inclusion of clusters improves the scaling behavior of both the estimation accuracy and the latency. We use the centralized and the distributed in-network algorithms as extreme cases of maximal and minimal energy usage, respectively. For the special case where the WSN has sqrt(n) clusters with sqrt(n) sensors per cluster, we show that the transport cost of the distributed in-cluster algorithm has the same order of magnitude as that of the distributed in-network algorithm, while the latency and accuracy improve by a factor of sqrt(n).

The organization of the paper is as follows. Previous work in the areas of distributed incremental estimation and clustering is discussed in Section II. We formulate the problem and provide two concrete applications in Section III. The distributed in-cluster algorithm is precisely formulated in Section IV, and convergence analysis is presented in Section V. Section VI surveys the benefits of clustering on the performance parameters. The analytical results are verified in Section VII by simulations involving two applications, least squares and robust estimation, followed by the conclusion in Section VIII.

II. PREVIOUS WORK

The ideas of incremental subgradient methods and clustering applied to distributed estimation are prevalent in the literature. Incremental subgradient methods were first studied by Kibardin [3] and then, more recently, by Nedić and Bertsekas [4]. Rabbat and Nowak [2] applied the framework of [4] to address the issue of energy consumption in distributed estimation in WSNs. To further save on energy expenditure, Rabbat and Nowak [5] also implemented a quantization scheme for distributed estimation, showing that quantization does not affect the rate of convergence of the incremental algorithms.

Clustering schemes have been implemented throughout the area of WSNs to provide a hierarchical structure that minimizes the energy spent in the system (see [6] and references therein). The purpose of clustering is either to minimize the number of hops the data needs to reach a destination or to provide a fusion point within the network that consolidates the amount of data sent. In our work, clustering is applied to the framework of distributed incremental estimation to provide a better scaling law for estimation accuracy and latency in relation to the number of clusters.

III. PROBLEM FORMULATION

Consider a WSN with n sensors, each taking m measurements. The parameter estimation objective of the WSN can be viewed as a convex optimization problem if the distortion measure between the parameter estimate and the data is convex. The problem is

    minimize    f(x, θ)
    subject to  θ ∈ Θ                                                      (1)

where f : R^{nm+1} → R is a convex cost function, x ∈ R^{nm} is the vector of all the observations collected by the WSN, θ is a scalar, and Θ is a nonempty, closed, and convex subset of R. Note that x is a constant while θ is the variable.

One method to decompose Problem (1) into a distributed optimization problem is to assume that the cost function has an additive structure. The additive property states that the social cost function given all the WSN data, f(x, θ), can be expressed as the normalized sum of individual cost functions given only individual sensor data, f_i(x_i, θ). Hence, the problem becomes

    minimize    f(x, θ) = (1/n) Σ_{i=1}^{n} f_i(x_i, θ)
    subject to  θ ∈ Θ                                                      (2)

where f_i(x_i, θ) : R^{m+1} → R is a convex local cost function for the ith sensor, using only its own measurement data x_i ∈ R^m.
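As a quick sanity check on the additive decomposition in (2), the following sketch (ours; least squares stands in for a generic convex f_i) verifies that the social cost equals the average of the local costs:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 5
x = rng.normal(size=(n, m))          # x_i in R^m for each of the n sensors
theta = 0.7

def f_i(x_i, theta):
    # Local convex cost for sensor i (least-squares distortion).
    return np.mean((x_i - theta) ** 2)

# Social cost f(x, theta) = (1/n) sum_i f_i(x_i, theta), per Eq. (2).
f_global = np.mean((x - theta) ** 2)                  # all data at once
f_decomposed = np.mean([f_i(xi, theta) for xi in x])  # average of local costs
assert np.isclose(f_global, f_decomposed)
```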

Although the additive property does not hold for general cost functions, two important applications in estimation satisfy it: least squares estimation and robust estimation with the Huber loss function.

A. Least Squares Estimation

The simplest estimation procedure is least squares estimation. For the classical least squares estimation problem, the distortion measure is f(x, θ) = ‖x - θ‖², where ‖·‖ is the Euclidean norm and θ is subtracted componentwise. Clearly, the least squares distortion measure is convex and additive. Hence, the optimization problem is formulated as follows:

    minimize    (1/n) Σ_{i=1}^{n} { (1/m) Σ_{p=1}^{m} (x_i^p - θ)² }
    subject to  θ ∈ Θ                                                      (3)

where f_i(x_i, θ) = (1/m) Σ_{p=1}^{m} (x_i^p - θ)² and x_i^p denotes the pth entry of the vector of observations from sensor node i.

The beauty of least squares estimation lies in its simplicity, but the technique suffers greatly in terms of accuracy if some of the measurements are not as accurate as the others. If some measurements have higher variances than other measurements, the least squares inference procedure does not take this effect into account; the procedure is highly sensitive to large deviations. To make the inference procedure more robust to these types of deviations, the following robust estimation procedure is often used.

B. Robust Estimation

Another practical application is the robust estimation problem, which has the following form:

    minimize    (1/n) Σ_{i=1}^{n} { (1/m) Σ_{p=1}^{m} f_H(x_i^p, θ) }
    subject to  θ ∈ Θ                                                      (4)

where

    f_H(x_i^p, θ) = (x_i^p - θ)² / 2,           if |x_i^p - θ| ≤ γ
                    γ|x_i^p - θ| - γ² / 2,      if |x_i^p - θ| > γ          (5)

and γ ≥ 0 is the Huber loss function constant [7]. Note that f_i(x_i, θ) = (1/m) Σ_{p=1}^{m} f_H(x_i^p, θ).
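The local costs in (3) and (5) translate directly into code. Below is our sketch (vectorized with NumPy; function names are ours), including the Huber gradient, whose clipping to [-γ, γ] is what later justifies setting C₀ = γ in Section VII-B:

```python
import numpy as np

def ls_local_cost(x_i, theta):
    # f_i(x_i, theta) = (1/m) sum_p (x_i^p - theta)^2, as in Eq. (3).
    return np.mean((x_i - theta) ** 2)

def huber_local_cost(x_i, theta, gamma):
    # f_i(x_i, theta) = (1/m) sum_p f_H(x_i^p, theta), with f_H from Eq. (5):
    # quadratic within gamma of theta, linear (slope gamma) outside.
    r = np.abs(x_i - theta)
    quad = 0.5 * r ** 2
    lin = gamma * r - 0.5 * gamma ** 2
    return np.mean(np.where(r <= gamma, quad, lin))

def huber_local_gradient(x_i, theta, gamma):
    # d f_i / d theta; each per-measurement term is clipped to [-gamma, gamma],
    # so the local gradients are bounded by gamma in magnitude.
    return np.mean(np.clip(theta - x_i, -gamma, gamma))
```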

The purpose behind robust estimation is to introduce a new distortion measure that puts more weight on good measurements and less weight on, or even discards, bad measurements. The parameter γ sets the threshold for the measurement values around the parameter estimate θ: values within a γ-range of θ are considered good measurements, and values outside a γ-range of θ are considered bad measurements. As γ → ∞, robust estimation reduces to least squares estimation.

IV. DISTRIBUTED IN-CLUSTER ALGORITHM

To solve a convex optimization problem like (2), the most common method is gradient descent. Given any starting point θ̂ ∈ dom f, update θ̂ by descending along the gradient of f. To formalize this procedure, let

    θ̂_new = θ̂_old + α Δθ̂,                                                 (6)

where α is the step size and Δθ̂ = -∇f. The convexity of the function f guarantees that a local minimum is a global minimum. However, if the function f is not differentiable, a subgradient can be used. A subgradient of a convex function f at a point y is any vector g such that f(x) ≥ f(y) + (x - y)ᵀ g for all x. For a differentiable function, the subgradient is just the gradient.

Along with convexity, if the cost function has an additive structure, a variant of the subgradient method can be used: the incremental subgradient method [8]. The key idea of the incremental subgradient algorithm is to sequentially take steps along the subgradients of the marginal functions f_i(x_i, θ) instead of taking one large step along the subgradient of f(x, θ). In doing so, the parameter estimate is adjusted by the subgradient of each individual cost function given only its individual observations. Following this procedure leads to the distributed in-network algorithm [2], whose convergence results follow directly from the incremental subgradient method.
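For intuition about subgradient steps on a nondifferentiable cost, consider the absolute-value distortion |θ - x^p|, which is convex but not differentiable at its kink: there, any g in [-1, 1] is a valid subgradient. The sketch below (ours, with Θ assumed to be an interval [lo, hi]) takes one projected subgradient step:

```python
import numpy as np

def subgradient_abs(theta, x_p):
    # Subgradient of |theta - x_p|: the sign away from the kink,
    # and an arbitrary admissible value (here 0) at the kink itself.
    return float(np.sign(theta - x_p))

def projected_subgradient_step(theta, x_p, alpha, lo, hi):
    theta = theta - alpha * subgradient_abs(theta, x_p)
    return min(max(theta, lo), hi)     # project onto Theta = [lo, hi]
```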

We now describe the in-cluster algorithm. Consider a WSN with n_C clusters and n_S sensors per cluster, where n_C n_S = n. Assume that n_S and n_C are factors of n. Note that the distributed in-network algorithm can be viewed as the special case of the distributed in-cluster algorithm in which the entire network forms a single cluster, n_C = 1 and n_S = n.

We use i = 1, ..., n_S to index sensor nodes, j = 1, ..., n_C to index clusters, and k to index the iteration number. Let i = 0 represent the cluster head. Let f_{i,j}(x_{i,j}, θ) and φ_{i,j,k} denote the local cost function and the parameter estimate at node i in cluster j during iteration k, respectively. For conciseness, we suppress the dependency of f on the data in the notation and let f_{i,j}(x_{i,j}, θ) = f_{i,j}(θ) and f(x, θ) = f(θ). Also, let θ_k be the estimate maintained by the fusion center during iteration k. We update θ_k for k = 1, 2, ... by local updates of φ_{i,j,k}. Each iteration k of the distributed in-cluster algorithm proceeds as follows:

1) The fusion center passes the current estimate θ_k to the cluster heads of all clusters. The cluster heads initialize φ_{0,j,k} = θ_k for all j.

2) Incremental updates are conducted in parallel in all the clusters. Within each cluster, the updates proceed along an update path that traverses all the nodes in the cluster:

    φ_{i,j,k} = φ_{i-1,j,k} - (α_k / n) g_{i,j,k}                           (7)

where α_k is the step size and g_{i,j,k} is a subgradient of f_{i,j} at the previous estimate φ_{i-1,j,k}, computed from the local measurement data x_{i,j}; this is denoted g_{i,j,k} ∈ ∂f_{i,j}(φ_{i-1,j,k}). If φ_{i,j,k} ∉ Θ, then φ_{i,j,k} is projected onto the nearest point in the a priori constraint set Θ.

3) All clusters pass their last in-cluster estimates φ_{n_S,j,k} to the fusion center, which takes the average to produce the next estimate

    θ_{k+1} = (1/n_C) Σ_{j=1}^{n_C} φ_{n_S,j,k}.

4) Repeat.

In step 3 of the distributed in-cluster algorithm, the fusion center may process the in-cluster estimates {φ_{n_S,j,k}} using a variety of methods, e.g., a weighted average depending on the signal-to-noise ratio of the observations. Involving the fusion center in this way allows more flexibility in the algorithm development.
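The four steps above translate almost line for line into the following sketch. This is our illustration under the paper's assumptions (equal-size clusters, scalar θ, Θ an interval, plain averaging at the fusion center); the data layout and function names are ours.

```python
import numpy as np

def in_cluster_iteration(theta_k, clusters, alpha_k, n, lo, hi, local_subgrad):
    """One iteration k of the distributed in-cluster algorithm.

    clusters: list of n_C lists, each holding the n_S per-sensor data arrays.
    local_subgrad(theta, x_ij): a subgradient of f_ij at theta.
    """
    finals = []
    for cluster in clusters:              # step 2 runs in parallel per cluster
        phi = theta_k                     # step 1: cluster-head init, phi_0jk
        for x_ij in cluster:              # incremental pass within the cluster
            phi = phi - (alpha_k / n) * local_subgrad(phi, x_ij)   # Eq. (7)
            phi = min(max(phi, lo), hi)   # project onto Theta = [lo, hi]
        finals.append(phi)                # phi_{n_S, j, k}
    return float(np.mean(finals))         # step 3: fusion-center average

# Toy run: n = 100 sensors in n_C = 10 clusters of n_S = 10 sensors each.
rng = np.random.default_rng(2)
clusters = [[5.0 + rng.normal(size=10) for _ in range(10)] for _ in range(10)]
grad = lambda th, x: 2.0 * np.mean(th - x)   # least-squares local gradient
theta = 0.0
for _ in range(200):                          # step 4: repeat
    theta = in_cluster_iteration(theta, clusters, 0.4, 100, -100.0, 100.0, grad)
```

Setting clusters to a single list containing all the sensors recovers the distributed in-network scheme (n_C = 1).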

In the convergence proofs, we consider the case where the fusion center computes a simple average of the in-cluster estimates.

V. CONVERGENCE

Following the approach for the distributed incremental subgradient algorithm in [8], we show convergence for the distributed in-cluster approach. The main difference in our proofs is the emergence of the clustering values n_C and n_S. In these proofs, we make the reasonable assumptions that an optimal solution exists and that the subgradients are bounded, as stated below. Let the true underlying state of the environment be the (finite) minimizer, θ*, of the cost function. Also, assume there exist scalars C_{i,j} ≥ 0 such that ‖g_{i,j,k}‖ ≤ C_{i,j} for all i = 1, ..., n_S, j = 1, ..., n_C, and all k.

We start with the following lemma, which holds for each cluster parameter estimate φ_{i,j,k}.

Lemma 1: Let {φ_{i,j,k}} be the sequence of subiterations generated by Eq. (7). Then, for all y ∈ Θ and all k ≥ 0,

    ‖φ_{i,j,k} - y‖² ≤ ‖φ_{i-1,j,k} - y‖² - (2α_k/n)( f_{i,j}(φ_{i-1,j,k}) - f_{i,j}(y) ) + (α_k²/n²) C_{i,j}²,   ∀ i, j.   (8)

See the Appendix for the proof. By summing the inequalities in Eq. (8) over i = 1, ..., n_S within each cluster and averaging over the clusters j = 1, ..., n_C, we have the following lemma for the fusion-center estimates {θ_k}.

Lemma 2: Let {θ_k} be a sequence generated by the distributed in-cluster method. Then, for all y ∈ Θ and all k ≥ 0,

    ‖θ_{k+1} - y‖² ≤ ‖θ_k - y‖² - (2α_k/n_C)( f(θ_k) - f(y) ) + (α_k²/n²) Ĉ²,   (9)

where Ĉ² = (1/n_C) Σ_{j=1}^{n_C} { Σ_{i=1}^{n_S} C_{i,j} }². See the Appendix for the proof.

Lemma 2 guarantees that the distance ‖θ_k - y‖ decreases provided α_k < (2n²/(n_C Ĉ²))( f(θ_k) - f(y) ). This key lemma is a crucial step in proving all the subsequent theorems.

Theorem 1: Let {θ_k} be a sequence generated by the distributed in-cluster method. Then, for a fixed step size, α_k = α, and using Lemma 2, we have

    lim inf_{k→∞} f(θ_k) ≤ f(θ*) + α n_C Ĉ² / (2n²),   (10)

where f(θ*) = inf_{θ∈Θ} f(θ) and Ĉ² = (1/n_C) Σ_{j=1}^{n_C} { Σ_{i=1}^{n_S} C_{i,j} }². See the Appendix for the proof.

If all the subgradients g_{i,j,k} are bounded by one scalar C₀, we have the following corollary.

Corollary 1: Let C₀ = max_{i,j} C_{i,j}. It is evident that ‖g_{i,j,k}‖ ≤ C₀ for all i = 1, ..., n_S and j = 1, ..., n_C. Then, for a fixed step size, α_k = α,

    lim inf_{k→∞} f(θ_k) ≤ f(θ*) + αC₀² / (2n_C).   (11)

Since the incremental subgradient method is a primal feasible method, f(θ*) is always a lower bound on f(θ_k). Therefore, the sequence of values f(θ_k) is eventually trapped between f(θ*) and f(θ*) + αC₀²/(2n_C). The fluctuation around the equilibrium,

    R = αC₀² / (2n_C),   (12)

is the residual estimation error due to the fact that a constant step size is used for subgradient methods.

Comparing Corollary 1 with both the standard results in [8] and the result used by Rabbat and Nowak in [2], we observe that in our case we have a smaller threshold tolerance. Under the same assumptions as in Corollary 1, in the distributed in-network case,

    lim inf_{k→∞} f(θ_k) ≤ f(θ*) + αC₀² / 2.   (13)

Thus, as k → ∞, the distributed in-network algorithm converges to an (αC₀²/2)-suboptimal solution, whereas the distributed in-cluster algorithm converges to an (αC₀²/(2n_C))-suboptimal solution. We observe a key advantage of the in-cluster approach: the estimation accuracy is tighter by a factor of n_C. Even for medium-scale sensor networks, a factor of n_C can be an order of magnitude improvement.
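To put numbers on the comparison between (12) and (13), take the illustrative values α = 0.4 and C₀ = 1 (our choice, not from the paper):

```python
alpha, C0 = 0.4, 1.0
R_in_network = alpha * C0 ** 2 / 2                     # Eq. (13): 0.2
R_in_cluster = lambda n_C: alpha * C0 ** 2 / (2 * n_C)  # Eq. (12)
print(R_in_cluster(10))                                # 0.02, tighter by n_C
```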

The next theorem provides the number of iterations, K, necessary to achieve a desired estimation accuracy.

Theorem 2: Let {θ_k} be a sequence generated by the distributed in-cluster method. Then, for a fixed step size α and for any positive scalar ε,

    min_{0≤k≤K} f(θ_k) ≤ f(θ*) + (1/2)( α n_C Ĉ²/n² + ε ),   (14)

where K is given by

    K = ⌊ n_C ‖θ_0 - θ*‖² / (αε) ⌋.   (15)

See the Appendix for the proof.

Along the same lines as Theorem 1 and Corollary 1, we have the following corollary.

Corollary 2: Let C₀ = max_{i,j} C_{i,j}. Then, for a fixed step size α and for any positive scalar ε,

    min_{0≤k≤K} f(θ_k) ≤ f(θ*) + (1/2)( αC₀²/n_C + ε ),   (16)

where K is given by Eq. (15).

In Eq. (16), the index k refers to the parameter estimate obtained at the end of each cluster cycle. Since each cluster performs n_S subiterations per cycle, the total number of iterations required for an accuracy of (1/2)(αC₀²/n_C + ε) is n_S ⌊ n_C ‖θ_0 - θ*‖²/(αε) ⌋. In comparison, for the distributed in-network case, the total number of iterations required for an accuracy of (1/2)(αC₀² + ε) is n ⌊ ‖θ_0 - θ*‖²/(αε) ⌋. Therefore, for both algorithms the total number of iterations necessary is of the same order of magnitude, while the benefit of the distributed in-cluster algorithm over the distributed in-network algorithm is an accuracy improvement by a factor of n_C.

Another natural extension is varying the step size. For a fixed step size, complete convergence cannot be achieved: the parameter estimates {θ_k} enter a limit cycle after a number of iterations. To force convergence to the optimal value f(θ*), the step size can be set to diminish at a rate inversely proportional to the iteration number, α_k = α/k. More generally, we have the following theorem.

Theorem 3: Let {θ_k} be a sequence generated by the distributed in-cluster method. Also, assume that the step size α_k satisfies

    lim_{k→∞} α_k = 0   and   Σ_{k=0}^{∞} α_k = ∞.

Then lim inf_{k→∞} f(θ_k) = f(θ*). See the Appendix for the proof.
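A quick check of the iteration counts discussed after Corollary 2, with illustrative numbers of our choosing:

```python
import math

def iterations_in_cluster(n, n_C, dist0_sq, alpha, eps):
    # n_S * floor(n_C * |theta_0 - theta*|^2 / (alpha * eps)), per Eq. (15).
    n_S = n // n_C
    return n_S * math.floor(n_C * dist0_sq / (alpha * eps))

def iterations_in_network(n, dist0_sq, alpha, eps):
    # n * floor(|theta_0 - theta*|^2 / (alpha * eps)).
    return n * math.floor(dist0_sq / (alpha * eps))

# Same order of magnitude, but the in-cluster count buys an accuracy
# level that is tighter by a factor of n_C.
print(iterations_in_cluster(100, 10, 25.0, 0.4, 0.1))   # 10 * 6250 = 62500
print(iterations_in_network(100, 25.0, 0.4, 0.1))       # 100 * 625 = 62500
```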

VI. BENEFITS OF CLUSTERING ON PERFORMANCE PARAMETERS

A. Energy Efficiency

The main expenditure of energy in a WSN is the cost of communication: transporting bits either from sensor to sensor or from sensor to fusion center. The transport cost is therefore a good measure of the energy usage of a WSN. Consider our original WSN of n sensors, each with m measurements, with the sensors distributed randomly (uniformly) over one square meter. We use bit-meters as the metric for the transport cost of transmitting data.

In the centralized setting, all n sensors send their m observations to the fusion center, requiring O(mn) bits to be transmitted over an average distance of O(1) meters. In total, the transport cost is O(mn) bit-meters.

In the distributed in-network setting, all n sensors use their m observations to update the parameter estimate and pass the estimate along the path that covers all the sensor nodes. The distributed in-network method requires O(n) bits to be transmitted over an average distance of O(1/sqrt(n)) meters. In total, the transport cost is O(sqrt(n)) bit-meters.

In the distributed in-cluster setting, the sensor network forms n_C clusters with n_S nodes per cluster. This method requires O(n) bits to be transmitted over an average distance of O(1/sqrt(n)) meters, which accounts for the sensor-to-sensor transport cost, and O(n_C) bits to be transmitted over an average distance of O(1) meters, which accounts for the cluster-head-to-fusion-center transport cost. Thus, the transport cost is O(sqrt(n) + n_C) bit-meters.

An interesting result arises when the number of clusters and the number of sensors per cluster are equal, n_C = n_S = sqrt(n). In this case, the total transport cost of the distributed in-cluster algorithm becomes O(sqrt(n)): the distributed in-cluster algorithm and the distributed in-network algorithm have transport costs of the same order when n_C = n_S = sqrt(n).
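The scalings in this subsection can be tabulated directly; in the sketch below (ours) all constants are suppressed, exactly as in the O(·) analysis, so only the growth rates are meaningful:

```python
import math

def transport_cost(n, m, n_C):
    """Order-of-magnitude bit-meter costs under the unit-square model."""
    centralized = m * n               # O(mn) bits over O(1) meters
    in_network = math.sqrt(n)         # O(n) bits over O(1/sqrt(n)) meters
    in_cluster = math.sqrt(n) + n_C   # sensor-to-sensor + heads-to-center
    return centralized, in_network, in_cluster

# With n_C = sqrt(n), the in-cluster cost collapses to the in-network order:
n = 10_000
print(transport_cost(n, m=10, n_C=int(math.sqrt(n))))  # (100000, 100.0, 200.0)
```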

B. Latency

Latency is defined as the number of iterations needed to see all the data captured by the network. For the centralized case, only one iteration is needed, while the in-network case needs n iterations. With the in-cluster algorithm, however, the latency of the WSN can be adjusted through the size of the clusters: the latency for the in-cluster case reduces to n/n_C or, more simply, n_S iterations, as shown in Table I.

C. Estimation Accuracy

By forming clusters in a WSN, the estimation accuracy can be improved. For the fixed step-size case, the residual error is reduced by a factor of n_C compared to the distributed in-network case, as shown in Table I. The accuracy improvement by a factor of n_C holds both as k tends toward infinity and for finite k, as shown in Corollary 1 and Corollary 2, respectively.

VII. SIMULATIONS, VARIATIONS, AND EXTENSIONS

Consider a WSN with 100 sensors uniformly distributed over a region, each taking 10 measurements. The observations are independent and identically distributed from measurement to measurement, and observations are independent from sensor to sensor. If a sensor is working properly, its measurements follow a Gaussian distribution with mean 0 and variance 1; if a sensor is defective, its measurements follow a Gaussian distribution with mean 0 and variance 100. This application can be viewed as a deterministic mean-location parameter estimation problem. The simulations assume that 10% of the sensor nodes are damaged and that a fixed step size of α_k = 0.4 is used. As summarized in this section, a variety of simulations are conducted to verify the theorems and to characterize other properties and tradeoffs of the proposed distributed in-cluster algorithm.
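The measurement model just described can be reproduced in a few lines. This is our sketch; the mean-zero parameter, the 10% damage rate, and the variances follow the setup above as we have reconstructed it:

```python
import numpy as np

def simulate_wsn(n=100, m=10, frac_damaged=0.1, seed=0):
    """Draw m measurements per sensor: variance 1 if healthy, 100 if damaged."""
    rng = np.random.default_rng(seed)
    damaged = rng.random(n) < frac_damaged
    sigma = np.where(damaged, 10.0, 1.0)      # standard deviation per sensor
    return rng.normal(loc=0.0, scale=sigma[:, None], size=(n, m))

x = simulate_wsn()   # 100 x 10 matrix of observations, roughly 10% outliers
```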

A. Basic Simulations

Least squares estimation and Huber robust estimation are simulated, and the resulting convergence behavior of the residual value is shown in Figs. 2 and 3, respectively. In both figures, the distributed in-network method and the distributed in-cluster method are shown by a solid line, while the centralized method is shown by a dashed line. Since a fixed step size of α_k = 0.4 is used, there are residual fluctuations around the equilibrium. In both estimation procedures, an increase in the number of clusters causes a decrease in the fluctuations. The precise data points confirm the theoretical prediction that the distributed in-cluster fluctuation is smaller than the distributed in-network fluctuation by a factor of n_C. In the least squares plots of Fig. 2, with n_C = 4 and n_C = 10, the fluctuations are smaller by factors of 4 and 10, respectively, compared to the plot with n_C = 1. In the robust estimation example of Fig. 3, we use a Huber parameter of γ = 1. The distributed in-cluster method again shows narrower fluctuations and is almost indistinguishable from the centralized estimation curve.

B. Accuracy, Robustness and Speed Tradeoff

To determine the tolerance bounds for the robust estimation procedure, the gradient of the Huber loss function is calculated. Since f_i(θ) = (1/m) Σ_{p=1}^{m} f_H(x_i^p, θ), differentiation shows that ‖∇f_i(θ)‖ ≤ γ, while ‖∇f_i(θ)‖ ≤ C₀ by definition. Hence, C₀ can be set equal to γ to provide an upper bound on the gradient. This gives the following result, which analytically characterizes the tradeoff among three competing criteria: accuracy, robustness, and speed of convergence for incremental estimation. Combining Eq. (12) with the relation C₀ = γ, we have the following formula. For a given network size n, the tradeoff among the estimation error bound R, the Huber robustness parameter γ, and the constant step size α is characterized by:

    R = αγ² / (2n_C).   (17)

For example, to maintain a desired level of robustness γ, a tighter convergence bound (smaller R) implies a slower convergence speed (smaller α). As another example, to obtain a tighter convergence bound, we would like R to be small. This can be achieved either by reducing α, which means a smaller step size and slower convergence speed, or by reducing γ, which means accepting less reliable data for Huber estimation and reducing the robustness as well as the speed of convergence, since less reliable data are used for estimation.
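The tradeoff in Eq. (17) is easy to tabulate (our illustration): halving γ cuts the error bound R by a factor of 4, matching the factor-of-4 error reduction observed in Fig. 4, discussed next.

```python
def error_bound(alpha, gamma, n_C):
    # Eq. (17): R = alpha * gamma^2 / (2 * n_C).
    return alpha * gamma ** 2 / (2 * n_C)

print(error_bound(0.4, 1.0, 10))   # 0.02
print(error_bound(0.4, 0.5, 10))   # 0.005, a factor of 4 smaller
```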

An illustrative example is shown in Fig. 4, where we reduce γ by a factor of 2 (cf. Figs. 4a and 4b). This reduction of γ reduces the estimation error by about a factor of 4 but also increases the convergence time by roughly a factor of 2. The 1/n_C term in the tradeoff characterization of Eq. (17) again highlights an advantage of the in-cluster approach: more clusters help achieve a more efficient tradeoff.

VIII. CONCLUSION

We have presented a distributed in-cluster scheme for a WSN that uses the incremental subgradient method. By incorporating clustering within the sensor network, we have created a degree of freedom that allows us to tune the algorithm for energy efficiency, estimation accuracy, convergence speed, and latency. Specifically, in terms of estimation accuracy, we have shown that a different scaling law applies to the clustered algorithm: the residual error is inversely proportional to the number of clusters. Also, for the special case where a WSN with n sensors forms sqrt(n) clusters, we are able to maintain the same transport cost as the distributed in-network scheme while increasing both the accuracy of the estimate and the convergence speed, and reducing the latency. Simulations have been provided for both least squares and robust estimation. We plan to extend our work by relaxing the independence assumption on sensor-to-sensor observations. In particular, in future work we will consider a WSN scenario where the data within each cluster are spatially correlated, while the data from cluster to cluster are independent.

REFERENCES

[1] G. J. Pottie and W. J. Kaiser, "Wireless integrated network sensors," Communications of the ACM, vol. 43, no. 5, pp. 51-58, May 2000.
[2] M. Rabbat and R. Nowak, "Distributed optimization in sensor networks," in Proceedings of the Third International Symposium on Information Processing in Sensor Networks (IPSN '04), Berkeley, CA, 2004.
[3] V. M. Kibardin, "Decomposition into functions in the minimization problem," Automation and Remote Control, vol. 40, pp. 1311-1323, 1980.

[4] A. Nedić and D. P. Bertsekas, "Incremental subgradient methods for nondifferentiable optimization," Tech. Rep., Massachusetts Institute of Technology, Cambridge, MA, 1999.
[5] M. Rabbat and R. Nowak, "Quantized incremental algorithms for distributed optimization," IEEE Journal on Selected Areas in Communications, vol. 23, no. 4, pp. 798-808, April 2005.
[6] S. Bandyopadhyay and E. Coyle, "An energy efficient hierarchical clustering algorithm for wireless sensor networks," in Proceedings of the 22nd Annual Joint Conference of the IEEE Computer and Communications Societies (Infocom 2003), San Francisco, CA, 2003.
[7] P. J. Huber, Robust Statistics, John Wiley & Sons, New York, 1981.
[8] D. P. Bertsekas, Convex Analysis and Optimization, Athena Scientific, Belmont, MA, April 2003.

APPENDIX

A. Proof of Lemma 1

Proof:

    ‖φ_{i,j,k} - y‖² = ‖φ_{i-1,j,k} - (α_k/n) g_{i,j,k} - y‖²
                     = ‖φ_{i-1,j,k} - y‖² - (2α_k/n)(φ_{i-1,j,k} - y) g_{i,j,k} + (α_k²/n²) ‖g_{i,j,k}‖²
                     ≤ ‖φ_{i-1,j,k} - y‖² - (2α_k/n)( f_{i,j}(φ_{i-1,j,k}) - f_{i,j}(y) ) + (α_k²/n²) C_{i,j}².

The last line of the proof uses the fact that g_{i,j,k} is a subgradient of the convex function f_{i,j} at φ_{i-1,j,k}; thus f_{i,j}(y) ≥ f_{i,j}(φ_{i-1,j,k}) + (y - φ_{i-1,j,k}) g_{i,j,k}, i.e., (φ_{i-1,j,k} - y) g_{i,j,k} ≥ f_{i,j}(φ_{i-1,j,k}) - f_{i,j}(y).

B. Proof of Lemma 2

Proof:

    ‖θ_{k+1} - y‖² = ‖ (1/n_C) Σ_{j=1}^{n_C} φ_{n_S,j,k} - y ‖²
                   = ‖ (1/n_C) Σ_{j=1}^{n_C} ( φ_{n_S,j,k} - y ) ‖²
                   ≤ (1/n_C) Σ_{j=1}^{n_C} ‖φ_{n_S,j,k} - y‖²
                   ≤ (1/n_C) Σ_{j=1}^{n_C} { ‖φ_{n_S-1,j,k} - y‖² - (2α_k/n)( f_{n_S,j}(φ_{n_S-1,j,k}) - f_{n_S,j}(y) ) + (α_k²/n²) C_{n_S,j}² },

where in the third and fourth lines of the proof we used the Quadratic Mean-Arithmetic Mean inequality and Lemma 1, respectively. After recursively decomposing ‖φ_{i,j,k} - y‖² a total of n_S times, and using φ_{0,j,k} = θ_k, we get

    ‖θ_{k+1} - y‖² ≤ (1/n_C) Σ_{j=1}^{n_C} { ‖θ_k - y‖² - (2α_k/n) Σ_{i=1}^{n_S} ( f_{i,j}(φ_{i-1,j,k}) - f_{i,j}(y) ) + (α_k²/n²) Σ_{i=1}^{n_S} C_{i,j}² }
                   = ‖θ_k - y‖² - (2α_k/(n n_C)) Σ_{j=1}^{n_C} Σ_{i=1}^{n_S} ( f_{i,j}(φ_{i-1,j,k}) - f_{i,j}(y) + f_{i,j}(θ_k) - f_{i,j}(θ_k) ) + (α_k²/(n² n_C)) Σ_{j=1}^{n_C} Σ_{i=1}^{n_S} C_{i,j}².

Then, using the fact that

    f(θ_k) = (1/n) Σ_{j=1}^{n_C} Σ_{i=1}^{n_S} f_{i,j}(θ_k)   (and similarly for f(y)),

the expression simplifies to

    ‖θ_{k+1} - y‖² ≤ ‖θ_k - y‖² - (2α_k/n_C)( f(θ_k) - f(y) ) + (2α_k/(n n_C)) Σ_{j=1}^{n_C} Σ_{i=1}^{n_S} ( f_{i,j}(θ_k) - f_{i,j}(φ_{i-1,j,k}) ) + (α_k²/(n² n_C)) Σ_{j=1}^{n_C} Σ_{i=1}^{n_S} C_{i,j}²
                   ≤ ‖θ_k - y‖² - (2α_k/n_C)( f(θ_k) - f(y) ) + (2α_k/(n n_C)) Σ_{j=1}^{n_C} Σ_{i=1}^{n_S} C_{i,j} ‖φ_{i-1,j,k} - θ_k‖ + (α_k²/(n² n_C)) Σ_{j=1}^{n_C} Σ_{i=1}^{n_S} C_{i,j}²
                   ≤ ‖θ_k - y‖² - (2α_k/n_C)( f(θ_k) - f(y) ) + (α_k²/(n² n_C)) Σ_{j=1}^{n_C} { Σ_{i=1}^{n_S} C_{i,j}² + 2 Σ_{i=1}^{n_S} C_{i,j} Σ_{m=1}^{i-1} C_{m,j} }
                   = ‖θ_k - y‖² - (2α_k/n_C)( f(θ_k) - f(y) ) + (α_k²/(n² n_C)) Σ_{j=1}^{n_C} ( Σ_{i=1}^{n_S} C_{i,j} )²,

where in the second and third inequalities we used the facts that

    f_{i,j}(θ_k) - f_{i,j}(φ_{i-1,j,k}) ≤ C_{i,j} ‖φ_{i-1,j,k} - θ_k‖

and

    ‖φ_{i-1,j,k} - θ_k‖ ≤ (α_k/n) Σ_{m=1}^{i-1} C_{m,j},   i = 1, ..., n_S,

respectively.

C. Proof of Theorem 1

Proof: The proof is by contradiction. If Theorem 1 is not true, then there exists an ε > 0 such that

    lim inf_{k→∞} f(θ_k) > f(θ*) + α n_C Ĉ² / (2n²) + 2ε.

Let z ∈ Θ be a point such that

    lim inf_{k→∞} f(θ_k) ≥ f(z) + α n_C Ĉ² / (2n²) + 2ε,

and let k₀ be sufficiently large so that, for all k ≥ k₀, we have

    f(θ_k) ≥ lim inf_{k→∞} f(θ_k) - ε.

Combining the above two relations gives f(θ_k) - f(z) ≥ α n_C Ĉ²/(2n²) + ε for all k ≥ k₀. Then, by Lemma 2 with y = z and α_k = α, we have, for all k ≥ k₀,

    ‖θ_{k+1} - z‖² ≤ ‖θ_k - z‖² - 2αε/n_C.

Therefore,

    ‖θ_{k+1} - z‖² ≤ ‖θ_k - z‖² - 2αε/n_C ≤ ‖θ_{k-1} - z‖² - 4αε/n_C ≤ ... ≤ ‖θ_{k₀} - z‖² - 2(k + 1 - k₀)αε/n_C,

which cannot hold for sufficiently large k.

D. Proof of Theorem 2

Proof: The proof is by contradiction. Assume that, for all k with 0 ≤ k ≤ K, we have

    f(θ_k) > f(θ*) + (1/2)( α n_C Ĉ²/n² + ε ).

Setting α_k = α and y = θ* in Lemma 2 and combining it with the above relation, we have, for all k with 0 ≤ k ≤ K,

    ‖θ_{k+1} - θ*‖² ≤ ‖θ_k - θ*‖² - (2α/n_C)( f(θ_k) - f(θ*) ) + (α²/n²) Ĉ²
                    < ‖θ_k - θ*‖² - (2α/n_C) (1/2)( α n_C Ĉ²/n² + ε ) + (α²/n²) Ĉ²
                    = ‖θ_k - θ*‖² - αε/n_C.

If we sum the above inequalities over k = 0, 1, ..., K, we have

    ‖θ_{K+1} - θ*‖² ≤ ‖θ_0 - θ*‖² - (K + 1) αε/n_C.

Thus, it is evident that ‖θ_0 - θ*‖² - (K + 1) αε/n_C ≥ 0, which contradicts the definition of K in Eq. (15).

E. Proof of Theorem 3

Proof: The proof is by contradiction. If Theorem 3 does not hold, then there exists an ε > 0 such that

    lim inf_{k→∞} f(θ_k) - 2ε > f(θ*).

Then, using the convexity of f and Θ, there exists a point z ∈ Θ such that

    lim inf_{k→∞} f(θ_k) - 2ε ≥ f(z) > f(θ*).

Let k₀ be large enough that, for all k ≥ k₀, we have

    f(θ_k) ≥ lim inf_{k→∞} f(θ_k) - ε.

Then, by combining the above two relations, we have, for all k ≥ k₀,

    f(θ_k) - f(z) ≥ ε.

By setting y = z in Lemma 2 and combining it with the above relation, we obtain, for all k ≥ k₀,

    ‖θ_{k+1} - z‖² ≤ ‖θ_k - z‖² - (2α_k/n_C) ε + (α_k²/n²) Ĉ²
                   = ‖θ_k - z‖² - (α_k/n_C)( 2ε - α_k n_C Ĉ²/n² ).

Since α_k → 0, we may assume that k₀ is large enough so that

    2ε - α_k n_C Ĉ²/n² ≥ ε,   for all k ≥ k₀.

Thus, for all k ≥ k₀, we have

    ‖θ_{k+1} - z‖² ≤ ‖θ_k - z‖² - (α_k/n_C) ε ≤ ‖θ_{k-1} - z‖² - (α_{k-1} + α_k) ε/n_C ≤ ... ≤ ‖θ_{k₀} - z‖² - (ε/n_C) Σ_{j=k₀}^{k} α_j,

which cannot hold for sufficiently large k, since Σ_{k=0}^{∞} α_k = ∞.

Fig. 1. Illustration of a sensor network implementing the distributed in-cluster algorithm. The dash-dotted lines represent the borders of the clusters. The shaded nodes represent the cluster heads that communicate with the fusion center. All clusters run the algorithm in parallel, although in the schematic only the lower-right cluster is shown running the incremental subgradient algorithm.

                               Energy Efficiency    Latency        Estimation Accuracy
                               (bit-meters)         (iterations)   (residual error)
    centralized                O(mn)                1              0
    distributed in-network     O(sqrt(n))           n              αC₀²/2
    distributed in-cluster     O(sqrt(n) + n_C)     n/n_C          αC₀²/(2n_C)
    special case (n_C = sqrt(n))  O(sqrt(n))        sqrt(n)        αC₀²/(2 sqrt(n))

TABLE I. Summary of performance tradeoffs among the different algorithms. Note that in the special case where n_C = sqrt(n), the distributed in-cluster and in-network algorithms have the same transport cost, but the latency and the residual estimation error are both reduced by a factor of sqrt(n).

[Figure 2: three panels plotting least squares residual value vs. total number of iterations, fixed step size, 10% damaged sensors; the centralized curve is dashed and the distributed curve is solid.]

Fig. 2. Plots of least squares residual value vs. total number of iterations for three different clustering scenarios: (a) distributed in-network algorithm (n_C = 1 and n_S = 100), (b) distributed in-cluster algorithm with n_C = 4 and n_S = 25, (c) distributed in-cluster algorithm with n_C = 10 and n_S = 10.

[Figure 3: three panels plotting robust residual value vs. total number of iterations, fixed step size, 10% damaged sensors; the centralized curve is dashed and the distributed curve is solid.]

Fig. 3. Plots of robust residual value vs. total number of iterations for three different clustering scenarios: (a) distributed in-network algorithm (n_C = 1 and n_S = 100), (b) distributed in-cluster algorithm with n_C = 4 and n_S = 25, (c) distributed in-cluster algorithm with n_C = 10 and n_S = 10.

[Figure 4: two panels plotting robust residual value vs. total number of iterations for n_C = 10 and n_S = 10, fixed step size, 10% damaged sensors; the plots are rescaled for clarity.]

Fig. 4. Plots of robust residual value vs. total number of iterations for the case where n_C = 10 and n_S = 10: (a) robust estimation with γ = 1, (b) robust estimation with γ = 0.5.


More information

3 No-Wait Job Shops with Variable Processing Times

3 No-Wait Job Shops with Variable Processing Times 3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select

More information

BELOW, we consider decoding algorithms for Reed Muller

BELOW, we consider decoding algorithms for Reed Muller 4880 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 11, NOVEMBER 2006 Error Exponents for Recursive Decoding of Reed Muller Codes on a Binary-Symmetric Channel Marat Burnashev and Ilya Dumer, Senior

More information

A New Combinatorial Design of Coded Distributed Computing

A New Combinatorial Design of Coded Distributed Computing A New Combinatorial Design of Coded Distributed Computing Nicholas Woolsey, Rong-Rong Chen, and Mingyue Ji Department of Electrical and Computer Engineering, University of Utah Salt Lake City, UT, USA

More information

MATH3016: OPTIMIZATION

MATH3016: OPTIMIZATION MATH3016: OPTIMIZATION Lecturer: Dr Huifu Xu School of Mathematics University of Southampton Highfield SO17 1BJ Southampton Email: h.xu@soton.ac.uk 1 Introduction What is optimization? Optimization is

More information

Finding Euclidean Distance to a Convex Cone Generated by a Large Number of Discrete Points

Finding Euclidean Distance to a Convex Cone Generated by a Large Number of Discrete Points Submitted to Operations Research manuscript (Please, provide the manuscript number!) Finding Euclidean Distance to a Convex Cone Generated by a Large Number of Discrete Points Ali Fattahi Anderson School

More information

Occluded Facial Expression Tracking

Occluded Facial Expression Tracking Occluded Facial Expression Tracking Hugo Mercier 1, Julien Peyras 2, and Patrice Dalle 1 1 Institut de Recherche en Informatique de Toulouse 118, route de Narbonne, F-31062 Toulouse Cedex 9 2 Dipartimento

More information

ME 555: Distributed Optimization

ME 555: Distributed Optimization ME 555: Distributed Optimization Duke University Spring 2015 1 Administrative Course: ME 555: Distributed Optimization (Spring 2015) Instructor: Time: Location: Office hours: Website: Soomin Lee (email:

More information

Principles of Wireless Sensor Networks. Fast-Lipschitz Optimization

Principles of Wireless Sensor Networks. Fast-Lipschitz Optimization http://www.ee.kth.se/~carlofi/teaching/pwsn-2011/wsn_course.shtml Lecture 5 Stockholm, October 14, 2011 Fast-Lipschitz Optimization Royal Institute of Technology - KTH Stockholm, Sweden e-mail: carlofi@kth.se

More information

REDUCTION OF CODING ARTIFACTS IN LOW-BIT-RATE VIDEO CODING. Robert L. Stevenson. usually degrade edge information in the original image.

REDUCTION OF CODING ARTIFACTS IN LOW-BIT-RATE VIDEO CODING. Robert L. Stevenson. usually degrade edge information in the original image. REDUCTION OF CODING ARTIFACTS IN LOW-BIT-RATE VIDEO CODING Robert L. Stevenson Laboratory for Image and Signal Processing Department of Electrical Engineering University of Notre Dame Notre Dame, IN 46556

More information

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 55, NO. 5, MAY

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 55, NO. 5, MAY IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 55, NO 5, MAY 2007 1911 Game Theoretic Cross-Layer Transmission Policies in Multipacket Reception Wireless Networks Minh Hanh Ngo, Student Member, IEEE, and

More information

Diversity Coloring for Distributed Storage in Mobile Networks

Diversity Coloring for Distributed Storage in Mobile Networks Diversity Coloring for Distributed Storage in Mobile Networks Anxiao (Andrew) Jiang and Jehoshua Bruck California Institute of Technology Abstract: Storing multiple copies of files is crucial for ensuring

More information

Adaptations of the A* Algorithm for the Computation of Fastest Paths in Deterministic Discrete-Time Dynamic Networks

Adaptations of the A* Algorithm for the Computation of Fastest Paths in Deterministic Discrete-Time Dynamic Networks 60 IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, VOL. 3, NO. 1, MARCH 2002 Adaptations of the A* Algorithm for the Computation of Fastest Paths in Deterministic Discrete-Time Dynamic Networks

More information

Applied Lagrange Duality for Constrained Optimization

Applied Lagrange Duality for Constrained Optimization Applied Lagrange Duality for Constrained Optimization Robert M. Freund February 10, 2004 c 2004 Massachusetts Institute of Technology. 1 1 Overview The Practical Importance of Duality Review of Convexity

More information

Lecture 2 September 3

Lecture 2 September 3 EE 381V: Large Scale Optimization Fall 2012 Lecture 2 September 3 Lecturer: Caramanis & Sanghavi Scribe: Hongbo Si, Qiaoyang Ye 2.1 Overview of the last Lecture The focus of the last lecture was to give

More information

Clustering-Based Distributed Precomputation for Quality-of-Service Routing*

Clustering-Based Distributed Precomputation for Quality-of-Service Routing* Clustering-Based Distributed Precomputation for Quality-of-Service Routing* Yong Cui and Jianping Wu Department of Computer Science, Tsinghua University, Beijing, P.R.China, 100084 cy@csnet1.cs.tsinghua.edu.cn,

More information

ADVANCED IMAGE PROCESSING METHODS FOR ULTRASONIC NDE RESEARCH C. H. Chen, University of Massachusetts Dartmouth, N.

ADVANCED IMAGE PROCESSING METHODS FOR ULTRASONIC NDE RESEARCH C. H. Chen, University of Massachusetts Dartmouth, N. ADVANCED IMAGE PROCESSING METHODS FOR ULTRASONIC NDE RESEARCH C. H. Chen, University of Massachusetts Dartmouth, N. Dartmouth, MA USA Abstract: The significant progress in ultrasonic NDE systems has now

More information

Energy-Latency Tradeoff for In-Network Function Computation in Random Networks

Energy-Latency Tradeoff for In-Network Function Computation in Random Networks Energy-Latency Tradeoff for In-Network Function Computation in Random Networks P. Balister 1 B. Bollobás 1 A. Anandkumar 2 A.S. Willsky 3 1 Dept. of Math., Univ. of Memphis, Memphis, TN, USA 2 Dept. of

More information

6. Concluding Remarks

6. Concluding Remarks [8] K. J. Supowit, The relative neighborhood graph with an application to minimum spanning trees, Tech. Rept., Department of Computer Science, University of Illinois, Urbana-Champaign, August 1980, also

More information

Distance-to-Solution Estimates for Optimization Problems with Constraints in Standard Form

Distance-to-Solution Estimates for Optimization Problems with Constraints in Standard Form Distance-to-Solution Estimates for Optimization Problems with Constraints in Standard Form Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report

More information

On the Robustness of Distributed Computing Networks

On the Robustness of Distributed Computing Networks 1 On the Robustness of Distributed Computing Networks Jianan Zhang, Hyang-Won Lee, and Eytan Modiano Lab for Information and Decision Systems, Massachusetts Institute of Technology, USA Dept. of Software,

More information

CSE 494 Project C. Garrett Wolf

CSE 494 Project C. Garrett Wolf CSE 494 Project C Garrett Wolf Introduction The main purpose of this project task was for us to implement the simple k-means and buckshot clustering algorithms. Once implemented, we were asked to vary

More information

Ameliorate Threshold Distributed Energy Efficient Clustering Algorithm for Heterogeneous Wireless Sensor Networks

Ameliorate Threshold Distributed Energy Efficient Clustering Algorithm for Heterogeneous Wireless Sensor Networks Vol. 5, No. 5, 214 Ameliorate Threshold Distributed Energy Efficient Clustering Algorithm for Heterogeneous Wireless Sensor Networks MOSTAFA BAGHOURI SAAD CHAKKOR ABDERRAHMANE HAJRAOUI Abstract Ameliorating

More information

Topic: Local Search: Max-Cut, Facility Location Date: 2/13/2007

Topic: Local Search: Max-Cut, Facility Location Date: 2/13/2007 CS880: Approximations Algorithms Scribe: Chi Man Liu Lecturer: Shuchi Chawla Topic: Local Search: Max-Cut, Facility Location Date: 2/3/2007 In previous lectures we saw how dynamic programming could be

More information

Distributed non-convex optimization

Distributed non-convex optimization Distributed non-convex optimization Behrouz Touri Assistant Professor Department of Electrical and Computer Engineering University of California San Diego/University of Colorado Boulder AFOSR Computational

More information

Routing with Mutual Information Accumulation in Energy-Limited Wireless Networks

Routing with Mutual Information Accumulation in Energy-Limited Wireless Networks Routing with Mutual Information Accumulation in Energy-Limited Wireless Networks Mahdi Shakiba-Herfeh Department of Electrical and Electronics Engineering METU, Ankara, Turkey 68 Email: mahdi@eee.metu.edu.tr

More information

554 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 2, FEBRUARY /$ IEEE

554 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 2, FEBRUARY /$ IEEE 554 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 2, FEBRUARY 2008 Cross-Layer Optimization of MAC and Network Coding in Wireless Queueing Tandem Networks Yalin Evren Sagduyu, Member, IEEE, and

More information

OPTIMAL LINK CAPACITY ASSIGNMENTS IN TELEPROCESSING AND CENTRALIZED COMPUTER NETWORKS *

OPTIMAL LINK CAPACITY ASSIGNMENTS IN TELEPROCESSING AND CENTRALIZED COMPUTER NETWORKS * OPTIMAL LINK CAPACITY ASSIGNMENTS IN TELEPROCESSING AND CENTRALIZED COMPUTER NETWORKS * IZHAK RUBIN UCLA Los Angeles, California Summary. We consider a centralized network model representing a teleprocessing

More information

On the Robustness of Distributed Computing Networks

On the Robustness of Distributed Computing Networks 1 On the Robustness of Distributed Computing Networks Jianan Zhang, Hyang-Won Lee, and Eytan Modiano Lab for Information and Decision Systems, Massachusetts Institute of Technology, USA Dept. of Software,

More information

Introduction to Modern Control Systems

Introduction to Modern Control Systems Introduction to Modern Control Systems Convex Optimization, Duality and Linear Matrix Inequalities Kostas Margellos University of Oxford AIMS CDT 2016-17 Introduction to Modern Control Systems November

More information

Notes on Robust Estimation David J. Fleet Allan Jepson March 30, 005 Robust Estimataion. The field of robust statistics [3, 4] is concerned with estimation problems in which the data contains gross errors,

More information

Clustering: Classic Methods and Modern Views

Clustering: Classic Methods and Modern Views Clustering: Classic Methods and Modern Views Marina Meilă University of Washington mmp@stat.washington.edu June 22, 2015 Lorentz Center Workshop on Clusters, Games and Axioms Outline Paradigms for clustering

More information

Optimization. Industrial AI Lab.

Optimization. Industrial AI Lab. Optimization Industrial AI Lab. Optimization An important tool in 1) Engineering problem solving and 2) Decision science People optimize Nature optimizes 2 Optimization People optimize (source: http://nautil.us/blog/to-save-drowning-people-ask-yourself-what-would-light-do)

More information

Comparison of Interior Point Filter Line Search Strategies for Constrained Optimization by Performance Profiles

Comparison of Interior Point Filter Line Search Strategies for Constrained Optimization by Performance Profiles INTERNATIONAL JOURNAL OF MATHEMATICS MODELS AND METHODS IN APPLIED SCIENCES Comparison of Interior Point Filter Line Search Strategies for Constrained Optimization by Performance Profiles M. Fernanda P.

More information

Computational study of the step size parameter of the subgradient optimization method

Computational study of the step size parameter of the subgradient optimization method 1 Computational study of the step size parameter of the subgradient optimization method Mengjie Han 1 Abstract The subgradient optimization method is a simple and flexible linear programming iterative

More information

Scalable Coding of Image Collections with Embedded Descriptors

Scalable Coding of Image Collections with Embedded Descriptors Scalable Coding of Image Collections with Embedded Descriptors N. Adami, A. Boschetti, R. Leonardi, P. Migliorati Department of Electronic for Automation, University of Brescia Via Branze, 38, Brescia,

More information