# Managing the Energy/Performance Tradeoff with Machine Learning


The expected time to sleep is:

$$T_t = \sum_{i=1}^{n} p_t(i)\, T_i \qquad (1)$$

where $p_t(i)$ is its current distribution over experts, which favors those that have recently predicted accurately. To build intuition for the algorithm we use, we will first describe the use of the Fixed-share algorithm, due to [], in this problem. The algorithm we will use employs a collection of Fixed-share algorithms in a hierarchy. The collection is managed by a simple Static-expert algorithm [0]. Since a direct application of the Static-expert algorithm is also used in our comparisons, we begin with a description of that algorithm, followed by Fixed-share. In the Static-expert algorithm, which does not model switches between experts, the distribution over experts is updated at each training iteration as follows:

$$p_t(i) = \frac{1}{Z_t}\, p_{t-1}(i)\, e^{-L(i,t-1)} \qquad (2)$$

where $L(i,t)$ denotes the loss of expert $i$ at time $t$, and $Z_t$ normalizes the distribution. The loss captures how the prediction (polling time) relates to the observations; we describe it in detail in the next section. All the algorithms here are modular in terms of the loss function, in the sense that they can accommodate any loss function given as an input to the algorithm. The definition of the loss function can therefore be tailored separately to each application based on its specific objectives. In probabilistic terms, we can also relate the loss function to predictive probabilities. As illustrated in Figure 1, if we define the probability of predicting observation $y_t$ with expert $T_i$ as $P(y_t \mid T_i) = e^{-L(i,t)}$, then the update equation above is consistent with Bayesian estimation, with the identity of the current best polling time (or expert, $i$) as the unknown variable: $p_{t+1}(i) = P(i \mid y_1, \ldots, y_t)$ and $p(y_{t+1} \mid i, y_1, \ldots, y_t) = e^{-L(i,t+1)}$.

Fig. 1. A generalized Hidden Markov Model (HMM) of the probability of the next observation, given past observations and the current best expert.
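As a minimal sketch (not the authors' implementation), the Static-expert update of Equation (2) can be written in a few lines of Python; the per-expert losses here are hypothetical stand-in values:

```python
import math

def static_expert_update(weights, losses):
    """One Static-expert step: p_t(i) = p_{t-1}(i) * exp(-L(i, t-1)) / Z_t (Eq. 2)."""
    updated = [w * math.exp(-l) for w, l in zip(weights, losses)]
    z = sum(updated)  # normalizer Z_t
    return [w / z for w in updated]

# n experts, uniform prior p_1(i) = 1/n
n = 4
p = [1.0 / n] * n
# hypothetical per-expert losses at this iteration
p = static_expert_update(p, [0.5, 0.1, 0.9, 0.3])
```

With equal priors, the expert with the smallest recent loss ends up with the largest weight, which is exactly the "favors those that have recently predicted accurately" behavior described above.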
The Fixed-share algorithm additionally models the probability that the current best expert switches from one time-step to the next, and the updates are thus:

$$p_{t;\alpha}(i) = \frac{1}{Z_t} \sum_{j=1}^{n} p_{t-1}(j)\, e^{-L(j,t-1)}\, P(i \mid j; \alpha) \qquad (3)$$

where $Z_t$ normalizes the distribution. We initialize the weighting to uniform, $p_1(i) = 1/n$, since the learner has no a priori knowledge about the experts, so introducing an arbitrary preference for a given expert could hurt performance if that expert turns out to be a poor predictor. After each update, we renormalize the weights to sum to one, and thus a probability distribution over the experts is maintained. The probability of switching between experts is defined as:

$$P(i \mid j; \alpha) = \begin{cases} 1-\alpha & i = j \\ \dfrac{\alpha}{n-1} & i \neq j \end{cases} \qquad (4)$$

The parameter $\alpha$ indicates how likely switches are to occur in the model's estimate of the current best expert. That is, it is the model's estimate of how often the deterministic polling time corresponding to the current best choice changes, in light of the current rate of packet arrivals, or level of traffic burstiness. Under this model, each expert shares a fraction of its weight with the pool of weights at every update. This ensures that no expert's weight falls exponentially close to zero, which allows for quick weight recovery in the event of a switch in the observations, i.e., in network activity. The $\alpha$ parameter determines how rapidly the algorithm adapts to changes: high values correspond to assuming the observation process changes rapidly, whereas values close to zero assume the process is nearly stationary. Ideally, the Fixed-share algorithm should use the setting of $\alpha$ that matches the true switching rate of the non-stationary distribution being observed. Without prior knowledge of the process, however, it is hard to set this parameter beforehand. Thus, in the algorithm we describe next, this quantity is learned online.

B. Learn-α Algorithm

We will now describe the Learn-α algorithm [5], [6].
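A minimal sketch of one Fixed-share step, combining the loss update with the weight-sharing implied by the switching distribution of Equation (4) (since each expert keeps a $1-\alpha$ fraction of its weight and shares $\alpha$ uniformly among the other $n-1$ experts):

```python
import math

def fixed_share_update(weights, losses, alpha):
    """One Fixed-share step (Eqs. 3-4): loss update, then weight sharing."""
    n = len(weights)
    # Loss update, as in Static-expert.
    w = [wi * math.exp(-li) for wi, li in zip(weights, losses)]
    z = sum(w)
    w = [wi / z for wi in w]
    # Collapsing the sum over j in Eq. (3): expert i keeps (1 - alpha) of
    # its own weight and receives alpha/(n-1) of everyone else's.
    return [(1 - alpha) * wi + alpha / (n - 1) * (1.0 - wi) for wi in w]
```

Setting `alpha = 0` recovers the Static-expert update exactly, which matches the text: values of $\alpha$ near zero assume a nearly stationary process.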
The additional benefit of using Learn-α is that it does not require any prior knowledge or assumptions about the level of non-stationarity. The Fixed-share algorithm uses the parameter α to encode how likely the model believes switches are to occur in which expert is currently predicting best. The Learn-α algorithm learns this quantity online, simultaneously with learning and adapting its polling time. Before stating the details, we give a conceptual view of the Learn-α algorithm in Figure 2. The algorithm

updates a distribution over α-experts, and each α-expert in turn maintains a distribution over polling times.

Fig. 2. The hierarchy maintained by the Learn-α algorithm (α-experts j = 1, ..., m at the top level; polling-time experts i = 1, ..., n below), used to simultaneously learn the current best polling time and the degree of non-stationarity in network activity.

In the Learn-α algorithm, we maintain m α-experts. The α-experts are Fixed-share sub-algorithms that each maintain a distribution over the experts, $p_{t,j}(i) = p_{t;\alpha_j}(i)$, initialized to the uniform distribution. Here $\alpha_j$ is the jth sub-algorithm's definition of the probability that the optimal polling time changes from one discrete $T_i$ to another, and thus $p_{t,j}(i)$ is updated according to Equation (3), using $\alpha = \alpha_j$. The Learn-α algorithm chooses its current polling time $T_t$, i.e., its prediction of the current amount of time it ought to sleep, as the weighted mean of the α-expert predictions:

$$T_t = \sum_{j=1}^{m} p^{top}_t(j)\, T_{t,j} \qquad (5)$$

where $p^{top}_t(j)$ is the distribution the algorithm maintains over α-experts, and $T_{t,j}$ is the jth sub-algorithm's prediction of how long to sleep, computed as the weighted mean over the experts:

$$T_{t,j} = \sum_{i=1}^{n} p_{t,j}(i)\, T_i \qquad (6)$$

The sub-algorithms do not actually sleep for this amount of time, though, as the polling time is chosen by the top level. The algorithm's polling time is thus:

$$T_t = \sum_{j=1}^{m} \sum_{i=1}^{n} p^{top}_t(j)\, p_{t,j}(i)\, T_i \qquad (7)$$

We define the loss $L(\alpha_j, t)$ of the jth sub-algorithm, or α-expert, as:

$$L(\alpha_j, t) = -\log \sum_{i=1}^{n} p_{t,j}(i)\, e^{-L(i,t)} \qquad (8)$$

This form has the Bayesian interpretation that the regret of the jth sub-algorithm is the negative log-likelihood of the current observation under the weighted sum over experts of each expert's prediction, where the weighting is given by the jth sub-algorithm's current distribution over experts. Using the per-expert loss function $L(i,t)$ that we define below, we update $p^{top}_t(j)$, the distribution over α-experts, according to:

$$p^{top}_t(j) = \frac{1}{Z_t}\, p^{top}_{t-1}(j)\, e^{-L(\alpha_j, t-1)} \qquad (9)$$

where $p^{top}_1(j) = 1/m$, $p_{1,j}(i) = 1/n$, and $p_{t,j}(i)$ is updated according to (3), using $\alpha = \alpha_j$.

C. Optimal Discretization

The number m of $\alpha_j$ values to use in the algorithm need not be a parameter, as we can compute the optimal set of values with a discretization algorithm, based on recent work on regret-optimal discretization [6]. Based on this work, only a finite number of α values need be used in order to come close enough to the optimal setting, while minimizing the regret from using too many settings. The discretization is based on uniformly minimizing the regret incurred by the Learn-α algorithm. To use the regret-optimal discretization with respect to any fixed number of learning iterations T, let α* be the unknown optimal switching rate, i.e., the setting of the parameter α that (in hindsight) best describes the sequence of observations to be predicted. Then for any threshold δ > 0, we find a minimal set of discrete switching-rate parameters $\{\alpha_j\}$ such that the additional regret due to discretization (shown in [6]), $T \min_j D(\alpha^* \,\|\, \alpha_j)$, does not exceed δ for any $\alpha^* \in [0, 1]$. Here $D(\cdot \,\|\, \cdot)$ is the relative entropy between Bernoulli distributions. The value of δ controls the number of discrete levels m, and is chosen to balance the ability of the Learn-α algorithm to identify the best discrete choice of switching rate against the additional loss due to discretization. The optimal a priori choice of δ in this sense is δ* = 1/T, and the detailed computation of m(δ*) is given in [6].

IV. LPSM: LEARNING PSM ALGORITHM

A. Objective Function

The algorithm as stated is complete except for the choice of loss function, L(i, t). Note, however, that the performance bounds on the algorithm hold regardless of the choice of loss function. In this application to the wireless power-saving problem, we instantiate the loss function as follows.
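Putting Equations (3)-(9) together, one Learn-α iteration can be sketched as below. This is an illustrative reading of the update order (losses from the last observation drive both levels, then a prediction is emitted), not the authors' code:

```python
import math

def learn_alpha_step(p_top, P, alphas, T, losses):
    """One Learn-alpha iteration, a sketch of Eqs. (3)-(9).

    p_top  : distribution over the m alpha-experts
    P      : m x n matrix; P[j] is sub-algorithm j's distribution over experts
    alphas : the m discrete switching rates alpha_j
    T      : the n candidate polling times T_i
    losses : per-expert losses L(i, t) from the last observation
    Returns (polling_time, p_top, P) after the update.
    """
    m, n = len(alphas), len(T)
    # Sub-algorithm losses, Eq. (8): L(alpha_j, t) = -log sum_i p_{t,j}(i) e^{-L(i,t)}
    sub_loss = [-math.log(sum(P[j][i] * math.exp(-losses[i]) for i in range(n)))
                for j in range(m)]
    # Top-level update over alpha-experts, Eq. (9).
    p_top = [p_top[j] * math.exp(-sub_loss[j]) for j in range(m)]
    z = sum(p_top)
    p_top = [w / z for w in p_top]
    # Fixed-share update of each sub-algorithm, Eqs. (3)-(4).
    for j in range(m):
        w = [P[j][i] * math.exp(-losses[i]) for i in range(n)]
        zj = sum(w)
        w = [wi / zj for wi in w]
        a = alphas[j]
        P[j] = [(1 - a) * wi + a / (n - 1) * (1.0 - wi) for wi in w]
    # Polling time, Eq. (7): doubly weighted mean over experts.
    T_t = sum(p_top[j] * sum(P[j][i] * T[i] for i in range(n)) for j in range(m))
    return T_t, p_top, P
```

Because the prediction is a convex combination at both levels, the emitted polling time always lies between the smallest and largest expert $T_i$, a property the evaluation section relies on.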
The objective at each learning iteration is to choose the polling time $T_t$ that minimizes both the

Fig. 4. Evolution of sleep times with LPSM (1/T): sleep time (s) and queue length versus time (seconds).

Fig. 5. Evolution of sleep times with LPSM (1/log T): sleep time (s) and queue length versus time (seconds).

conflicting goals. This can be formulated as an energy-minimization problem, subject to the constraint that latency be upper bounded by a threshold; γ is then the Lagrange multiplier enforcing that constraint. To clarify the relation between using latency as a constraint and including it as a term in the loss, note that increasing γ monotonically increases the effect of latency on the loss, which the algorithm seeks to minimize. Note that this is one of many possible loss functions that capture the tradeoff that must be optimized for this application. To summarize, the LPSM algorithm proceeds as shown in Figure 3.

V. PERFORMANCE EVALUATION

In Section V-A, we study the performance of LPSM with respect to the tradeoff between energy savings and performance degradation, using trace-driven simulations. We also compare the performance of LPSM to previously proposed power-saving algorithms []. In Section V-B, we use loss-function units to study the behavior of LPSM and other online learning algorithms on traces of real-time Web activity.

A. LPSM Performance: Energy/Latency Tradeoff

In this section, we study the performance of LPSM with respect to the energy/latency tradeoff. After describing the setup of our ns-2 simulation, we compare the performance of LPSM with 802.11 static PSM and BSD [].

1) Evaluation Framework: The simulation framework that we used to evaluate LPSM is the same as that used to evaluate previous 802.11 power-saving algorithms such as BSD [], [5]. We briefly summarize that framework here. We use a simple three-node topology: a mobile client accesses a Web server via an 802.11 access point.
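The paper's exact loss function is defined in a figure not reproduced in this excerpt; purely as a hypothetical sketch of the form described above (a γ-weighted latency penalty plus an energy term of either 1/T or 1/log T), one might write:

```python
import math

def lpsm_loss(T_ms, queued_packets, gamma, energy_term="1/T"):
    """Hypothetical loss for polling time T_ms (in ms, assumed > 1).

    Not the paper's exact definition: a latency penalty scaled by the
    Lagrange multiplier gamma, plus an energy term. 1/log T decays more
    slowly than 1/T, so it weights energy more heavily relative to latency.
    """
    latency_penalty = gamma * queued_packets * T_ms  # packets waited ~T_ms
    if energy_term == "1/T":
        energy_penalty = 1.0 / T_ms
    else:  # "1/log T"
        energy_penalty = 1.0 / math.log(T_ms)
    return latency_penalty + energy_penalty
```

Consistent with the text, increasing `gamma` monotonically increases the effect of latency on the loss, and the 1/log T variant penalizes energy usage more severely.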
The bandwidth between the mobile host and the base station is 5 Mbps and the latency is 0.1 ms; the bandwidth between the base station and the Web server is 10 Mbps and the latency is 20 ms. As in previous work [], we do not model the details of the 802.11 MAC protocol. Rather, we model sleep times with some simple modifications to ns-2 [5]. A sleeping device does not forward any packets, but buffers them until the device wakes up again. A device wakes up whenever it has data to forward to the access point, and sleeps for the interval determined by LPSM. It remains awake after polling only if the link remains active. Since the 802.11 PSM framework assumes that nodes wake up and poll only on 100 ms boundaries, for the sake of synchronization between sleeping and awake nodes, we rounded the sleep interval determined by LPSM to the nearest 100 ms. We modeled energy consumption using previous estimates []: 750 mW when the interface is awake, 50 mW when the interface is asleep, and 1.5 mJ for each beacon. We report results from an experiment that involved simulating 100 Web transfers from the client to the Web server over approximately 1.5 hours of simulation time. To verify the robustness of our results, we also ran 4 independent experiments of 500 Web transfers, each over approximately 8.5 hours of simulation time. As in previous work [], we modeled Web browsing behavior using the ns-2 HTTP-based traffic generator, with FullTcp connections. In this model, a Web transfer consists of several steps: (1) the client opens a TCP connection and sends a request; (2) after some delay, the server sends a response and several embedded images, and the client may open up to 4 parallel TCP connections to retrieve these images; (3) the client waits for some amount of think time before making the next request. Also as in previous work [], we randomly selected the parameters for each request based on an empirical distribution [6], and we
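The beacon-boundary rounding and the energy accounting described above can be sketched as follows; the power and beacon constants are taken from the estimates cited in the text and should be treated as assumptions of this sketch:

```python
def round_to_beacon(sleep_ms, beacon_ms=100):
    """Round an LPSM sleep interval to the nearest beacon boundary,
    since 802.11 PSM nodes wake and poll only on 100 ms multiples."""
    return max(beacon_ms, round(sleep_ms / beacon_ms) * beacon_ms)

def energy_joules(awake_s, asleep_s, beacons,
                  p_awake=0.750, p_sleep=0.050, e_beacon=0.0015):
    """Energy model: 750 mW awake, 50 mW asleep, 1.5 mJ per beacon
    (constants assumed from the estimates cited in the text)."""
    return awake_s * p_awake + asleep_s * p_sleep + beacons * e_beacon
```

The `max(...)` guard reflects that a device cannot poll more often than the beacon period itself.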

Fig. 6. Average energy usage per page (listen/sleep/awake) for various PSM algorithms: BSD-100, BSD-10, Static-PSM, LPSM (1/log T), LPSM (1/T). Results from the 100-page trial.

Fig. 7. Average slowdown over 802.11 without power-saving for various PSM algorithms. Results from the 100-page trial.

Fig. 8. Average energy per page for various PSM algorithms. Results from 4 independent 500-page trials.

Fig. 9. Average slowdown over 802.11 without power-saving for various PSM algorithms. Results from 4 independent 500-page trials.

limited client think time to 1000 seconds. In these experiments, we configured LPSM with experts spanning the range from 100 ms to 1.2 seconds, at regularly spaced intervals of 100 ms. Thus, the lowest expert, 100 ms, matched the polling time of the current 802.11 PSM standard, and the highest expert was 1.2 seconds. Since a convex combination is upper bounded by its maximum value and lower bounded by its minimum value, we were assured of only using polling times within this range. As discussed in Section IV, the learning function is modular with respect to the loss function, as it will seek to minimize whichever loss function one uses to encode the energy/slowdown tradeoff. We experimented with the effects of two different loss functions: (1) the loss function defined in Figure 3, with 1/T as its second term; and (2) a loss function that uses 1/log T as its second term, to penalize energy usage more severely. For the first loss function, we used γ = /(. 0 5 ); for the latter, we used γ = /(. 0 ).

2) Algorithm Behavior: Figures 4 and 5 show the evolution of sleep times over a portion of the simulation trace. When LPSM discovers that packets are queued at the access point, it quickly reduces the sleep interval to 100 ms.
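The guarantee that a convex combination stays within the expert range can be seen directly; the grid below assumes the configuration described above (100 ms to 1.2 s in 100 ms steps):

```python
# Experts are a fixed grid of candidate polling times, assumed here to be
# 100 ms to 1.2 s at 100 ms spacing, as configured in the experiments.
experts_ms = list(range(100, 1300, 100))  # 12 experts

def predict_ms(weights, experts=experts_ms):
    """Weighted-mean prediction; a convex combination of the experts
    is bounded by the smallest and largest expert values."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * t for w, t in zip(weights, experts))
```

With uniform weights, the prediction is simply the mean of the grid, which is also the initial polling time before any learning has occurred.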
During periods of inactivity, LPSM continually increases the sleep time each time the device wakes up and discovers that no packets are queued for it. These two figures also illustrate how the choice of loss function affects the behavior of the learning algorithm. When the learning algorithm uses a loss function with 1/T as its energy term, it backs off much more slowly than when it uses a loss function with 1/log T as its energy term. This makes sense: the latter loss function places relatively more importance on the energy term, the penalty incurred for waking up when no packets are queued for the device.

3) Performance: Energy vs. Latency: Figures 6 and 7 characterize the energy savings and slowdown of LPSM relative to static PSM, which we ran under the same simulation using the implementation from [5]. As in previous work, the slowdown is the ratio of the latency incurred by the PSM algorithm to the latency incurred by using

Fig. 10. Average slowdown vs. the time to download each page without power-saving, for (a) static PSM, (b) LPSM (1/T), and (c) LPSM (1/log T). Average slowdown for the entire experiment is shown with a horizontal line. LPSM (1/T) imposes only a slight increase in slowdown over static PSM. Using a loss function with the energy term 1/log T saves more energy at the cost of increased slowdown; however, it never increases slowdown over static PSM by more than a factor of 2 for a given page.

no PSM whatsoever. Our first result is that LPSM consistently decreased per-page energy consumption. For a slight increase, %, in slowdown over static PSM, LPSM with 1/T as the energy model reduces overall energy consumption by about 7% (from .6 J to .8 J), and energy due to beaconing by almost 0% (from 0.8 J to 0.59 J). LPSM with 1/log T as its energy term, on the other hand, reduces overall energy consumption by nearly 0% (from .6 J to .95 J) and energy consumption due to beaconing by more than 80% (from 0.8 J to 0.16 J), while increasing slowdown over static PSM by only a factor of .9. To verify the robustness of our results, we also ran longer simulations for LPSM with 1/log T as the energy term. Figures 8 and 9 show results averaged over 4 independent 500-page experiments, each one starting the traffic simulator with a different random seed. We also ran static PSM in each of these four simulations and averaged its results, for a fair comparison. This experiment shows similar results: energy savings of over 8% (from 4. J to .6 J) and an 82% reduction in energy consumption due to beaconing (from 0.94 J to 0.17 J), while increasing slowdown over static PSM by a factor of only .. Figures 6-9 also compare the performance of LPSM with BSD.
We ran BSD in the same simulation for both scenarios described above, using the original BSD implementation [5], including averaging over the four runs with the same set of random seeds for traffic generation as used for LPSM. LPSM shows an energy reduction versus most settings of BSD, though with higher slowdowns. Notably, LPSM using 1/log T to model energy had deeper energy savings than all the previous BSD settings, though it increased slowdown more. It is important to note that LPSM can work well even if the distribution generating the observations of network activity changes with time. Since the simulation generated traffic from a stationary distribution, we would expect only better results against BSD in practice. Additionally, in both settings we ran with m optimized for a timescale of 45 minutes; we could expect better results, especially in the longer trials, had it been optimized for the length of the trial. Yet these results lend validity to using LPSM even under resource limitations. Moreover, LPSM offers several unique advantages over existing approaches. First, because LPSM's determination of appropriate sleep times is based on a loss function, rather than a single parameter, LPSM provides designers significantly more flexibility than BSD in exploring the energy/performance tradeoff. It should be possible to simultaneously reduce both energy and slowdown relative to previous approaches by appropriate choice and calibration of the loss function. Second, because LPSM uses a learning algorithm designed to react well under non-stationarity, we would expect LPSM to perform better than BSD when the distribution generating traffic changes over time, a situation not modeled in this simulation but realistic in practice. These hypotheses deserve further attention and present interesting possibilities for future work. Figure 10 shows the slowdown behavior of various power-saving algorithms for individual Web page downloads.
LPSM imposes only a moderate increase in slowdown over static PSM for various Web pages: only a % increase on average, and never more than 0%. Changing the loss function in LPSM to favor energy reduction more will obviously increase the slowdown, as shown in Figure 10(c), but even this loss function never incurs a slowdown of more than a factor of 2 for any given Web page. For longer Web transfers, both LPSM algorithms introduce only negligible slowdown. Some shorter Web transfers incur larger slowdowns than static

Fig. 11. Performance of LPSM with different loss functions: energy (J) versus slowdown. Modeling energy usage as 1/T gives the top curve, and as 1/log T the bottom curve. Points on a curve indicate varying settings of γ.

PSM, but these download times are short anyway, so the additional slowdown that LPSM imposes is not drastic. Varying the latency constraint in the Lagrange optimization (i.e., varying γ) and varying the functional form of the loss function give LPSM flexibility with respect to the tradeoff between energy and latency. For example, Figure 11 shows the performance of LPSM for various settings of γ and the loss function; each point in this figure represents a 100-page simulation experiment. Clearly, calibrating the γ value is important for achieving good slowdown performance at a given level of energy consumption. Additionally, varying the functional form has significant effects on the behavior of the learning algorithm: using 1/T as the energy term in the loss function achieves better slowdown performance than using 1/log T, at the cost of more energy consumption. Again, note that LPSM with either loss function allows significant energy savings over static PSM with only moderate additional slowdown.

B. Trace-based Analysis of LPSM and Online Learning Algorithms

We also ran LPSM on traces of real network activity [6]. This differs from the trace-driven simulation because there the distribution for traffic generation was created by averaging over the traces [5]. Running the LPSM algorithm on the traces themselves, outside of a simulation, thus allows it to observe real network activity, with potentially less stationarity of traffic than the fixed distribution used in the simulation environment. We present these results to further illustrate the behavior of LPSM versus 802.11 PSM, as well as other online learning algorithms, in this problem domain.
We first illustrate the behavior of Fixed-share; Learn-α then runs with the current best linear combination of the existing Fixed-share sub-algorithms' distributions over experts at any given time. The Fixed-share algorithm we illustrate below runs with α = 0.0. We used publicly available traces of network activity from a UC Berkeley home dial-up server that monitored users accessing HTTP files from home [6] (the same trace we used to synthetically generate Web traffic for the ns-2 simulations). Because the traces provided only the start time, end time, and number of bytes transferred for each connection, we smoothed each connection's total bytes uniformly over 10 ms intervals spanning its duration. In the network trace experiments below, we ran with 10 experts spanning the range of 1000 ms at regularly spaced intervals of 100 ms, and γ = .0 0 7, calibrated to attain polling times within the range of the existing protocol.

Fig. 12. The loss of each expert as a function of time; the circled path is the loss of the algorithm. The bottom figure zooms in on the earlier iterations.
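The per-connection smoothing step described above can be sketched as follows; the 10 ms interval width is taken from the text, and the dictionary representation is an assumption of this sketch:

```python
def smooth_bytes(start_ms, end_ms, total_bytes, interval_ms=10):
    """Spread a connection's bytes uniformly over fixed-width intervals
    spanning [start_ms, end_ms), since the traces record only start, end,
    and total bytes per connection. Returns {interval_index: bytes}."""
    first = start_ms // interval_ms
    last = max(first, (end_ms - 1) // interval_ms)
    n = last - first + 1
    return {k: total_bytes / n for k in range(first, last + 1)}
```

Summing the per-interval values recovers the connection's total, so overlapping connections can simply be added interval-by-interval to form the activity signal the learner observes.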

1) Behavior of Online Learning Algorithms: Figure 12 shows the loss, on one trace, of each of the experts as a function of time, measured at the algorithm's current polling times; the circled path is the loss of the algorithm. Since the algorithm allows for switching between the experts, it is able to maintain loss close to, or better than, the loss of the best current expert, even though our results show that the identity of the best current expert changes with time. The bottom graph, which zooms in on the early phases of the run, shows that it takes some time before the algorithm does as well as the best current expert. Note, however, that since the weights start uniform, the initial polling time is the mean of the experts' polling times, so even in the early iterations the algorithm already beats the worse half of the experts and tracks the average expert. Figure 13 shows the evolution of the distribution that the algorithm maintains over experts. As expected, the algorithm shifts its weights on the experts based on which experts would currently perform best on the observed network activity. Changes in the burstiness level of the arriving bytes are reflected in shifts in which expert currently has the highest weight, starting even in the early learning iterations. The bottom figure, which zooms in on the earlier phases of learning, shows that the algorithm's preference for a given expert can easily decrease and then increase in light of network activity. For example, after several periods of being the worst expert, an expert is able to regain weight as the best expert after iteration 600, since its weight never went exponentially to zero. These results also confirm that for real network traffic, no single deterministic setting of the polling time works well all the time.
The algorithm can do better than any single expert, since its prediction is a convex combination of all the experts and the weight updates allow it to track the expert that is currently best. Note that these online adjustments of which expert the algorithm currently favors are closely reflected in the evolution of sleep times graphed in Figures 4 and 5 of Section V-A.

2) Competitive Analysis: To perform competitive analysis of the learning algorithm, we compare it to the hindsight-best expert, as well as to the current best expert at each time. Figure 14 illustrates how the learning algorithm does in relation to the best fixed expert computed in hindsight, and to the current best expert at each time epoch at which the algorithm does a learning update. In the first figure, the algorithm learns to track, and actually do better than, the best fixed expert. The second figure approximates comparison to the best k-partition (i.e., the best partition of the trace into k parts, with a choice of expert for each part), where k equals the length of the full time horizon of the trace. Note that as k increases, the optimal k-partition is harder to track, since it is the limit of non-stationarity of the process, so this is close to the hardest partitioning. Here the algorithm does not beat the loss of the sequence of best experts, but it is at least able to track it, as per the bounds [0], []. We see that the algorithm's performance is never too far from that of the current best expert.

Fig. 13. The weights that the algorithm maintains on each expert (each a deterministic setting of T), per training iteration (on waking). The bottom figure zooms in on the earlier iterations.
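The two comparators used in this competitive analysis can be computed directly from a table of per-expert losses; this is a generic sketch of the bookkeeping, not the paper's evaluation code:

```python
def competitive_summary(loss_matrix, alg_losses):
    """Compare an algorithm's cumulative loss against (a) the best fixed
    expert in hindsight and (b) the sequence of per-step best experts
    (the k-partition limit, k = time horizon).
    loss_matrix[t][i] = L(i, t); alg_losses[t] = the algorithm's loss at t.
    """
    n = len(loss_matrix[0])
    best_fixed = min(sum(row[i] for row in loss_matrix) for i in range(n))
    best_sequence = sum(min(row) for row in loss_matrix)
    return {"algorithm": sum(alg_losses),
            "best_fixed_expert": best_fixed,
            "best_expert_sequence": best_sequence}
```

By construction the per-step best sequence never loses more than the best fixed expert, which is why, as the text notes, the algorithm can only hope to track it rather than beat it.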
Even in the first figure, while it is unsurprising that an algorithm that allows switching between the experts can beat the single best expert, the intuition for why it performs better than the performance bound is as follows. The performance bound is a worst-case upper bound on the regret of the algorithm, which need not be met in practice. In the worst-case framework, an adversary gives the learning algorithm a bank of experts as a black box; the algorithm knows nothing about the true mechanism by which the experts make predictions. In our case, however, in applying the algorithm we were able to choose the set of experts that best fits this problem domain. Thus, by choosing experts that actually have a semantics:

Fig. 14. Competitive analysis: loss of the algorithm versus time (circled), against the loss of the best fixed expert (top) and the loss of the current best expert per training epoch (bottom).

Fig. 15. Cumulative loss comparison. The curve shows the cumulative loss of Fixed-share(α) across the range of α values, compared against the cumulative loss on the same trace of the IEEE 802.11 PSM protocol, an arbitrary expert (500 ms), the best expert (00 ms), Static-expert, and Learn-α(δ*). The bottom figure zooms in on the first 0.00 of the range for α.

a discretization of the range of values we would like to consider for polling time, the algorithm can actually perform better than any of the individual experts, as it strives to be the best (cumulative-loss minimizing) linear combination of the experts at any given timestep. It is thus able to adaptively smooth over the discrete values that the experts represent, and to operate with the current best smoothed value in the desired polling-time range.

3) LPSM advantages: We now focus on Learn-α, the algorithm used for LPSM, which maintains a bank of sub-algorithms of the type analyzed above. The strength of this algorithm is that it learns α online while performing the learning task, and the empirical gains of this are shown in the next figures. Figure 15 compares the cumulative losses of the various algorithms on a 4-hour trace, with observation epochs every 10 ms. This corresponds to approximately 26,200 training iterations for the learning algorithms.
Note that in the typical online learning scenario, T, the number of learning iterations (or time horizon) for which one would benchmark the performance of the algorithm, is just the number of observation epochs. In this application, the number of training epochs need not match the number of observation epochs, since the application involves sleeping through many observation epochs, and learning is done only upon awakening. Since in these experiments the learning algorithms (Static-expert, Fixed-share, and Learn-α) are each compared using n experts spanning the range of 1000 ms at regularly spaced intervals of 100 ms, in order to get an a priori estimate of T we assume a mean sleep interval of 550 ms, i.e., the mean of the experts. It is important to note, however, that the polling times of each of the runs


CSMA based Medium Access Control for Wireless Sensor Network H. Hoang, Halmstad University Abstract Wireless sensor networks bring many challenges on implementation of Medium Access Control protocols because

### 6.033 Spring 2015 Lecture #11: Transport Layer Congestion Control Hari Balakrishnan Scribed by Qian Long

6.033 Spring 2015 Lecture #11: Transport Layer Congestion Control Hari Balakrishnan Scribed by Qian Long Please read Chapter 19 of the 6.02 book for background, especially on acknowledgments (ACKs), timers,

### Improving VoD System Efficiency with Multicast and Caching

Improving VoD System Efficiency with Multicast and Caching Jack Yiu-bun Lee Department of Information Engineering The Chinese University of Hong Kong Contents 1. Introduction 2. Previous Works 3. UVoD

### Intra and Inter Cluster Synchronization Scheme for Cluster Based Sensor Network

Intra and Inter Cluster Synchronization Scheme for Cluster Based Sensor Network V. Shunmuga Sundari 1, N. Mymoon Zuviria 2 1 Student, 2 Asisstant Professor, Computer Science and Engineering, National College

### Parallel Programming Interfaces

Parallel Programming Interfaces Background Different hardware architectures have led to fundamentally different ways parallel computers are programmed today. There are two basic architectures that general

### Throughput Maximization for Energy Efficient Multi-Node Communications using Actor-Critic Approach

Throughput Maximization for Energy Efficient Multi-Node Communications using Actor-Critic Approach Charles Pandana and K. J. Ray Liu Department of Electrical and Computer Engineering University of Maryland,

### CSE151 Assignment 2 Markov Decision Processes in the Grid World

CSE5 Assignment Markov Decision Processes in the Grid World Grace Lin A484 gclin@ucsd.edu Tom Maddock A55645 tmaddock@ucsd.edu Abstract Markov decision processes exemplify sequential problems, which are

### A Pipelined Memory Management Algorithm for Distributed Shared Memory Switches

A Pipelined Memory Management Algorithm for Distributed Shared Memory Switches Xike Li, Student Member, IEEE, Itamar Elhanany, Senior Member, IEEE* Abstract The distributed shared memory (DSM) packet switching

### !! What is virtual memory and when is it useful? !! What is demand paging? !! When should pages in memory be replaced?

Chapter 10: Virtual Memory Questions? CSCI [4 6] 730 Operating Systems Virtual Memory!! What is virtual memory and when is it useful?!! What is demand paging?!! When should pages in memory be replaced?!!

### Performance Analysis of TCP Variants

102 Performance Analysis of TCP Variants Abhishek Sawarkar Northeastern University, MA 02115 Himanshu Saraswat PES MCOE,Pune-411005 Abstract The widely used TCP protocol was developed to provide reliable

### Assignment 7: TCP and Congestion Control Due the week of October 29/30, 2015

Assignment 7: TCP and Congestion Control Due the week of October 29/30, 2015 I d like to complete our exploration of TCP by taking a close look at the topic of congestion control in TCP. To prepare for

### Delay-minimal Transmission for Energy Constrained Wireless Communications

Delay-minimal Transmission for Energy Constrained Wireless Communications Jing Yang Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, M0742 yangjing@umd.edu

### Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks. Congestion Control in Today s Internet

Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks Ion Stoica CMU Scott Shenker Xerox PARC Hui Zhang CMU Congestion Control in Today s Internet Rely

### Analysis of Binary Adjustment Algorithms in Fair Heterogeneous Networks

Analysis of Binary Adjustment Algorithms in Fair Heterogeneous Networks Sergey Gorinsky Harrick Vin Technical Report TR2000-32 Department of Computer Sciences, University of Texas at Austin Taylor Hall

### CS244 Advanced Topics in Computer Networks Midterm Exam Monday, May 2, 2016 OPEN BOOK, OPEN NOTES, INTERNET OFF

CS244 Advanced Topics in Computer Networks Midterm Exam Monday, May 2, 2016 OPEN BOOK, OPEN NOTES, INTERNET OFF Your Name: Answers SUNet ID: root @stanford.edu In accordance with both the letter and the

### Improving TCP Performance over Wireless Networks using Loss Predictors

Improving TCP Performance over Wireless Networks using Loss Predictors Fabio Martignon Dipartimento Elettronica e Informazione Politecnico di Milano P.zza L. Da Vinci 32, 20133 Milano Email: martignon@elet.polimi.it

### Achieve Significant Throughput Gains in Wireless Networks with Large Delay-Bandwidth Product

Available online at www.sciencedirect.com ScienceDirect IERI Procedia 10 (2014 ) 153 159 2014 International Conference on Future Information Engineering Achieve Significant Throughput Gains in Wireless

### Accurate and Energy-efficient Congestion Level Measurement in Ad Hoc Networks

Accurate and Energy-efficient Congestion Level Measurement in Ad Hoc Networks Jaewon Kang Computer Science Rutgers University jwkang@cs.rutgers.edu Yanyong Zhang Electrical & Computer Engineering Rutgers

### Analytic Performance Models for Bounded Queueing Systems

Analytic Performance Models for Bounded Queueing Systems Praveen Krishnamurthy Roger D. Chamberlain Praveen Krishnamurthy and Roger D. Chamberlain, Analytic Performance Models for Bounded Queueing Systems,

### Impact of IEEE MAC Packet Size on Performance of Wireless Sensor Networks

IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) e-issn: 2278-2834,p- ISSN: 2278-8735.Volume 10, Issue 3, Ver. IV (May - Jun.2015), PP 06-11 www.iosrjournals.org Impact of IEEE 802.11

### Predictive Indexing for Fast Search

Predictive Indexing for Fast Search Sharad Goel, John Langford and Alex Strehl Yahoo! Research, New York Modern Massive Data Sets (MMDS) June 25, 2008 Goel, Langford & Strehl (Yahoo! Research) Predictive

### Router Design: Table Lookups and Packet Scheduling EECS 122: Lecture 13

Router Design: Table Lookups and Packet Scheduling EECS 122: Lecture 13 Department of Electrical Engineering and Computer Sciences University of California Berkeley Review: Switch Architectures Input Queued

### Quest for High-Performance Bufferless NoCs with Single-Cycle Express Paths and Self-Learning Throttling

Quest for High-Performance Bufferless NoCs with Single-Cycle Express Paths and Self-Learning Throttling Bhavya K. Daya, Li-Shiuan Peh, Anantha P. Chandrakasan Dept. of Electrical Engineering and Computer

### Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2015

1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2015 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

### A Dynamic Disk Spin-down Technique for Mobile Computing

A Dynamic Disk Spin-down Technique for Mobile Computing David P. Helmbold, Darrell D. E. Long and Bruce Sherrod Department of Computer Science University of California, Santa Cruz Abstract We address the

### Symphony: An Integrated Multimedia File System

Symphony: An Integrated Multimedia File System Prashant J. Shenoy, Pawan Goyal, Sriram S. Rao, and Harrick M. Vin Distributed Multimedia Computing Laboratory Department of Computer Sciences, University

### A Strategy to Manage Cache Coherency in a Distributed Mobile Wireless Environment

A Strategy to Manage Cache Coherency in a Distributed Mobile Wireless Environment Anurag Kahol, Sumit Khurana, Sandeep K. S. Gupta, and Pradip K. Srimani Shumei Chen Hoang Nguyen Jeff Wilson Team Members

### Effects of Sensor Nodes Mobility on Routing Energy Consumption Level and Performance of Wireless Sensor Networks

Effects of Sensor Nodes Mobility on Routing Energy Consumption Level and Performance of Wireless Sensor Networks Mina Malekzadeh Golestan University Zohre Fereidooni Golestan University M.H. Shahrokh Abadi

### 10. Network dimensioning

Partly based on slide material by Samuli Aalto and Jorma Virtamo ELEC-C7210 Modeling and analysis of communication networks 1 Contents Introduction Parameters: topology, routing and traffic Dimensioning

### DESIGN AND ANALYSIS OF ALGORITHMS. Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES

DESIGN AND ANALYSIS OF ALGORITHMS Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES http://milanvachhani.blogspot.in USE OF LOOPS As we break down algorithm into sub-algorithms, sooner or later we shall

### Congestion Avoidance

COMP 631: NETWORKED & DISTRIBUTED SYSTEMS Congestion Avoidance Jasleen Kaur Fall 2016 1 Avoiding Congestion: Strategies TCP s strategy: congestion control Ø Control congestion once it occurs Repeatedly

### Analysis of TCP Latency over Wireless Links Supporting FEC/ARQ-SR for Error Recovery

Analysis of TCP Latency over Wireless Links Supporting FEC/ARQ-SR for Error Recovery Raja Abdelmoumen CRISTAL Laboratory, Tunisia Email: Raja.Abdelmoumen@ensi.rnu.tn Chadi Barakat Projet Planète, INRIA-Sophia

### A Routing Protocol for Utilizing Multiple Channels in Multi-Hop Wireless Networks with a Single Transceiver

1 A Routing Protocol for Utilizing Multiple Channels in Multi-Hop Wireless Networks with a Single Transceiver Jungmin So Dept. of Computer Science, and Coordinated Science Laboratory University of Illinois

### Frame Relay. Frame Relay: characteristics

Frame Relay Andrea Bianco Telecommunication Network Group firstname.lastname@polito.it http://www.telematica.polito.it/ Network management and QoS provisioning - 1 Frame Relay: characteristics Packet switching

### Formal Model. Figure 1: The target concept T is a subset of the concept S = [0, 1]. The search agent needs to search S for a point in T.

Although this paper analyzes shaping with respect to its benefits on search problems, the reader should recognize that shaping is often intimately related to reinforcement learning. The objective in reinforcement

### Sleep/Wake Aware Local Monitoring (SLAM)

Sleep/Wake Aware Local Monitoring (SLAM) Issa Khalil, Saurabh Bagchi, Ness Shroff Dependable Computing Systems Lab (DCSL) & Center for Wireless Systems and Applications (CWSA) School of Electrical and

### Tradeoff between coverage of a Markov prefetcher and memory bandwidth usage

Tradeoff between coverage of a Markov prefetcher and memory bandwidth usage Elec525 Spring 2005 Raj Bandyopadhyay, Mandy Liu, Nico Peña Hypothesis Some modern processors use a prefetching unit at the front-end

### MAC Theory. Chapter 7. Ad Hoc and Sensor Networks Roger Wattenhofer

MAC Theory Chapter 7 7/1 Seeing Through Walls! [Wilson, Patwari, U. Utah] Schoolboy s dream, now reality thank to sensor networks... 7/2 Rating Area maturity First steps Text book Practical importance

### Threshold-Based Markov Prefetchers

Threshold-Based Markov Prefetchers Carlos Marchani Tamer Mohamed Lerzan Celikkanat George AbiNader Rice University, Department of Electrical and Computer Engineering ELEC 525, Spring 26 Abstract In this

### COMPUTER NETWORK PERFORMANCE. Gaia Maselli Room: 319

COMPUTER NETWORK PERFORMANCE Gaia Maselli maselli@di.uniroma1.it Room: 319 Computer Networks Performance 2 Overview of first class Practical Info (schedule, exam, readings) Goal of this course Contents

### Markov Model Based Congestion Control for TCP

Markov Model Based Congestion Control for TCP Shan Suthaharan University of North Carolina at Greensboro, Greensboro, NC 27402, USA ssuthaharan@uncg.edu Abstract The Random Early Detection (RED) scheme

### Sensor Deployment, Self- Organization, And Localization. Model of Sensor Nodes. Model of Sensor Nodes. WiSe

Sensor Deployment, Self- Organization, And Localization Material taken from Sensor Network Operations by Shashi Phoa, Thomas La Porta and Christopher Griffin, John Wiley, 2007 5/20/2008 WiSeLab@WMU; www.cs.wmich.edu/wise

### Overcompressing JPEG images with Evolution Algorithms

Author manuscript, published in "EvoIASP2007, Valencia : Spain (2007)" Overcompressing JPEG images with Evolution Algorithms Jacques Lévy Véhel 1, Franklin Mendivil 2 and Evelyne Lutton 1 1 Inria, Complex

### Random Early Detection (RED) gateways. Sally Floyd CS 268: Computer Networks

Random Early Detection (RED) gateways Sally Floyd CS 268: Computer Networks floyd@eelblgov March 20, 1995 1 The Environment Feedback-based transport protocols (eg, TCP) Problems with current Drop-Tail

### Episode 5. Scheduling and Traffic Management

Episode 5. Scheduling and Traffic Management Part 3 Baochun Li Department of Electrical and Computer Engineering University of Toronto Outline What is scheduling? Why do we need it? Requirements of a scheduling

### Admission Control in Time-Slotted Multihop Mobile Networks

dmission ontrol in Time-Slotted Multihop Mobile Networks Shagun Dusad and nshul Khandelwal Information Networks Laboratory Department of Electrical Engineering Indian Institute of Technology - ombay Mumbai

### THE preceding chapters were all devoted to the analysis of images and signals which

Chapter 5 Segmentation of Color, Texture, and Orientation Images THE preceding chapters were all devoted to the analysis of images and signals which take values in IR. It is often necessary, however, to

### Online Facility Location

Online Facility Location Adam Meyerson Abstract We consider the online variant of facility location, in which demand points arrive one at a time and we must maintain a set of facilities to service these

### Episode 5. Scheduling and Traffic Management

Episode 5. Scheduling and Traffic Management Part 3 Baochun Li Department of Electrical and Computer Engineering University of Toronto Outline What is scheduling? Why do we need it? Requirements of a scheduling

### Fast Retransmit. Problem: coarsegrain. timeouts lead to idle periods Fast retransmit: use duplicate ACKs to trigger retransmission

Fast Retransmit Problem: coarsegrain TCP timeouts lead to idle periods Fast retransmit: use duplicate ACKs to trigger retransmission Packet 1 Packet 2 Packet 3 Packet 4 Packet 5 Packet 6 Sender Receiver

### A Framework for Clustering Massive Text and Categorical Data Streams

A Framework for Clustering Massive Text and Categorical Data Streams Charu C. Aggarwal IBM T. J. Watson Research Center charu@us.ibm.com Philip S. Yu IBM T. J.Watson Research Center psyu@us.ibm.com Abstract

### Distributed Computing over Communication Networks: Leader Election

Distributed Computing over Communication Networks: Leader Election Motivation Reasons for electing a leader? Reasons for not electing a leader? Motivation Reasons for electing a leader? Once elected, coordination

### Choosing Beacon Period for Improved Response Time for Wireless HTTP Clients

Choosing Beacon Period for Improved Response Time for Wireless HTTP Clients Zachary Anderson Suman Nath Srinivasan Seshan Department of Computer Science Carnegie Mellon University 5000 Forbes Avenue Pittsburgh,

### Module 1 Lecture Notes 2. Optimization Problem and Model Formulation

Optimization Methods: Introduction and Basic concepts 1 Module 1 Lecture Notes 2 Optimization Problem and Model Formulation Introduction In the previous lecture we studied the evolution of optimization

### Maintaining Mutual Consistency for Cached Web Objects

Maintaining Mutual Consistency for Cached Web Objects Bhuvan Urgaonkar, Anoop George Ninan, Mohammad Salimullah Raunak Prashant Shenoy and Krithi Ramamritham Department of Computer Science, University

### Localization approaches based on Ethernet technology

Localization approaches based on Ethernet technology Kees den Hollander, GM Garner, Feifei Feng, Paul Jeong, Eric H.S. Ryu Contact: Eric H.S. Ryu eric_ryu@samsung.com Abstract This document describes two

### Dell OpenManage Power Center s Power Policies for 12 th -Generation Servers

Dell OpenManage Power Center s Power Policies for 12 th -Generation Servers This Dell white paper describes the advantages of using the Dell OpenManage Power Center to set power policies in a data center.

### Supporting Service Differentiation for Real-Time and Best-Effort Traffic in Stateless Wireless Ad-Hoc Networks (SWAN)

Supporting Service Differentiation for Real-Time and Best-Effort Traffic in Stateless Wireless Ad-Hoc Networks (SWAN) G. S. Ahn, A. T. Campbell, A. Veres, and L. H. Sun IEEE Trans. On Mobile Computing

### On Pairwise Connectivity of Wireless Multihop Networks

On Pairwise Connectivity of Wireless Multihop Networks Fangting Sun and Mark Shayman Department of Electrical and Computer Engineering University of Maryland, College Park, MD 2742 {ftsun, shayman}@eng.umd.edu

### Best Practices. Deploying Optim Performance Manager in large scale environments. IBM Optim Performance Manager Extended Edition V4.1.0.

IBM Optim Performance Manager Extended Edition V4.1.0.1 Best Practices Deploying Optim Performance Manager in large scale environments Ute Baumbach (bmb@de.ibm.com) Optim Performance Manager Development

### Energy-Efficient Receiver-Driven Wireless Mesh Sensor Networks

Sensors 2011, 11, 111-137; doi:10.3390/s110100111 OPEN ACCESS sensors ISSN 1424-8220 www.mdpi.com/journal/sensors Article Energy-Efficient Receiver-Driven Wireless Mesh Sensor Networks Daichi Kominami

### Delay Analysis of ML-MAC Algorithm For Wireless Sensor Networks

Delay Analysis of ML-MAC Algorithm For Wireless Sensor Networks Madhusmita Nandi School of Electronics Engineering, KIIT University Bhubaneswar-751024, Odisha, India ABSTRACT The present work is to evaluate

### OPSM - Opportunistic Power Save Mode for Infrastructure IEEE WLAN

OPSM - Opportunistic Power Save Mode for Infrastructure IEEE 82.11 WLAN Pranav Agrawal, Anurag Kumar,JoyKuri, Manoj K. Panda, Vishnu Navda, Ramachandran Ramjee Centre for Electronics Design and Technology

### TCP Congestion Control : Computer Networking. Introduction to TCP. Key Things You Should Know Already. Congestion Control RED

TCP Congestion Control 15-744: Computer Networking L-4 TCP Congestion Control RED Assigned Reading [FJ93] Random Early Detection Gateways for Congestion Avoidance [TFRC] Equation-Based Congestion Control

### Hardware-Software Codesign

Hardware-Software Codesign 8. Performance Estimation Lothar Thiele 8-1 System Design specification system synthesis estimation -compilation intellectual prop. code instruction set HW-synthesis intellectual

### Asynchronous Distributed Optimization With Event-Driven Communication Minyi Zhong, Student Member, IEEE, and Christos G. Cassandras, Fellow, IEEE

IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 55, NO. 12, DECEMBER 2010 2735 Asynchronous Distributed Optimization With Event-Driven Communication Minyi Zhong, Student Member, IEEE, Christos G. Cassras,

### Fast Location-based Association of Wi-Fi Direct for Distributed Wireless Docking Services

Fast Location-based Association of Wi-Fi Direct for Distributed Wireless Docking Services Jina Han Department of Information and Computer Engineering Ajou University Suwon, South Korea hangn0808@ajou.ac.kr

### To earn the extra credit, one of the following has to hold true. Please circle and sign.

CS 188 Spring 2011 Introduction to Artificial Intelligence Practice Final Exam To earn the extra credit, one of the following has to hold true. Please circle and sign. A I spent 3 or more hours on the

### Monetary Cost and Energy Use Optimization in Divisible Load Processing

2004 Conference on Information Sciences and Systems, Princeton University, March 17 19, 2004 Monetary Cost and Energy Use Optimization in Divisible Load Processing Mequanint A. Moges, Leonardo A. Ramirez,

### Wireless TCP Performance Issues

Wireless TCP Performance Issues Issues, transport layer protocols Set up and maintain end-to-end connections Reliable end-to-end delivery of data Flow control Congestion control Udp? Assume TCP for the

### Enabling Distributed Simulation of OMNeT++ INET Models

Enabling Distributed Simulation of OMNeT++ INET Models Mirko Stoffers, Ralf Bettermann, James Gross, Klaus Wehrle Communication and Distributed Systems, RWTH Aachen University School of Electrical Engineering,

### THE INCREASING popularity of wireless networks

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 3, NO. 2, MARCH 2004 627 Accurate Analysis of TCP on Channels With Memory and Finite Round-Trip Delay Michele Rossi, Member, IEEE, Raffaella Vicenzi,

### Flow Control. Flow control problem. Other considerations. Where?

Flow control problem Flow Control An Engineering Approach to Computer Networking Consider file transfer Sender sends a stream of packets representing fragments of a file Sender should try to match rate

### Latency on a Switched Ethernet Network

FAQ 07/2014 Latency on a Switched Ethernet Network RUGGEDCOM Ethernet Switches & Routers http://support.automation.siemens.com/ww/view/en/94772587 This entry is from the Siemens Industry Online Support.

### Maximizing Network Utilization with Max-Min Fairness in Wireless Sensor Networks

1 Maximizing Network Utilization with Max-Min Fairness in Wireless Sensor Networks Avinash Sridharan and Bhaskar Krishnamachari {asridhar,bkrishna}@usc.edu Department of Electrical Engineering University

### Complexity Measures for Map-Reduce, and Comparison to Parallel Computing

Complexity Measures for Map-Reduce, and Comparison to Parallel Computing Ashish Goel Stanford University and Twitter Kamesh Munagala Duke University and Twitter November 11, 2012 The programming paradigm

### Simulation & Performance Analysis of Mobile Ad-Hoc Network Routing Protocol

Simulation & Performance Analysis of Mobile Ad-Hoc Network Routing Protocol V.S.Chaudhari 1, Prof.P.N.Matte 2, Prof. V.P.Bhope 3 Department of E&TC, Raisoni College of Engineering, Ahmednagar Abstract:-

### An Energy Consumption Analytic Model for A Wireless Sensor MAC Protocol

An Energy Consumption Analytic Model for A Wireless Sensor MAC Protocol Hung-Wei Tseng, Shih-Hsien Yang, Po-Yu Chuang,Eric Hsiao-Kuang Wu, and Gen-Huey Chen Dept. of Computer Science and Information Engineering,

### Computational complexity

Computational complexity Heuristic Algorithms Giovanni Righini University of Milan Department of Computer Science (Crema) Definitions: problems and instances A problem is a general question expressed in

### Selective Fill Data Cache

Selective Fill Data Cache Rice University ELEC525 Final Report Anuj Dharia, Paul Rodriguez, Ryan Verret Abstract Here we present an architecture for improving data cache miss rate. Our enhancement seeks

### Worst-case Ethernet Network Latency for Shaped Sources

Worst-case Ethernet Network Latency for Shaped Sources Max Azarov, SMSC 7th October 2005 Contents For 802.3 ResE study group 1 Worst-case latency theorem 1 1.1 Assumptions.............................

### On Improving the Performance of Cache Invalidation in Mobile Environments

Mobile Networks and Applications 7, 291 303, 2002 2002 Kluwer Academic Publishers. Manufactured in The Netherlands. On Improving the Performance of Cache Invalidation in Mobile Environments GUOHONG CAO