ASCENT: An Algorithmic Technique for Designing Hardware and Software in Tandem

Jules White, Brian Dougherty, and Douglas C. Schmidt
Vanderbilt University, EECS Department, Nashville, TN, USA

Abstract

Search-based software engineering is an emerging paradigm that uses automated search algorithms to help designers iteratively find solutions to complicated design problems. For example, when designing a climate monitoring satellite, designers may want to use the minimal amount of computing hardware to reduce weight and cost, while supporting the image processing algorithms running onboard. A key problem in these situations is that the hardware and software design are locked in a tightly-coupled, cost-constrained producer/consumer relationship that makes it hard to find a good hardware/software design configuration. Search-based software engineering can be used to apply algorithmic techniques to automate the search for hardware/software designs that maximize the image processing accuracy while respecting cost constraints. This paper provides the following contributions to research on search-based software engineering: (1) we show how a cost-constrained producer/consumer problem can be modeled as a set of two multidimensional multiple-choice knapsack problems (MMKPs), (2) we present a polynomial-time search-based software engineering technique, called the Allocation-baSed Configuration ExploratioN Technique (ASCENT), for finding near-optimal hardware/software co-design solutions, and (3) we present empirical results showing that ASCENT's solutions average 95%+ of the optimal solution's value.

1 INTRODUCTION

Current trends and challenges. Increasing levels of programming abstraction, middleware, and other software advancements have expanded the scale and complexity of software systems that we can develop.
At the same time, this ballooning scale and complexity have created a problem: systems are becoming so large that their design and development can no longer be optimized manually. Current large-scale systems can contain an exponential number of potential design configurations and vast numbers of constraints ranging from security to performance requirements. Systems of this scale and complexity, coupled with the increasing importance of non-functional characteristics [8] (such as end-to-end response time), are making software design processes increasingly expensive [25].

Search-based software engineering [15, 14] is an emerging discipline that aims to decrease the cost of optimizing system design by using algorithmic search techniques, such as genetic algorithms or simulated annealing, to automate the design search. In this paradigm, rather than performing the search manually, designers iteratively produce a design by using a search technique to find designs that optimize a specific system quality while adhering to design constraints. Each time a new design is produced, designers can use the knowledge they have gleaned from the new design solution to craft more precise constraints to guide the next design search. Search-based software engineering has been applied to a number of software engineering concerns, ranging from generating test data [24] to project management and staffing [5, 3] to software security [9]. A common theme in domains where search-based software engineering is applied is that the design solution space is so large and tightly constrained that the time required to find an optimal solution grows at an exponential rate with the problem size. These vast and constrained solution spaces make it hard for designers to derive good solutions manually. This paper examines a common problem from the domain of distributed real-time and embedded (DRE) systems that exhibits these complexity characteristics.
The problem we focus on is the need to derive a design that maximizes a specific system capability subject to constraints on cost and on the production and consumption of resources, such as RAM, by the hardware and software, respectively. For example, when designing a satellite to monitor the earth's magnetosphere [11], the goal may be to maximize the accuracy of the sensor data processing algorithms on the satellite without exceeding the development budget and hardware resources. Ideally, to maximize the capabilities of the system for a given cost, system software and hardware should be designed in tandem to produce a design with a precise fit between hardware capabilities and software resource demands. The more precise the fit, the less budget is expended on excess hardware resource capacity. A key problem in these design scenarios is that they create a complex cost-constrained producer/consumer problem involving the software and hardware design. The hardware design determines the resources, such as processing power and memory, that are available to the software. Likewise, the hardware consumes a portion of the project budget and thus reduces the resources remaining for the software (assuming a fixed budget). The software also consumes a portion of the budget and the resources produced by the hardware configuration. The perceived value of the system comes from the attributes of the

software design, e.g., image processing accuracy in the satellite example. The intricate dependencies between the hardware and software's production and consumption of resources, cost, and value make the design solution space so large and complex that finding an optimal and valid design configuration is hard.

Solution approach: Automated Solution Space Exploration. This paper presents a heuristic search-based software engineering technique, called the Allocation-baSed Configuration ExploratioN Technique (ASCENT), for solving cost-constrained hardware/software producer/consumer co-design problems. ASCENT models these co-design problems as two separate knapsack problems [19]. Since knapsack problems are NP-Hard [1], ASCENT uses heuristics to reduce the solution space size and iteratively search for near-optimal designs by adjusting the budget allocations to software and hardware. In addition to outputting the best design found, ASCENT also generates a data set representing the trends it discovered in the solution space. A key attribute of the ASCENT technique is that, in the process of solving, it generates a large number of optimal design configurations that present a wide view of the trends and patterns in a system's design solution space. This paper shows how this wide view of trends in the solution space can be used to iteratively search for near-optimal co-design solutions. Moreover, our empirical results show that ASCENT produces co-design configurations that average 95%+ optimal for problems with more than 7 points of variability in each of the hardware and software design spaces.

Paper organization.
The remainder of this paper is organized as follows: Section 2 presents a motivating example of a satellite hardware/software co-design problem; Section 3 discusses the challenges of solving software/hardware co-design problems in the context of this motivating example; Section 4 describes the ASCENT heuristic algorithm; Section 5 analyzes empirical results from experiments we performed with ASCENT; Section 6 compares ASCENT with related work; and Section 7 presents concluding remarks and lessons learned from our work with ASCENT.

2 MOTIVATING EXAMPLE

This section presents a satellite design example to motivate the need to expand search-based software engineering techniques to encompass cost-constrained hardware/software producer/consumer co-design problems. Designing satellites, such as the satellite for NASA's Magnetospheric Multiscale (MMS) mission [11], requires carefully balancing hardware/software design subject to tight budgets. Figure 1 shows a satellite with a number of possible variations in software and hardware design.

Fig. 1. Software/Hardware Design Variability in a Satellite

For example, the software design has a point of variability where a designer can select the resolution of the images that are processed. Processing higher resolution images improves the accuracy but requires more RAM and CPU cycles. Another point of variability in the software design is the image processing algorithms that can be used to identify characteristics of the images captured by the satellite's cameras. The algorithms each provide a distinct level of accuracy, while also consuming different quantities of RAM and CPU cycles. The underlying hardware has a number of points of variability that can be used to increase or decrease the RAM and CPU power to support the resource demands of different image processing configurations. Each configuration option, such as the chosen algorithm or RAM value, has a cost associated with it that subtracts from the overall budget.
A key design question for the satellite is: what set of hardware and software choices will fit a given budget and maximize the image processing accuracy? Many similar design problems involving the allocation of resources subject to a series of design constraints have been modeled as Multidimensional Multiple-Choice Knapsack Problems (MMKPs) [18, 2, 1]. A standard knapsack problem [19] is defined by a set of items with varying sizes and values. The goal is to find the set of items that fits into a fixed-size knapsack and that simultaneously maximizes the value of the items in the knapsack. An MMKP problem is a variation on a standard knapsack problem where the items are divided into sets and at most one item from each set may be placed into the knapsack. Figure 2 shows an example MMKP problem where two sets contain items of different sizes and values.

Fig. 2. An Example MMKP Problem

At most one of the items A, B, and C can be put into the knapsack. Likewise, only one of the items D, E, and F can be put into the knapsack. The goal is to find the combination of two items, where one item is chosen from each set, that fits

into the knapsack and maximizes the overall value. A number of resource-related problems have been modeled as MMKP problems where the sets are the points of variability in the design, the items are the options for each point of variability, and the knapsack/item sizes are the resources consumed by different design options [23, 2, 29, 7, 2]. The software and hardware design problems are hard to solve individually. Each design problem consists of a number of design variability points that can be implemented by exactly one design option, such as a specific image processing algorithm. Each design option has an associated resource consumption, such as cost, and a value associated with it. Moreover, the design options cannot be arbitrarily chosen because there is a limited amount of each resource available to consume. It is apparent that this description of the software design problem directly parallels the definition of an MMKP problem. An MMKP set can be created for each point of variability (e.g., Image Resolution and Algorithm). Each set can then be populated with the options for its corresponding point of variability (e.g., High, Medium, Low for Image Resolution). The items each have a size (cost) associated with them and there is a limited-size knapsack (budget) that the items must fit into. Clearly, just selecting the optimal set of software features subject to a maximum budget is an instance of the NP-Hard [1] MMKP problem. For the overall satellite design problem, we must contend with not one but two individual knapsack problems. One problem models the software design and the second problem models the hardware design. We can model the satellite co-design problem using two MMKP problems. The first of the two MMKP problems for the satellite design is its software MMKP problem. The hardware design options are modeled in a separate MMKP problem with each set containing the potential hardware options.
An example mapping of the software and hardware design problems to MMKP problems is shown in Figure 3.

Fig. 3. Modeling the Satellite Design as Two MMKP Problems

We call this combined two-problem MMKP model an MMKP co-design problem. With this MMKP co-design model of the satellite, a design is reached by choosing one item from each set (e.g., an Image Resolution, Algorithm, RAM value, and CPU) for each problem. The correctness of the design can be validated by ensuring that exactly one item is chosen from each set and that the items fit into their respective software and hardware knapsacks. This definition, however, is still not sufficient to model the cost-constrained hardware/software producer/consumer co-design problem, since we have not accounted for the constraint on the total size of the two knapsacks or the production and consumption of resources by hardware and software. A correct solution must also uphold the constraint that the items chosen for the solution to the software MMKP problem do not consume more resources, such as RAM, than are produced by the items selected for the solution to the hardware MMKP problem. Moreover, the cost of the entire selection of items must be less than the total development budget. We know that solving the individual MMKP problems for the optimal hardware and software design is NP-Hard, but we must also determine how hard solving the combined co-design problem is. In this simple satellite example, there are 192 possible satellite configurations to consider. For real industrial-scale examples, there are significantly more possibilities. For example, a system with design choices that can be modeled using 64 MMKP sets, each with 2 items, will have 2^64 possible configurations. For systems of this scale, manual solving methods are clearly not feasible, which motivates the need for a search-based software engineering technique.
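To make the MMKP structure concrete, the following sketch exhaustively solves a tiny instance shaped like the two-set example in Figure 2 (exactly one item per set, knapsack capacity respected). The item sizes, values, and capacity below are illustrative assumptions, not values from the paper, and the exponential enumeration is only viable for toy instances.

```python
from itertools import product

# Hypothetical (name, size, value) triples for the two sets in Fig. 2.
set1 = [("A", 4, 10), ("B", 3, 7), ("C", 2, 4)]
set2 = [("D", 5, 12), ("E", 3, 6), ("F", 1, 2)]
capacity = 7  # knapsack size (illustrative)

def solve_mmkp(sets, capacity):
    """Pick exactly one item per set, maximizing total value subject to
    the knapsack capacity, by brute-force enumeration of all combinations."""
    best, best_value = None, -1
    for combo in product(*sets):
        size = sum(item[1] for item in combo)
        value = sum(item[2] for item in combo)
        if size <= capacity and value > best_value:
            best, best_value = combo, value
    return best, best_value

combo, value = solve_mmkp([set1, set2], capacity)
print([item[0] for item in combo], value)
```

With these illustrative numbers, picking A (size 4) and E (size 3) exactly fills the knapsack for a value of 16; enumerating all 9 combinations is trivial here, but the count grows as l^n with n sets of l items, which is exactly why the paper needs an approximation technique.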
2.1 MMKP Co-design Complexity

Below, we show that MMKP co-design problems are NP-Hard and in need of a search-based software engineering technique. We are not aware of any approximation techniques for solving MMKP co-design problems in polynomial time. This lack of approximation algorithms, coupled with the poor scalability of exact solving techniques, hinders DRE system designers' ability to optimize software and hardware in tandem. To show that MMKP co-design problems are NP-Hard, we must build a formal definition of them. We can define an MMKP co-design problem, CoP, as an 8-tuple:

CoP = <Pr, Co, S1, S2, S, R, Uc(x, k), Up(x, k)>

where:

Pr is the producer MMKP problem (e.g., the hardware choices).
Co is the consumer MMKP problem (e.g., the software choices).
S1 is the size of the producer (Pr) knapsack.
S2 is the size of the consumer (Co) knapsack.
R is the set of resource types (e.g., RAM, CPU, etc.) that can be produced and consumed by Pr and Co, respectively.

S is the total allowed combined size of the two knapsacks for Pr and Co (e.g., total budget).
Uc(x, k) is a function that calculates the amount of the resource k ∈ R consumed by an item x ∈ Co (e.g., RAM consumed).
Up(x, k) is a function that calculates the amount of the resource k ∈ R produced by an item x ∈ Pr (e.g., RAM provided).

Let a solution to the MMKP co-design problem be defined as a 2-tuple, <p, c>, where p ⊆ Pr is the set of items chosen from the producer MMKP problem and c ⊆ Co is the set of items chosen from the consumer MMKP problem. A visualization of a solution tuple is shown in Figure 4.

Fig. 4. Structure of an MMKP Co-design Problem

We define the value of the solution as the sum of the values of the elements in the consumer solution:

V = Σ_j valueof(c_j)

where j ranges over the items in c, c_j is the j-th item in c, and valueof() is a function that returns the value of an item in the consumer solution. We require that p and c are valid solutions to Pr and Co, respectively. For p and c to be valid, exactly one item from each set in Pr and Co must have been chosen. Moreover, the items must fit into the knapsacks for Pr and Co (a more formal definition of MMKP solution correctness is available from [1]). This constraint corresponds to Rule (2) in Figure 4: each solution must fit into the budget for its respective knapsack. The MMKP co-design problem adds two additional constraints on the solutions p and c. First, we require that the items in c do not consume more of any resource than is produced by the items in p:

(∀k ∈ R), Σ_j Uc(c_j, k) ≤ Σ_l Up(p_l, k)

where j ranges over the items in c, l ranges over the items in p, and p_l is the l-th item in p. Visually, this means that the consumer solution can fit into the producer solution's resources, as shown in Rule (1) in Figure 4. The second constraint on c and p is an interesting twist on traditional MMKP problems.
For an MMKP co-design problem, we do not know the exact sizes, S1 and S2, of each knapsack. Part of the problem is determining the sizes as well as the items for each knapsack. Since we are bound by a total overall budget, we must ensure that the sizes of the knapsacks do not exceed this budget:

S1 + S2 ≤ S

This constraint on the overall budget corresponds to Rule (3) in Figure 4. To show that solving for an exact answer to the MMKP co-design problem is NP-Hard, we will show that we can reduce any instance of the NP-complete knapsack decision problem to an instance of the MMKP co-design problem. The knapsack decision problem asks if there is a combination of items with value at least V that can fit into the knapsack without exceeding a cost constraint. A knapsack problem can easily be converted to an MMKP problem as described by Akbar et al. [1]: for each item, a set is created containing the item and a null item, where the null item has no value and does not take up any space. Using this approach, a knapsack decision problem, Kdp, can be converted to an MMKP decision problem, Mdp, where we ask if there is a selection of items from the sets that has value at least V. To reduce the decision problem to an MMKP co-design problem, we can use the MMKP decision problem as the consumer knapsack (Co = Mdp), set the producer knapsack to an MMKP problem with a single null item with zero weight and value, and let our set of produced and consumed resources, R, be empty (R = ∅). Next, we can let the total knapsack size budget be the size of the decision problem's knapsack, S = sizeof(Mdp). The co-design solution, which is the maximization of the consumer knapsack solution value, will also be the optimal answer for the decision problem, Mdp. We have thus set up the co-design problem so that it is solving for a maximal answer to Mdp without any additional producer/consumer constraints or knapsack size considerations.
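The three correctness rules referenced above (Rules (1)-(3) in Figure 4) can be checked mechanically. The sketch below uses a hypothetical dict-based item encoding that is not the authors' implementation, and it omits the one-item-per-set check for brevity.

```python
# Validity rules for an MMKP co-design solution <p, c>.
# Items are dicts with a 'cost' plus per-resource production/consumption.

def fits_budget(items, budget):
    # Rule (2): a solution must fit its own knapsack (budget).
    return sum(i["cost"] for i in items) <= budget

def resources_ok(producer, consumer, resources):
    # Rule (1): the consumer may not use more of any resource k in R
    # than the producer items provide.
    return all(
        sum(c.get(k, 0) for c in consumer) <= sum(p.get(k, 0) for p in producer)
        for k in resources)

def valid_codesign(p, c, s1, s2, total, resources):
    # Rule (3): the two knapsack sizes must not exceed the total budget S.
    return (fits_budget(p, s1) and fits_budget(c, s2)
            and resources_ok(p, c, resources) and s1 + s2 <= total)

hw = [{"cost": 50, "ram": 1024}]   # hypothetical producer selection
sw = [{"cost": 30, "ram": 512}]    # hypothetical consumer selection
print(valid_codesign(hw, sw, 60, 40, 100, ["ram"]))   # True
print(valid_codesign(hw, sw, 60, 40, 90, ["ram"]))    # False: S1 + S2 > S
```

The second call fails only because the chosen knapsack sizes (60 + 40) exceed the total budget of 90, illustrating that a solution can satisfy Rules (1) and (2) yet still be invalid under Rule (3).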
Since any instance of the NP-complete knapsack decision problem can be reduced to an MMKP co-design problem, the MMKP co-design problem must be NP-Hard.

3 CHALLENGES OF MMKP CO-DESIGN PROBLEMS

This section describes two key challenges to building an approximation algorithm to solve MMKP co-design problems. The first challenge is that determining how to set the budget allocations of the software and hardware is not straightforward, since it involves figuring out the precise sizes of the software and hardware knapsacks such that the hardware knapsack produces sufficient resources to support the optimal software knapsack solution (which itself is unknown). The second challenge is that the tight coupling between producer and consumer MMKP problems makes them hard to solve

individually, thus motivating the need for a heuristic to decouple them.

3.1 Challenge 1: Undefined Producer/Consumer Knapsack Sizes

One challenge of the MMKP co-design problem is that the individual knapsack size budget for each of the MMKP problems is not predetermined, i.e., we do not know how much of the budget should be allocated to software versus hardware, as shown in Figure 5.

Fig. 5. Undefined Knapsack Sizes

The only constraint is that the sum of the budgets must be less than or equal to an overall total budget. Every pair of budget values for hardware and software results in two new unique MMKP problems. Even minor transfers of capital from one problem's budget to the other can therefore completely alter the solution of the problem, resulting in a new maximum value. Existing MMKP techniques assume that the exact desired size of the knapsack is known. There is currently no information to aid designers in determining the allocation of the budgets. As a result, many designers may choose the allocation arbitrarily without realizing the profound impact it may have. For example, a budget allocation of 75% software and 25% hardware may result in a solution that, while valid, provides far less value and costs considerably more than a solution with a budget allocation of 74% and 26%. There are, however, trends in the solution optimality that can be determined by solving instances of the problem with unique sequential divisions of the total budget. These trends can give the designer an idea of what budget divisions will result in favorable system designs. This data can also show which budget allocations to avoid. A key challenge is figuring out how to shed light on these nuances in the solution space and present them to designers. In Section 4.3 we discuss ASCENT's solution to this problem and in Section 5 we present empirical data showing how ASCENT allows designers to uncover these trends in a number of MMKP co-design problems.

3.2 Challenge 2: Tight-coupling Between the Producer/Consumer

Another key issue to contend with is how to rank the solutions to the producer MMKP problem. Per the definition of an MMKP co-design problem from Section 2.1, the producer solution does not directly impart any value to the overall solution. The producer's benefit to a solution is its ability to make a good consumer solution viable. MMKP solvers must have a way of ranking solutions and items. The problem, however, is that the value of a producer solution or item cannot be calculated in isolation. A consumer solution must already exist to calculate the value of a particular producer solution. For example, whether or not 1,024 kilobytes of memory are beneficial to the overall solution can only be ascertained by seeing if 1,024 kilobytes of memory are needed by the consumer solution. If the consumer solution does not need this much memory, then the memory produced by the item is not helpful. If the consumer solution is RAM starved, the item is desperately needed. A visualization of the problem is shown in Figure 6.

Fig. 6. Producer/Consumer MMKP Tight-coupling

The inability to rank producer solutions in isolation from consumer solutions is problematic because it creates a chicken-and-egg problem. A valid consumer solution cannot be chosen if we do not know what resources are available for it to consume. At the same time, we cannot rank the value of producer solutions without a consumer solution as a context. This tight coupling between the producer/consumer is a challenging problem. We discuss the heuristic ASCENT uses to solve this problem in Section 4.2.

4 THE ASCENT ALGORITHM

This section presents our polynomial-time approximation algorithm, called the Allocation-baSed Configuration ExploratioN Technique (ASCENT), for solving MMKP co-design problems. The pseudo-code for the ASCENT algorithm is shown in Figure 7 and explained throughout this section.
4.1 Producer/Consumer Knapsack Sizing

The first issue to contend with when solving an MMKP co-design problem is Challenge 1 from Section 3.1, which involves determining how to allocate sizes to the individual knapsacks. ASCENT addresses this problem by dividing the overall knapsack size budget into increments of size D. The

size increment is a parameter provided by the user. ASCENT then iteratively increases the consumer's budget allocation (knapsack size) from 0% of the total budget to 100% of the total budget in steps of size D. The incremental expansion of the consumer's budget can be seen in the while loop in code listing (1) of Figure 7 and the incrementation of ConsumerBudget in code listing (8). For example, if there is a total size budget of 100 and increments of size 10, ASCENT first assigns 0 to the consumer and 100 to the producer, then 10 and 90, then 20 and 80, and so forth until 100% of the budget is assigned to the consumer. The allocation process is shown in Figure 8.

Fig. 8. Iteratively Allocating Budget to the Consumer Knapsack

ASCENT includes both the 0%,100% and 100%,0% budget allocations to handle cases where the optimal configuration includes producer or consumer items with zero cost.

MMKPProblem ConsumerMMKP
MMKPProblem ProducerMMKP
int StepSize
int ConsumerBudget = 0
int ProducerBudget = 100
int TotalBudget
Solution BestSolution
Solutions AllSolutions

while(ConsumerBudget <= TotalBudget)                                 (1)
    IdealizedSolution = solveMMKPCostOnly(ConsumerMMKP,              (2)
        ConsumerBudget)
    double[] Ratios = calculateResourceRatios(IdealizedSolution)     (3)
    for each Item in ProducerMMKP                                    (4)
        for i = 0, i < Ratios.size, i++
            Item.Value += Ratios[i] * Item.ProducedResourceValue[i]
    ProducerBudget = TotalBudget - ConsumerBudget
    HardwareSolution =                                               (5)
        solveMMKPCostOnly(ProducerMMKP, ProducerBudget)
    int[] AvailableResources =                                       (6)
        HardwareSolution.ProducedResourceValues.Sum
    SoftwareSolution =                                               (7)
        solveMMKP(ConsumerMMKP, AvailableResources, ConsumerBudget)
    ConsumerBudget += StepSize                                       (8)
    Solution = Tuple<SoftwareSolution, HardwareSolution>
    Solutions.add(Solution)                                          (9)
    if(Solution.Value > BestSolution.Value)
        BestSolution = Solution
Return BestSolution and Solutions                                    (10)

Fig. 7. The ASCENT Algorithm

4.2 Ranking Producer Solutions

At each allocation iteration, ASCENT has a fixed set of sizes for the two knapsacks. In each iteration, ASCENT must solve the coupling problem presented in Section 3.2: how do we rank producer solutions without a consumer solution? After the coupling is loosened, ASCENT can solve for a highly valued solution that fits the given knapsack size restrictions. To break the tight coupling between producer and consumer ordering, ASCENT employs a special heuristic. Once the knapsack size allocations are fixed, ASCENT solves for a maximal consumer solution that only considers the current size constraint of its knapsack and not produced/consumed resources. This step is shown in code listing (2) of Figure 7. The method solveMMKPCostOnly uses an arbitrary MMKP approximation algorithm to find a solution that only considers the consumer's budget. This approach is similar to asking what the best possible solution would look like if there were unlimited produced/consumed resources. Once ASCENT has this idealized consumer solution, it calculates a metric for assigning a value to producer solutions. The metric that ASCENT uses to assign value to producer items is: how valuable are the resources of a producer item to the idealized consumer solution. This metric is calculated by the calculateResourceRatios method call in code listing (3) of Figure 7. We calculate the value of a resource as the amount of the resource consumed by the idealized consumer solution divided by the sum of the total resources consumed by the overall solution:

V_r(k) = (Σ_j Uc(c_j, k)) / (Σ_{k ∈ R} Σ_j Uc(c_j, k))

In code listing (4) of Figure 7, the resource ratios (V_r values) are known and each item in the producer MMKP problem is assigned a value by multiplying each of its provided resource values by the corresponding ratio and summing these values:

valueof(p_l) = Σ_{k ∈ R} Up(p_l, k) × V_r(k)

The overall solving workflow at each budget allocation ratio is shown in Figure 9.

Fig. 9. ASCENT Solving Workflow at Each Budget Allocation Step
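The resource-ratio heuristic for valuing producer items can be sketched as follows. The dict-based item representation and function names are illustrative assumptions, not the authors' code.

```python
# Sketch of ASCENT's producer-valuation heuristic: score producer items
# by how much the idealized consumer solution wants their resources.

def resource_ratios(ideal_consumer, resources):
    """V_r(k): resource k's share of everything the idealized
    consumer solution consumes."""
    totals = {k: sum(i.get(k, 0) for i in ideal_consumer) for k in resources}
    grand = sum(totals.values())
    return {k: (totals[k] / grand if grand else 0.0) for k in resources}

def revalue_producer_items(producer_items, ratios):
    """Assign each producer item a value: its production of each
    resource, weighted by the corresponding consumption ratio."""
    for item in producer_items:
        item["value"] = sum(item.get(k, 0) * r for k, r in ratios.items())
    return producer_items

# Idealized consumer solution: two items consuming RAM and CPU units.
ideal = [{"ram": 300, "cpu": 100}, {"ram": 100, "cpu": 0}]
ratios = resource_ratios(ideal, ["ram", "cpu"])   # ram 0.8, cpu 0.2
hw = revalue_producer_items([{"ram": 1024, "cpu": 0}], ratios)
print(ratios, hw[0]["value"])
```

Here RAM accounts for 400 of the 500 consumed units, so a RAM-heavy producer item scores highly (1024 × 0.8 = 819.2), while a CPU-only item would be valued at only a fifth of its raw production.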

4.3 Solving the Individual MMKP Problems

Once sizes have been set for each knapsack and the valuation heuristic has been applied to the producer MMKP problem, existing MMKP solving approaches can be applied. First, the producer MMKP problem, with its new item values, is solved for an optimal solution, as shown in code listing (5) of Figure 7. We use the solveMMKPCostOnly method to solve the producer problem since it does not consume any resources other than budget. In code listing (6), the consumer MMKP problem is then updated with constraints reflecting the maximum available amount of each resource produced by the solution to the producer MMKP problem. The consumer MMKP problem is then solved for an optimal solution in code listing (7). The producer and consumer solutions are then combined into the 2-tuple, <p, c>, and saved in code listing (9). In each iteration, ASCENT assigns sizes to the producer and consumer knapsacks and the solving process is repeated. A collection of the 2-tuple solutions is compiled during the process. The output of ASCENT, returned in code listing (10) of Figure 7, is both the 2-tuple with the greatest value and the collection of 2-tuples. The overall solving approach is shown in Figure 10.

Fig. 10. ASCENT Solving Approach

The reason that the 2-tuples are saved and returned as part of the output is that they provide valuable information on the trends in the solution space of the co-design problem. Each 2-tuple contains a high-valued solution to the co-design problem at a particular ratio of knapsack sizes. This data can be used to graph and visualize how the overall solution value changes as a function of the ratio of knapsack sizes. As shown in Section 5, this information can be used to ascertain a number of useful solution space characteristics, such as determining how much it costs to increase the value of a specific system property to a given level or finding the design with the highest value per unit of cost.
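The end-to-end sweep (budget allocation, producer revaluation, and the two MMKP solves per iteration) can be sketched structurally as below. The solver callables and the Solution shape are placeholders with assumed signatures, since ASCENT treats the underlying MMKP approximation algorithm (e.g., M-HEU) as a pluggable black box.

```python
from collections import namedtuple

# Minimal stand-in for an MMKP solution: a value plus produced resources.
Solution = namedtuple("Solution", "value resources")

def ascent(consumer, producer, total_budget, step,
           solve_cost_only, solve_with_resources, ratios, revalue):
    """Structural sketch of the sweep in Fig. 7; numbers in comments
    refer to the annotated code listings."""
    best, solutions = None, []
    consumer_budget = 0
    while consumer_budget <= total_budget:                              # (1)
        ideal = solve_cost_only(consumer, consumer_budget)              # (2)
        revalue(producer, ratios(ideal))                                # (3)-(4)
        hw = solve_cost_only(producer, total_budget - consumer_budget)  # (5)
        sw = solve_with_resources(consumer, hw.resources,               # (6)-(7)
                                  consumer_budget)
        sol = (sw, hw)
        solutions.append(sol)                                           # (9)
        if best is None or sw.value > best[0].value:
            best = sol
        consumer_budget += step                                         # (8)
    return best, solutions                                              # (10)

# Trivial stand-in solvers, just to exercise the control flow:
stub = lambda problem, budget: Solution(value=budget, resources={})
best, all_sols = ascent(None, None, 100, 10,
                        stub,
                        lambda problem, res, budget: Solution(budget, {}),
                        lambda ideal: {}, lambda problem, r: None)
print(len(all_sols), best[0].value)   # 11 allocations; best at 100% consumer
```

With a total budget of 100 and a step of 10, the sweep produces 11 candidate 2-tuples (0% through 100% consumer allocation inclusive), matching the paper's point that both boundary allocations are evaluated.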
4.4 Algorithmic Complexity

The overall algorithmic complexity of ASCENT can be broken down as follows: 1) there are T iterations of ASCENT; 2) in each iteration there are 3 invocations of an MMKP approximation algorithm; and 3) in each iteration, the values of at most n producer items must be updated. This breakdown yields an algorithmic complexity of O(T(n + MMKP)), where MMKP is the algorithmic complexity of the chosen MMKP algorithm. With M-HEU (one of the most accurate MMKP approximation algorithms [1]), the algorithmic complexity is O(mn^2(l-1)^2), where m is the number of resource types, n is the number of sets, and l is the maximum number of items per set. Our experiments in Section 5 used T = 100 and found that it provided excellent results. With our experimental setup that used M-HEU, the overall algorithmic complexity was therefore O(100(mn^2(l-1)^2 + n)). This algorithmic complexity is polynomial and thus ASCENT should be able to scale up to very large problems, such as the co-design of production satellite hardware and software.

5 ANALYSIS OF EMPIRICAL RESULTS

This section presents empirical data we obtained from experiments using ASCENT to solve MMKP co-design problems. The empirical results demonstrate that ASCENT provides near optimal results. The results also show that ASCENT can not only provide near optimal designs for co-design problems, such as the satellite example, but also scale to the large problem sizes of a production satellite design. Moreover, we show that the data sets generated by ASCENT, which contain high-valued solutions at each budget allocation, can be used to perform a number of important search-based software engineering studies on the co-design solution space. Each experiment used a total of 100 budget iterations (T = 100). We also used the M-HEU MMKP approximation algorithm as our MMKP solver.
All experiments were conducted on an Apple Powerbook with a 2.4 GHz Intel Core 2 Duo processor and 2 gigabytes of RAM, running OS X and a 1.5 Java Virtual Machine (JVM) in client mode. The JVM was launched with a maximum heap size of 64 MB (-Xmx64m).

5.1 MMKP Co-design Problem Generation

A key capability needed for the experiments was the ability to randomly generate MMKP co-design problems for test data. For each problem, we also needed to calculate how good ASCENT's solution was as a percentage of the optimal solution: valueof(ASCENTSolution) / valueof(OptimalSolution). For small problems with fewer than 7 sets per MMKP problem, we were able to use a branch-and-bound linear programming (LP) [27] technique built on top of the Java Choco constraint solver (choco-solver.net) to derive the optimal solution. For larger-scale problems the LP technique was simply not feasible, e.g., solutions might take years to find. For larger problems, we developed a technique that randomly generated MMKP co-design problems with a few carefully crafted constraints so we knew the exact optimal answer. Others [1] have used this general approach, though with a different problem generation technique. Ideally, we would prefer to generate completely random problems to test ASCENT. We are confident in the validity of this technique, however, for two reasons: (1) the trends we observed from smaller problems with truly random data were identical to those we saw in the data obtained from solving the

generated problems, and (2) the generated problems randomly placed the optimal items and randomly assigned their values and sizes, so the problems did not have a structure clearly amenable to the heuristics used by our MMKP approximation algorithm. We did not use Akbar's technique [1] because the problems it generated were susceptible to a greedy strategy.

Our problem generation technique worked by creating two MMKP problems for which we knew the exact optimal answer. First, we discuss how we generated the individual MMKP problems. Let S be the set of MMKP sets for the problem, R be a K-dimensional vector describing the size of the knapsack, I_ij be the j-th item of the i-th set, size(I_ij, k) be the k-th component of I_ij's size vector Sz_ij, and size(S, k) be the k-th component of the knapsack size vector. The problem generation technique for each MMKP problem worked as follows:

1) Randomly populate each set, s in S, with a number of items.
2) Generate a random size, R, for the knapsack.
3) Randomly choose one item, Iopt_i in OptItems, from each set to be the optimal item; Iopt_i is the optimal item in the i-th set.
4) Set the sizes of the items in OptItems so that, when added together, they exactly consume all of the space in the knapsack: for all k in R, sum_i size(Iopt_i, k) = size(S, k).
5) Randomly generate a value, Vopt_i, for the optimal item, Iopt_i, in each set.
6) Randomly generate a value delta variable, V_d < min(Vopt_i), where min(Vopt_i) is the value of the optimal item with the smallest value.
7) Randomly set the sizes and values of the remaining non-optimal items in the sets so that either:
- The item has a greater value than the optimal item in its set. In this case, each component of the item's size vector is greater than the corresponding component in the optimal item's size vector: for all k in R, size(Iopt_i, k) < size(I_ij, k).
- The item has a smaller value than the optimal item's value minus V_d: valueof(I_ij) < Vopt_i - V_d.
This constraint will be important in the next step. In this case, each component of the item's size vector is randomly generated.

At this point, we have a very random MMKP problem. We must now further constrain the problem so that we can guarantee the items in OptItems are truly the optimal selection of items. Let MaxV_i be the item with the highest value in the i-th set. We further constrain the problem as follows: for each item MaxV_i, we reset the values of the items (if needed) to ensure that the sum of the differences between the max-valued items in each set and the optimal items is less than V_d: sum_i (MaxV_i - Vopt_i) < V_d. A visualization of this constraint is shown in Figure 11.

Fig. 11. A Visualization of V_d

This new valuation of the items guarantees that the items in OptItems are the optimal items. We can prove this property by showing that if it does not hold, there is a contradiction. Assume that there is some set of items, Ibetter, that fit into the knapsack and have a higher value. Let Vb_i be the value of the better item chosen instead of the optimal item in the i-th set. The sum of the values of the better items from each set must be higher than the sum of the values of the optimal items. The items Ib_i in Ibetter must fit into the knapsack. We designed the problem so that the optimal items exactly fit into the knapsack and any item with a higher value than an optimal item is also bigger. This design implies that at least one of the items in Ibetter is smaller, and thus also has a smaller value, Vsmall, than the optimal item in its set (or Ibetter would not fit).
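The generation steps above, together with the V_d revaluation, can be sketched as follows. This is a simplified illustration, not the paper's exact implementation: one size dimension, hypothetical parameter ranges, and a brute-force check that the designated items really are optimal.

```python
import random
from itertools import product

def generate_known_optimum(num_sets, items_per_set, rng):
    """Build one MMKP problem whose optimal selection is known by
    construction (steps 1-7 above, one size dimension for brevity)."""
    cap = num_sets * 8                                   # knapsack size
    cuts = sorted(rng.sample(range(1, cap), num_sets - 1))
    opt_sizes = [b - a for a, b in zip([0] + cuts, cuts + [cap])]
    opt_vals = [rng.randint(50, 100) for _ in range(num_sets)]
    v_d = rng.randint(1, min(opt_vals) - 1)              # value delta V_d
    bump = (v_d - 1) // num_sets     # keeps sum(MaxV_i - Vopt_i) < V_d
    sets = []
    for i in range(num_sets):
        items = [(opt_sizes[i], opt_vals[i])]            # the optimal item
        for _ in range(items_per_set - 1):
            if rng.random() < 0.5:       # strictly bigger, maybe pricier
                items.append((opt_sizes[i] + rng.randint(1, 5),
                              opt_vals[i] + rng.randint(0, bump)))
            else:                        # worth less than Vopt_i - V_d
                items.append((rng.randint(1, opt_sizes[i] + 3),
                              rng.randint(0, opt_vals[i] - v_d - 1)))
        rng.shuffle(items)
        sets.append(items)
    return sets, cap, sum(opt_vals)

sets, cap, known_opt = generate_known_optimum(4, 3, random.Random(7))
# Brute-force check: no feasible selection beats the constructed optimum.
best = max((sum(v for _, v in pick) for pick in product(*sets)
            if sum(s for s, _ in pick) <= cap), default=-1)
assert best == known_opt
```

The final assertion holds for any seed: a selection that swaps in only bigger items cannot fit, and any feasible alternative must include a low-valued item whose loss (at least V_d + 1) exceeds the bounded gain (at most V_d - 1) from the pricier items.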
If there are Q sets in the MMKP problem, this implies that at most Q - 1 items in Ibetter have a larger value than the optimal item in their set, and thus:

Vopt_Q + sum_{i=1}^{Q-1} Vopt_i < Vsmall + sum_{i=1}^{Q-1} Vb_i

We explicitly revalued the items so that sum_i (MaxV_i - Vopt_i) < V_d. By subtracting sum_{i=1}^{Q-1} Vopt_i from both sides, we get:

Vopt_Q < Vsmall + sum_{i=1}^{Q-1} (Vb_i - Vopt_i)

The inequality will still hold if we substitute V_d in for sum_{i=1}^{Q-1} (Vb_i - Vopt_i), because V_d is larger:

Vopt_Q < Vsmall + V_d
Vopt_Q - V_d < Vsmall

which is a contradiction of the rule that we enforced for smaller items: valueof(I_ij) < Vopt_i - V_d.

This problem generation technique creates MMKP problems with some important properties. First, the optimal item in each set will have a random number of larger- and smaller-valued items (or none) in its set. This property guarantees that a greedy strategy will not necessarily do well on the problems. Moreover, the optimal item may not have the best ratio of value/size. For example, an item valued slightly smaller than

the optimal item may consume significantly less space, because its size was randomly generated. Many MMKP approximation algorithms use the value/size heuristic to choose items. Since there is no guarantee on how good the value/size ratio of the optimal item is, MMKP approximation algorithms will not automatically do well on these problems.

To create an MMKP co-design problem where we know the optimal answer, we generate a single MMKP problem with a known optimal answer and split it into two MMKP problems to create the producer and consumer MMKP problems. To split the problem, two new MMKP problems are created. One MMKP problem receives E of the sets from the original problem and the other problem receives the remaining sets. The total knapsack size for each problem is set to exactly the size required to fit the optimal items from its sets. The sum of the two knapsack sizes equals the original knapsack size. Since the overall knapsack size budget does not change, the original optimal items remain the overall optimal solution.

Next, we generate a set of produced/consumed resource values for the two MMKP problems. For the consumer problem, we randomly assign each item an amount of each produced resource k in R that the item consumes. Let TotalC(k) be the total amount of the resource k needed by the optimal consumer solution and Vopt(P) be the optimal value for the producer MMKP problem. We take the consumer problem and calculate a resource production ratio, Rp(k), where Rp(k) = TotalC(k) / Vopt(P). For each item, I_ij, in the producer problem, we assign it a production value for the resource k of: Produced(k) = Rp(k) * valueof(I_ij).
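The production-value assignment can be illustrated directly. All numbers below are hypothetical; the helper simply applies the Rp(k) formula just given.

```python
# Sketch of the production-value assignment: Rp(k) = TotalC(k) / Vopt(P),
# and each producer item produces Rp(k) * valueof(item) of resource k.
def assign_production(producer_values, vopt_p, total_consumed):
    ratios = {k: need / vopt_p for k, need in total_consumed.items()}
    return [{k: r * v for k, r in ratios.items()} for v in producer_values]

# Hypothetical numbers: producer items valued 3, 7, and 2, where the
# optimal producer picks are the first two, so Vopt(P) = 10. The optimal
# consumer solution needs 40 units of RAM and 20 units of CPU.
need = {"ram": 40, "cpu": 20}
produced = assign_production([3, 7, 2], 10, need)

# The optimal producer items together produce exactly what is needed:
for k in need:
    assert produced[0][k] + produced[1][k] == need[k]
print(produced[0]["ram"], produced[1]["ram"])  # 12.0 28.0
```

Because production is proportional to value, any producer selection with a smaller total value than Vopt(P) necessarily under-produces every resource, which is exactly the property the next paragraph relies on.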
The optimal items have the highest feasible total value for the given budget, and the sum of their values times the resource production ratio exactly equals the needed amount of each resource k:

TotalC(k) = (TotalC(k) / Vopt(P)) * sum_i Vopt_i

since sum_i Vopt_i = Vopt(P). Any other set of items must have a smaller total value and consequently will not provide sufficient resources for the optimal set of consumer items. To complete the co-design problem, we set the total knapsack size budget to the sum of the sizes of the two individual knapsacks.

5.2 ASCENT Scalability and Optimality

Experiment 1: Comparing ASCENT scalability to an exact technique. When designing a satellite, it is critical that designers can gauge the accuracy of their design techniques. Moreover, designers of a complicated satellite system need to know how different design techniques scale and which technique to use for a given problem size. This first set of experiments evaluates these questions for ASCENT and a branch-and-bound linear programming (LP) co-design technique. Although LP solvers can find optimal solutions to MMKP co-design problems, they have exponential time complexity. For large-scale co-design problems (such as designing a complicated climate monitoring satellite), LP solvers thus quickly become incapable of finding a solution in a reasonable time frame. We set up an experiment to compare the scalability of ASCENT to an LP technique. We randomly generated a series of problems ranging in size from 1 to 7 sets per hardware and software MMKP problem. Each set had 10 items. We tracked and compared the solving time for ASCENT and the LP technique as the number of sets grew. Figure 12 presents the results from the experiment.

Fig. 12. Solving Time for ASCENT vs. LP

As shown by the results, ASCENT scales significantly better than an LP-based approach.

Experiment 2: Testing ASCENT's solution optimality. Clearly, scalability is not the only characteristic of a good approximation algorithm.
A good approximation algorithm must also provide near-optimal results. We created an experiment to test the accuracy of ASCENT's solutions. We compared the value of ASCENT's answer to the optimal answer, valueof(ASCENT Solution) / valueof(Optimal Solution), for 5 different MMKP co-design problem sizes with 3 items per set. For each co-design problem size, we solved 50 different problem instances and averaged the results. Due to the Central Limit Theorem [17], a sample size of 30 or larger is often suggested to produce an approximately normal data distribution [13]. We chose a sample size of 50 to remain well above this recommended minimum. The largest problems, with 50 sets per MMKP problem, would be the equivalent of a satellite with 50 points of software variability and an additional 50 points of hardware variability. For problems with fewer than 7 sets per MMKP problem, we compared against the optimal answer produced with an LP solver. We chose a low number of items per set to decrease the time required by the LP solver and make the experiment feasible. For problems with more than 7 sets, which could not be solved in a timely manner with the LP technique, we used our co-design problem generation technique presented in Section 5.1. The problem generation technique allowed us to create random MMKP co-design problems for which we knew the exact optimal answer and could compare against ASCENT's answer. Figure 13 shows the results of the experiment to test ASCENT's solution value versus the optimal value over the 5 MMKP co-design problem sizes. With 5 sets, ASCENT

produces answers that average 90% optimal. With 7 sets, the answers average 95% optimal. Beyond 20 sets, the average optimality is 98% and continues to improve. These results are similar to those of MMKP approximation algorithms, such as M-HEU, that also improve with increasing numbers of sets [1]. We also found that increasing the number of items per set increased the optimality, which parallels the results for our solver, M-HEU [1].

Fig. 13. Solution Optimality vs Number of Sets

5.3 Solution Space Snapshot Resolution

Experiment 3: Measuring ASCENT's solution space snapshot accuracy. As part of the solving process, ASCENT not only returns the optimal valued solution for a co-design problem but also produces a data set to graph the optimal answer at each budget allocation. For the satellite example, the graph would show designers the design with the highest image processing accuracy for each ratio of budget allocation to software and hardware. We created an experiment to test how optimal each data point in this graph was. For this experiment, we generated co-design problems with fewer than 7 sets per MMKP problem and compared ASCENT's answer at each budget allocation to the optimal answer derived using an LP technique (more sets improves ASCENT's accuracy). For problems with 7 sets divided into 98 different budget allocations, ASCENT finds the same optimal solution as the LP solver more than 85% of the time. Figure 14 shows an example that compares the solution space graph produced by ASCENT to a solution space graph produced with an LP technique.

Fig. 14. Solution Value vs. Budget Allocation

The X-axis shows the percentage of the budget allocated to the software (consumer) MMKP problem. The Y-axis shows the total value of the MMKP co-design problem solution. The ASCENT solution space graph closely matches the actual solution space graph produced with the LP technique.

Experiment 4: Demonstrating the importance of solution space snapshot resolution.
A complicated challenge of applying search-based software engineering to hardware/software co-design problems is that design decisions are rarely as straightforward as identifying the design configuration that maximizes a specific property. For example, if one satellite configuration provides 98% of the accuracy of the most optimal configuration for 5% less cost, designers are likely to choose it. If designers have extensive experience in hardware development, they may favor a solution that is marginally more expensive but allocates more of the development to hardware, which they know well. Search-based software engineering techniques should therefore allow designers to iteratively tease these desired designs out of the solution space. ASCENT has a number of capabilities, beyond simply finding the optimal solution for a problem, that help designers find desirable solutions. First, as we describe below, ASCENT can produce different resolution images of the solution space by adjusting the granularity of the budget allocation steps (e.g., making smaller and more numerous allocation changes). ASCENT's other solution space analysis capabilities are presented in Section 5.4. The granularity of the step size greatly impacts the resolution, or detail, that can be seen in the solution space. To obtain the most accurate and informative solution space image, a small step size should be used. Figure 15(a) shows a solution space graph generated by ASCENT using 10 allocation steps. The X-axis is the percentage of budget allocated to software; the Y-axis is the total value of the solution. It appears that any allocation of 30% or more of the budget to software will produce a satellite with optimal image processing accuracy. Figure 15(b), however, shows the graph that results from solving the same problem with 20 allocation steps.
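The effect of step granularity can be illustrated with a synthetic value function; the function and all numbers here are invented for illustration and are not ASCENT output.

```python
# Synthetic solution-space value function: a broad optimal plateau plus
# one narrow near-optimal allocation that a coarse sweep steps over.
def solution_value(alloc):
    if 0.14 <= alloc <= 0.16:      # narrow unanticipated good design
        return 95
    return 100 if alloc >= 0.3 else 20

def snapshot(step):
    """Sample the solution space at the given allocation step size."""
    n = int(round(1 / step))
    return {round(i * step, 4): solution_value(round(i * step, 4))
            for i in range(n + 1)}

coarse, fine = snapshot(0.10), snapshot(0.01)
# Best value found below the plateau at each resolution:
print(max(v for a, v in coarse.items() if a < 0.3))  # 20 (peak missed)
print(max(v for a, v in fine.items() if a < 0.3))    # 95 (peak found)
```

The coarse sweep never samples the narrow peak, so the snapshot wrongly suggests that all low software allocations are poor; the fine sweep exposes it.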
It is important to note that, while allocating 30% or more of the budget to software still results in an optimal solution, there is another point that was absent from the previous graph: an allocation of 15% of the budget to software also results in a near optimal solution, which is an unanticipated good solution that favors hardware. The importance of a small step size is further demonstrated in Figure 15(c), which was produced with 100 allocation steps. Both previous graphs suggest that any allocation of greater than 30% for software would result in an optimal satellite design. Figure 15(c) shows, however, that there are many pitfalls in the 70% to 99% range that must be avoided. At these precise budget allocation points, there is not a good combination of hardware and software that will produce a good solution. This result may seem counter-intuitive. At these points, the previous good hardware solution is too expensive, but a different, more expensive software configuration with less resource consumption, which could fit on the cheaper available hardware configurations, is also not within budget. If any of these software allocation percentages were chosen arbitrarily, without creating a high quality graph of the solution space, the designer could unknowingly create a system that has 25% of the value

for the same cost.

Fig. 15. A Solution Space Graph at Varying Resolutions: (a) Low Resolution Solution Space Snapshot; (b) Medium Resolution Solution Space Snapshot; (c) High Resolution Solution Space Snapshot

5.4 Solution Space Analysis with ASCENT

Although ASCENT's ability to provide variable resolution solution space images is important, its greatest value stems from the variety of questions that can be answered from its output data. In the following results, we present representative solution space analyses that can be performed with ASCENT's output data.

Design analysis 1: Identifying low-cost viable designs. A common software engineering scenario is that a design need not necessarily be optimal as long as it provides a minimum required value or capability. For example, satellite designers want to find the cheapest designs that provide the required level of image processing accuracy. Figure 16(a) shows a graph that can be produced by taking the output data from ASCENT and graphing total actual solution cost, rather than value, as a function of budget allocation. This graph allows designers to ascertain key low cost designs in the solution space, and it can be further filtered to eliminate any solutions that do not meet a minimum value threshold. The resulting graph allows designers to find the lowest cost satellite co-design solution with a given image processing accuracy.

Fig. 16. Satellite Design Solution Space Analysis Graphs: (a) Solution Cost vs. Budget Allocation; (b) Solution Value vs. Budget Allocation; (c) Cost Effectiveness vs. Budget Allocation; (d) Budget Surplus vs. Budget Allocation

Design analysis 2: Determining budget allocation ratios. An important question to ask when designing a system is what budget allocations and solutions give the most value per unit of cost. In terms of the satellite example, the question would be what design gives the most accuracy for the money.
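Both of these analyses amount to simple post-processing of ASCENT-style output records; a sketch with hypothetical (allocation, cost, value) numbers:

```python
# Hypothetical ASCENT output records: one record per budget allocation.
designs = [
    {"alloc": 0.2, "cost": 70, "value": 1400},
    {"alloc": 0.4, "cost": 85, "value": 1650},
    {"alloc": 0.6, "cost": 60, "value": 1620},
    {"alloc": 0.8, "cost": 95, "value": 1700},
]

# Design analysis 1: cheapest design meeting a minimum required value.
viable = [d for d in designs if d["value"] >= 1600]
cheapest = min(viable, key=lambda d: d["cost"])
# Design analysis 2: best value per unit of cost among viable designs.
most_effective = max(viable, key=lambda d: d["value"] / d["cost"])
print(cheapest["alloc"], most_effective["alloc"])  # 0.6 0.6
```

Filtering first and ranking second mirrors the graphs in Figure 16: the same data set answers both questions without re-running the solver.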
Figure 16(c) shows another set of ASCENT output data that has been regraphed to show value/cost as a function of the

percentage of the budget allocated to software. It can clearly be seen that the designs with the best ratio of value to cost assign more of the budget to software. This graph can also easily be filtered to eliminate designs that do not provide a minimum level of value.

Design analysis 3: Finding designs that produce budget surpluses. Designers may wish to know how the resource slack values, such as how much RAM is unused, vary with different satellite designs. Another related question is how much of the budget will be left over for designs that provide a specified minimal level of image processing accuracy. We can use the same ASCENT output data to graph the budget surplus at a range of allocation values. Figure 16(d) shows the budget surplus from choosing various designs. The graph has been filtered to adhere to a requirement that the solution provide a value of at least 1,600; any data point with a value of less than 1,600 has had its surplus set to 0. Looking at the graph, we can see that the cheapest design that provides a value of at least 1,600 is found with a budget allocation of 80% software and 20% hardware. This design has a value of 1,600 and produces budget savings of 37%.

Design analysis 4: Evaluating design upgrade/downgrade cost. In some situations, designers may have a given solution and want to know how much it will cost, or save, to upgrade or downgrade the solution to a different image processing accuracy. For example, designers may be asked to provide a range of satellite options for their superiors that show what level of image processing accuracy they can provide at a number of price points. Figure 17 depicts another view of the ASCENT data that shows how cost varies in relation to the minimum required solution value. This graph shows that requiring even a slightly higher solution value can sharply increase the minimum cost. Alternatively, if the necessary value of the system is near the left edge of one of these cost plateaus, designers can make an informed decision on whether the increased value justifies the significantly increased cost.

Fig. 17. Cost of Increasing Solution Value

5.5 Summary of Empirical Results

The following is a summary of the empirical results presented above.

ASCENT Produces Answers that are 98% Optimal: As seen from the results in Figure 13, ASCENT generates answers that average 98% optimal for problems with a large number of sets in each MMKP problem. This result implies that ASCENT will perform well on large-scale MMKP co-design problems, such as the design of a large and complex satellite. Moreover, the larger the problem, the more accurate ASCENT's results. Systems of this scale would be nearly impossible to optimize without the search-based software engineering method provided by ASCENT.

High Resolution Solution Space Snapshots Can Identify Near-optimal Alternative Solutions: Another important result is that we demonstrated that capturing a high resolution solution space snapshot can identify unanticipated near optimal designs. These unanticipated designs correspond to peaks in the solution space graph at local maxima. In future work, we plan to develop algorithms that automatically increase the solution space snapshot resolution at and around these local maxima. Solving the large numbers of problems needed to produce a highly detailed solution space snapshot is too time-consuming and error-prone to perform manually.

ASCENT Output Data Can Answer Numerous Cost-based Design Questions to Iteratively Improve Solution Design: Since many design criteria cannot be completely formalized for a search solver, search-based software engineering should allow designers to iteratively hone in on the solutions they desire. The results demonstrated that each run of ASCENT allows designers to answer key questions related to the allocation of budget to hardware and software. For example, designers of a satellite can answer questions such as what allocation of budget to hardware and software produces the highest valued solution. Designers can also answer other previously difficult questions, such as how expensive it is to produce a solution with a given optimality.

6 RELATED WORK

Search-based software engineering has a large number of facets, ranging from the design of general approximation algorithms to the construction of search-based software engineering methods for specific problems. This section compares and contrasts ASCENT to search-based software engineering techniques related to (1) approximation algorithms for solving problems similar to the MMKP co-design problem, (2) methods for using search-based techniques to solve hardware/software partitioning problems, (3) methods for using approximation techniques for solving hardware/software scheduling problems, and (4) search-based software engineering techniques for determining project staffing.

MMKP approximation. Many problems similar to the hardware/software co-design problem presented in this paper have been solved with MMKP techniques. In multimedia systems, determining the quality settings that maximize the value of a series of data streams has been modeled as an MMKP problem [23, 2]. Other usages of MMKP include meta-scheduling for grid applications [28], optimally selecting


Branch-and-Bound Algorithms for Constrained Paths and Path Pairs and Their Application to Transparent WDM Networks Branch-and-Bound Algorithms for Constrained Paths and Path Pairs and Their Application to Transparent WDM Networks Franz Rambach Student of the TUM Telephone: 0049 89 12308564 Email: rambach@in.tum.de

More information

Lecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize.

Lecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize. Cornell University, Fall 2017 CS 6820: Algorithms Lecture notes on the simplex method September 2017 1 The Simplex Method We will present an algorithm to solve linear programs of the form maximize subject

More information

The Simplex Algorithm

The Simplex Algorithm The Simplex Algorithm Uri Feige November 2011 1 The simplex algorithm The simplex algorithm was designed by Danzig in 1947. This write-up presents the main ideas involved. It is a slight update (mostly

More information

1. Lecture notes on bipartite matching

1. Lecture notes on bipartite matching Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans February 5, 2017 1. Lecture notes on bipartite matching Matching problems are among the fundamental problems in

More information

Evaluation Report: Improving SQL Server Database Performance with Dot Hill AssuredSAN 4824 Flash Upgrades

Evaluation Report: Improving SQL Server Database Performance with Dot Hill AssuredSAN 4824 Flash Upgrades Evaluation Report: Improving SQL Server Database Performance with Dot Hill AssuredSAN 4824 Flash Upgrades Evaluation report prepared under contract with Dot Hill August 2015 Executive Summary Solid state

More information

MDA-based Configuration of Distributed Real-time and Embedded Systems. Abstract. 1. Introduction

MDA-based Configuration of Distributed Real-time and Embedded Systems. Abstract. 1. Introduction MDA-based Configuration of Distributed Real-time and Embedded Systems Brian Dougherty, Jules White, and Douglas C. Schmidt Institute for Software Integrated Systems Vanderbilt University, Nashville, TN

More information

Framework for Design of Dynamic Programming Algorithms

Framework for Design of Dynamic Programming Algorithms CSE 441T/541T Advanced Algorithms September 22, 2010 Framework for Design of Dynamic Programming Algorithms Dynamic programming algorithms for combinatorial optimization generalize the strategy we studied

More information

Hardware-Efficient Parallelized Optimization with COMSOL Multiphysics and MATLAB

Hardware-Efficient Parallelized Optimization with COMSOL Multiphysics and MATLAB Hardware-Efficient Parallelized Optimization with COMSOL Multiphysics and MATLAB Frommelt Thomas* and Gutser Raphael SGL Carbon GmbH *Corresponding author: Werner-von-Siemens Straße 18, 86405 Meitingen,

More information

Some Applications of Graph Bandwidth to Constraint Satisfaction Problems

Some Applications of Graph Bandwidth to Constraint Satisfaction Problems Some Applications of Graph Bandwidth to Constraint Satisfaction Problems Ramin Zabih Computer Science Department Stanford University Stanford, California 94305 Abstract Bandwidth is a fundamental concept

More information

Theorem 2.9: nearest addition algorithm

Theorem 2.9: nearest addition algorithm There are severe limits on our ability to compute near-optimal tours It is NP-complete to decide whether a given undirected =(,)has a Hamiltonian cycle An approximation algorithm for the TSP can be used

More information

Report on Cache-Oblivious Priority Queue and Graph Algorithm Applications[1]

Report on Cache-Oblivious Priority Queue and Graph Algorithm Applications[1] Report on Cache-Oblivious Priority Queue and Graph Algorithm Applications[1] Marc André Tanner May 30, 2014 Abstract This report contains two main sections: In section 1 the cache-oblivious computational

More information

Semi-Independent Partitioning: A Method for Bounding the Solution to COP s

Semi-Independent Partitioning: A Method for Bounding the Solution to COP s Semi-Independent Partitioning: A Method for Bounding the Solution to COP s David Larkin University of California, Irvine Abstract. In this paper we introduce a new method for bounding the solution to constraint

More information

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture 16 Cutting Plane Algorithm We shall continue the discussion on integer programming,

More information

Hoard: A Fast, Scalable, and Memory-Efficient Allocator for Shared-Memory Multiprocessors

Hoard: A Fast, Scalable, and Memory-Efficient Allocator for Shared-Memory Multiprocessors Hoard: A Fast, Scalable, and Memory-Efficient Allocator for Shared-Memory Multiprocessors Emery D. Berger Robert D. Blumofe femery,rdbg@cs.utexas.edu Department of Computer Sciences The University of Texas

More information

On the Max Coloring Problem

On the Max Coloring Problem On the Max Coloring Problem Leah Epstein Asaf Levin May 22, 2010 Abstract We consider max coloring on hereditary graph classes. The problem is defined as follows. Given a graph G = (V, E) and positive

More information

QLIKVIEW SCALABILITY BENCHMARK WHITE PAPER

QLIKVIEW SCALABILITY BENCHMARK WHITE PAPER QLIKVIEW SCALABILITY BENCHMARK WHITE PAPER Hardware Sizing Using Amazon EC2 A QlikView Scalability Center Technical White Paper June 2013 qlikview.com Table of Contents Executive Summary 3 A Challenge

More information

Worst-case running time for RANDOMIZED-SELECT

Worst-case running time for RANDOMIZED-SELECT Worst-case running time for RANDOMIZED-SELECT is ), even to nd the minimum The algorithm has a linear expected running time, though, and because it is randomized, no particular input elicits the worst-case

More information

CMSC 451: Lecture 22 Approximation Algorithms: Vertex Cover and TSP Tuesday, Dec 5, 2017

CMSC 451: Lecture 22 Approximation Algorithms: Vertex Cover and TSP Tuesday, Dec 5, 2017 CMSC 451: Lecture 22 Approximation Algorithms: Vertex Cover and TSP Tuesday, Dec 5, 2017 Reading: Section 9.2 of DPV. Section 11.3 of KT presents a different approximation algorithm for Vertex Cover. Coping

More information

A CSP Search Algorithm with Reduced Branching Factor

A CSP Search Algorithm with Reduced Branching Factor A CSP Search Algorithm with Reduced Branching Factor Igor Razgon and Amnon Meisels Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, 84-105, Israel {irazgon,am}@cs.bgu.ac.il

More information

Chapter 16 Heuristic Search

Chapter 16 Heuristic Search Chapter 16 Heuristic Search Part I. Preliminaries Part II. Tightly Coupled Multicore Chapter 6. Parallel Loops Chapter 7. Parallel Loop Schedules Chapter 8. Parallel Reduction Chapter 9. Reduction Variables

More information

Metaheuristic Optimization with Evolver, Genocop and OptQuest

Metaheuristic Optimization with Evolver, Genocop and OptQuest Metaheuristic Optimization with Evolver, Genocop and OptQuest MANUEL LAGUNA Graduate School of Business Administration University of Colorado, Boulder, CO 80309-0419 Manuel.Laguna@Colorado.EDU Last revision:

More information

The Encoding Complexity of Network Coding

The Encoding Complexity of Network Coding The Encoding Complexity of Network Coding Michael Langberg Alexander Sprintson Jehoshua Bruck California Institute of Technology Email: mikel,spalex,bruck @caltech.edu Abstract In the multicast network

More information

A Virtual Laboratory for Study of Algorithms

A Virtual Laboratory for Study of Algorithms A Virtual Laboratory for Study of Algorithms Thomas E. O'Neil and Scott Kerlin Computer Science Department University of North Dakota Grand Forks, ND 58202-9015 oneil@cs.und.edu Abstract Empirical studies

More information

Performance of relational database management

Performance of relational database management Building a 3-D DRAM Architecture for Optimum Cost/Performance By Gene Bowles and Duke Lambert As systems increase in performance and power, magnetic disk storage speeds have lagged behind. But using solidstate

More information

Analysis and Experimental Study of Heuristics for Job Scheduling Reoptimization Problems

Analysis and Experimental Study of Heuristics for Job Scheduling Reoptimization Problems Analysis and Experimental Study of Heuristics for Job Scheduling Reoptimization Problems Elad Iwanir and Tami Tamir Abstract Many real-life applications involve systems that change dynamically over time.

More information

Connections in Networks: A Hybrid Approach

Connections in Networks: A Hybrid Approach Connections in Networks: A Hybrid Approach Carla P. Gomes 1 Willem-Jan van Hoeve 2 Ashish Sabharwal 1 1 Department of Computer Science, Cornell University, Ithaca NY 14853, U.S.A. {gomes,sabhar}@cs.cornell.edu

More information

EXTRACTION OF RELEVANT WEB PAGES USING DATA MINING

EXTRACTION OF RELEVANT WEB PAGES USING DATA MINING Chapter 3 EXTRACTION OF RELEVANT WEB PAGES USING DATA MINING 3.1 INTRODUCTION Generally web pages are retrieved with the help of search engines which deploy crawlers for downloading purpose. Given a query,

More information

Categorizing Migrations

Categorizing Migrations What to Migrate? Categorizing Migrations A version control repository contains two distinct types of data. The first type of data is the actual content of the directories and files themselves which are

More information

c. Show that each unit fraction, with, can be written as a sum of two or more different unit fractions.

c. Show that each unit fraction, with, can be written as a sum of two or more different unit fractions. Illustrative Mathematics A-APR Egyptian Fractions II Alignments to Content Standards Alignment: A-APR.D.6 Tags This task is not yet tagged. Ancient Egyptians used unit fractions, such as and, to represent

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Information Systems and Machine Learning Lab (ISMLL) Tomáš Horváth 10 rd November, 2010 Informed Search and Exploration Example (again) Informed strategy we use a problem-specific

More information

11. APPROXIMATION ALGORITHMS

11. APPROXIMATION ALGORITHMS 11. APPROXIMATION ALGORITHMS load balancing center selection pricing method: vertex cover LP rounding: vertex cover generalized load balancing knapsack problem Lecture slides by Kevin Wayne Copyright 2005

More information

Multi-Way Number Partitioning

Multi-Way Number Partitioning Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI-09) Multi-Way Number Partitioning Richard E. Korf Computer Science Department University of California,

More information

A Fuzzy Logic Approach to Assembly Line Balancing

A Fuzzy Logic Approach to Assembly Line Balancing Mathware & Soft Computing 12 (2005), 57-74 A Fuzzy Logic Approach to Assembly Line Balancing D.J. Fonseca 1, C.L. Guest 1, M. Elam 1, and C.L. Karr 2 1 Department of Industrial Engineering 2 Department

More information

Mathematical and Algorithmic Foundations Linear Programming and Matchings

Mathematical and Algorithmic Foundations Linear Programming and Matchings Adavnced Algorithms Lectures Mathematical and Algorithmic Foundations Linear Programming and Matchings Paul G. Spirakis Department of Computer Science University of Patras and Liverpool Paul G. Spirakis

More information

Module 1 Lecture Notes 2. Optimization Problem and Model Formulation

Module 1 Lecture Notes 2. Optimization Problem and Model Formulation Optimization Methods: Introduction and Basic concepts 1 Module 1 Lecture Notes 2 Optimization Problem and Model Formulation Introduction In the previous lecture we studied the evolution of optimization

More information

Bi-Objective Optimization for Scheduling in Heterogeneous Computing Systems

Bi-Objective Optimization for Scheduling in Heterogeneous Computing Systems Bi-Objective Optimization for Scheduling in Heterogeneous Computing Systems Tony Maciejewski, Kyle Tarplee, Ryan Friese, and Howard Jay Siegel Department of Electrical and Computer Engineering Colorado

More information

/ Approximation Algorithms Lecturer: Michael Dinitz Topic: Linear Programming Date: 2/24/15 Scribe: Runze Tang

/ Approximation Algorithms Lecturer: Michael Dinitz Topic: Linear Programming Date: 2/24/15 Scribe: Runze Tang 600.469 / 600.669 Approximation Algorithms Lecturer: Michael Dinitz Topic: Linear Programming Date: 2/24/15 Scribe: Runze Tang 9.1 Linear Programming Suppose we are trying to approximate a minimization

More information

Cost Optimal Parallel Algorithm for 0-1 Knapsack Problem

Cost Optimal Parallel Algorithm for 0-1 Knapsack Problem Cost Optimal Parallel Algorithm for 0-1 Knapsack Problem Project Report Sandeep Kumar Ragila Rochester Institute of Technology sr5626@rit.edu Santosh Vodela Rochester Institute of Technology pv8395@rit.edu

More information

Linear programming II João Carlos Lourenço

Linear programming II João Carlos Lourenço Decision Support Models Linear programming II João Carlos Lourenço joao.lourenco@ist.utl.pt Academic year 2012/2013 Readings: Hillier, F.S., Lieberman, G.J., 2010. Introduction to Operations Research,

More information

Hardware-Software Codesign

Hardware-Software Codesign Hardware-Software Codesign 4. System Partitioning Lothar Thiele 4-1 System Design specification system synthesis estimation SW-compilation intellectual prop. code instruction set HW-synthesis intellectual

More information

Greedy Approximations

Greedy Approximations CS 787: Advanced Algorithms Instructor: Dieter van Melkebeek Greedy Approximations Approximation algorithms give a solution to a problem in polynomial time, at most a given factor away from the correct

More information

Welfare Navigation Using Genetic Algorithm

Welfare Navigation Using Genetic Algorithm Welfare Navigation Using Genetic Algorithm David Erukhimovich and Yoel Zeldes Hebrew University of Jerusalem AI course final project Abstract Using standard navigation algorithms and applications (such

More information

Approximation Algorithms

Approximation Algorithms Chapter 8 Approximation Algorithms Algorithm Theory WS 2016/17 Fabian Kuhn Approximation Algorithms Optimization appears everywhere in computer science We have seen many examples, e.g.: scheduling jobs

More information

α Coverage to Extend Network Lifetime on Wireless Sensor Networks

α Coverage to Extend Network Lifetime on Wireless Sensor Networks Noname manuscript No. (will be inserted by the editor) α Coverage to Extend Network Lifetime on Wireless Sensor Networks Monica Gentili Andrea Raiconi Received: date / Accepted: date Abstract An important

More information

Solving NP-hard Problems on Special Instances

Solving NP-hard Problems on Special Instances Solving NP-hard Problems on Special Instances Solve it in poly- time I can t You can assume the input is xxxxx No Problem, here is a poly-time algorithm 1 Solving NP-hard Problems on Special Instances

More information

Cache-Oblivious Traversals of an Array s Pairs

Cache-Oblivious Traversals of an Array s Pairs Cache-Oblivious Traversals of an Array s Pairs Tobias Johnson May 7, 2007 Abstract Cache-obliviousness is a concept first introduced by Frigo et al. in [1]. We follow their model and develop a cache-oblivious

More information

CaDAnCE: A Criticality-aware Deployment And Configuration Engine

CaDAnCE: A Criticality-aware Deployment And Configuration Engine CaDAnCE: A Criticality-aware Deployment And Configuration Engine Gan Deng, Douglas C. Schmidt, Aniruddha Gokhale Dept. of EECS, Vanderbilt University, Nashville, TN {dengg,schmidt,gokhale}@dre.vanderbilt.edu

More information

Clustering. Informal goal. General types of clustering. Applications: Clustering in information search and analysis. Example applications in search

Clustering. Informal goal. General types of clustering. Applications: Clustering in information search and analysis. Example applications in search Informal goal Clustering Given set of objects and measure of similarity between them, group similar objects together What mean by similar? What is good grouping? Computation time / quality tradeoff 1 2

More information

Splitter Placement in All-Optical WDM Networks

Splitter Placement in All-Optical WDM Networks plitter Placement in All-Optical WDM Networks Hwa-Chun Lin Department of Computer cience National Tsing Hua University Hsinchu 3003, TAIWAN heng-wei Wang Institute of Communications Engineering National

More information

Algorithms in Systems Engineering IE172. Midterm Review. Dr. Ted Ralphs

Algorithms in Systems Engineering IE172. Midterm Review. Dr. Ted Ralphs Algorithms in Systems Engineering IE172 Midterm Review Dr. Ted Ralphs IE172 Midterm Review 1 Textbook Sections Covered on Midterm Chapters 1-5 IE172 Review: Algorithms and Programming 2 Introduction to

More information

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18 601.433/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18 22.1 Introduction We spent the last two lectures proving that for certain problems, we can

More information

Graph Structure Over Time

Graph Structure Over Time Graph Structure Over Time Observing how time alters the structure of the IEEE data set Priti Kumar Computer Science Rensselaer Polytechnic Institute Troy, NY Kumarp3@rpi.edu Abstract This paper examines

More information

Lecture 2. 1 Introduction. 2 The Set Cover Problem. COMPSCI 632: Approximation Algorithms August 30, 2017

Lecture 2. 1 Introduction. 2 The Set Cover Problem. COMPSCI 632: Approximation Algorithms August 30, 2017 COMPSCI 632: Approximation Algorithms August 30, 2017 Lecturer: Debmalya Panigrahi Lecture 2 Scribe: Nat Kell 1 Introduction In this lecture, we examine a variety of problems for which we give greedy approximation

More information

Co-synthesis and Accelerator based Embedded System Design

Co-synthesis and Accelerator based Embedded System Design Co-synthesis and Accelerator based Embedded System Design COE838: Embedded Computer System http://www.ee.ryerson.ca/~courses/coe838/ Dr. Gul N. Khan http://www.ee.ryerson.ca/~gnkhan Electrical and Computer

More information

Applied Algorithm Design Lecture 3

Applied Algorithm Design Lecture 3 Applied Algorithm Design Lecture 3 Pietro Michiardi Eurecom Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 3 1 / 75 PART I : GREEDY ALGORITHMS Pietro Michiardi (Eurecom) Applied Algorithm

More information

2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006

2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 The Encoding Complexity of Network Coding Michael Langberg, Member, IEEE, Alexander Sprintson, Member, IEEE, and Jehoshua Bruck,

More information

Supplementary Materials for DVQA: Understanding Data Visualizations via Question Answering

Supplementary Materials for DVQA: Understanding Data Visualizations via Question Answering Supplementary Materials for DVQA: Understanding Data Visualizations via Question Answering Kushal Kafle 1, Brian Price 2 Scott Cohen 2 Christopher Kanan 1 1 Rochester Institute of Technology 2 Adobe Research

More information

9.4 SOME CHARACTERISTICS OF INTEGER PROGRAMS A SAMPLE PROBLEM

9.4 SOME CHARACTERISTICS OF INTEGER PROGRAMS A SAMPLE PROBLEM 9.4 SOME CHARACTERISTICS OF INTEGER PROGRAMS A SAMPLE PROBLEM Whereas the simplex method is effective for solving linear programs, there is no single technique for solving integer programs. Instead, a

More information

DESIGN AND ANALYSIS OF ALGORITHMS. Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES

DESIGN AND ANALYSIS OF ALGORITHMS. Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES DESIGN AND ANALYSIS OF ALGORITHMS Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES http://milanvachhani.blogspot.in USE OF LOOPS As we break down algorithm into sub-algorithms, sooner or later we shall

More information

Worst-Case Utilization Bound for EDF Scheduling on Real-Time Multiprocessor Systems

Worst-Case Utilization Bound for EDF Scheduling on Real-Time Multiprocessor Systems Worst-Case Utilization Bound for EDF Scheduling on Real-Time Multiprocessor Systems J.M. López, M. García, J.L. Díaz, D.F. García University of Oviedo Department of Computer Science Campus de Viesques,

More information

Module 4. Constraint satisfaction problems. Version 2 CSE IIT, Kharagpur

Module 4. Constraint satisfaction problems. Version 2 CSE IIT, Kharagpur Module 4 Constraint satisfaction problems Lesson 10 Constraint satisfaction problems - II 4.5 Variable and Value Ordering A search algorithm for constraint satisfaction requires the order in which variables

More information

Algorithms for Integer Programming

Algorithms for Integer Programming Algorithms for Integer Programming Laura Galli November 9, 2016 Unlike linear programming problems, integer programming problems are very difficult to solve. In fact, no efficient general algorithm is

More information

Storage Allocation Based on Client Preferences

Storage Allocation Based on Client Preferences The Interdisciplinary Center School of Computer Science Herzliya, Israel Storage Allocation Based on Client Preferences by Benny Vaksendiser Prepared under the supervision of Dr. Tami Tamir Abstract Video

More information

Technische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik. Combinatorial Optimization (MA 4502)

Technische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik. Combinatorial Optimization (MA 4502) Technische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik Combinatorial Optimization (MA 4502) Dr. Michael Ritter Problem Sheet 4 Homework Problems Problem

More information

Greedy Algorithms. Previous Examples: Huffman coding, Minimum Spanning Tree Algorithms

Greedy Algorithms. Previous Examples: Huffman coding, Minimum Spanning Tree Algorithms Greedy Algorithms A greedy algorithm is one where you take the step that seems the best at the time while executing the algorithm. Previous Examples: Huffman coding, Minimum Spanning Tree Algorithms Coin

More information

31.6 Powers of an element

31.6 Powers of an element 31.6 Powers of an element Just as we often consider the multiples of a given element, modulo, we consider the sequence of powers of, modulo, where :,,,,. modulo Indexing from 0, the 0th value in this sequence

More information

Algorithms and Data Structures

Algorithms and Data Structures Algorithms and Data Structures Spring 2019 Alexis Maciel Department of Computer Science Clarkson University Copyright c 2019 Alexis Maciel ii Contents 1 Analysis of Algorithms 1 1.1 Introduction.................................

More information

Algorithms for Storage Allocation Based on Client Preferences

Algorithms for Storage Allocation Based on Client Preferences Algorithms for Storage Allocation Based on Client Preferences Tami Tamir Benny Vaksendiser Abstract We consider a packing problem arising in storage management of Video on Demand (VoD) systems. The system

More information