CHAPTER 10: ALGORITHM DESIGN TECHNIQUES


So far, we have been concerned with the efficient implementation of algorithms. We have seen that when an algorithm is given, the actual data structures need not be specified. It is up to the programmer to choose the appropriate data structure in order to make the running time as small as possible.

In this chapter, we switch our attention from the implementation of algorithms to the design of algorithms. Most of the algorithms that we have seen so far are straightforward and simple. Chapter 9 contains some algorithms that are much more subtle, and some require an argument (in some cases lengthy) to show that they are indeed correct. In this chapter, we will focus on five of the common types of algorithms used to solve problems. For many problems, it is quite likely that at least one of these methods will work. Specifically, for each type of algorithm we will

- See the general approach.
- Look at several examples (the exercises at the end of the chapter provide many more examples).
- Discuss, in general terms, the time and space complexity, where appropriate.

Greedy Algorithms

The first type of algorithm we will examine is the greedy algorithm. We have already seen three greedy algorithms in Chapter 9: Dijkstra's, Prim's, and Kruskal's algorithms. Greedy algorithms work in phases. In each phase, a decision is made that appears to be good, without regard for future consequences. Generally, this means that some local optimum is chosen. This "take what you can get now" strategy is the source of the name for this class of algorithms. When the algorithm terminates, we hope that the local optimum is equal to the global optimum. If this is the case, then the algorithm is correct; otherwise, the algorithm has produced a suboptimal solution. If the absolute best answer is not required, then simple greedy algorithms are sometimes used to generate approximate answers, rather than using the more complicated algorithms generally required to generate an exact answer.

There are several real-life examples of greedy algorithms. The most obvious is the coin-changing problem. To make change in U.S. currency, we repeatedly dispense the largest denomination. Thus, to give out seventeen dollars and sixty-one cents in change, we give out a ten-dollar bill, a five-dollar bill, two one-dollar bills, two quarters, one dime, and one penny. By doing this, we are guaranteed to minimize the number of bills and coins. This algorithm does not work in all monetary systems, but fortunately, we can prove that it does work in the American monetary system. Indeed, it works even if two-dollar bills and fifty-cent pieces are allowed.

Traffic problems provide an example where making locally optimal choices does not always work. For example, during certain rush hour times in Miami, it is best to stay off the prime streets even if they look empty, because traffic will come to a standstill a mile down the road, and you will be stuck. Even more shocking, it is better in some cases to make a temporary detour in the direction opposite your destination in order to avoid all traffic bottlenecks.

In the remainder of this section, we will look at several applications that use greedy algorithms. The first application is a simple scheduling problem. Virtually all scheduling problems are either NP-complete (or of similar difficult complexity) or are solvable by a greedy algorithm. The second application deals with file compression and is one of the earliest results in computer science. Finally, we will look at an example of a greedy approximation algorithm.

A Simple Scheduling Problem

We are given jobs j_1, j_2, ..., j_n, all with known running times t_1, t_2, ..., t_n, respectively. We have a single processor. What is the best way to schedule these jobs in order to minimize the average completion time? In this entire section, we will assume nonpreemptive scheduling: Once a job is started, it must run to completion.
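Returning to coin changing for a moment, the repeated-largest-denomination strategy is short enough to sketch in C. This is an illustrative fragment, not code from the text; the function name and the cents-based representation are our own:

```c
/* Greedy change-making, amounts in cents. denoms[] must be sorted
 * largest first; returns the total number of bills and coins dispensed.
 * (Illustrative sketch; correct for U.S. denominations, but not for
 * arbitrary monetary systems, as noted above.) */
int make_change(int amount, const int denoms[], int ndenoms)
{
    int pieces = 0;
    for (int i = 0; i < ndenoms; i++) {
        pieces += amount / denoms[i];   /* dispense the largest denomination repeatedly */
        amount %= denoms[i];            /* then move on to the next smaller one */
    }
    return pieces;
}
```

For $17.61 with denominations {1000, 500, 100, 25, 10, 5, 1} (in cents), this dispenses the eight pieces listed above.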
As an example, suppose we have the four jobs and associated running times shown in Figure 10.1. One possible schedule is shown in Figure 10.2. Because j_1 finishes in 15 (time units), j_2 in 23, j_3 in 26, and j_4 in 36, the average completion time is 25. A better schedule, which yields a mean completion time of 17.75, is shown in Figure 10.3.

The schedule given in Figure 10.3 is arranged by shortest job first. We can show that this will always yield an optimal schedule. Let the jobs in the schedule be j_{i_1}, j_{i_2}, ..., j_{i_n}. The first job finishes in time t_{i_1}. The second job finishes after t_{i_1} + t_{i_2}, and the third job finishes after t_{i_1} + t_{i_2} + t_{i_3}. From this, we see that the total cost, C, of the schedule is

    C = Σ_{k=1}^{n} (n - k + 1) t_{i_k}    (10.1)

    C = (n + 1) Σ_{k=1}^{n} t_{i_k} - Σ_{k=1}^{n} k · t_{i_k}    (10.2)

Job    Time
j_1     15
j_2      8
j_3      3
j_4     10

Figure 10.1 Jobs and times

Figure 10.2 Schedule #1

Figure 10.3 Schedule #2 (optimal)

Notice that in Equation (10.2), the first sum is independent of the job ordering, so only the second sum affects the total cost. Suppose that in an ordering there exists some x > y such that t_{i_x} < t_{i_y}. Then a calculation shows that by swapping j_{i_x} and j_{i_y}, the second sum increases, decreasing the total cost. Thus, any schedule of jobs in which the times are not monotonically nondecreasing must be suboptimal. The only schedules left are those in which the jobs are arranged by smallest running time first, breaking ties arbitrarily.

This result indicates the reason the operating system scheduler generally gives precedence to shorter jobs.

The Multiprocessor Case

We can extend this problem to the case of several processors. Again we have jobs j_1, j_2, ..., j_n, with associated running times t_1, t_2, ..., t_n, and a number P of processors. We will assume, without loss of generality, that the jobs are ordered, shortest running time first. As an example, suppose P = 3, and the jobs are as shown in Figure 10.4. Figure 10.5 shows an optimal arrangement to minimize mean completion time. Jobs j_1, j_4, and j_7 are run on Processor 1. Processor 2 handles j_2, j_5, and j_8, and Processor 3 runs the remaining jobs. The total time to completion is 165, for an average of 165/9 = 18.33.

The algorithm to solve the multiprocessor case is to start jobs in order, cycling through processors. It is not hard to show that no other ordering can do better,

although if the number of processors P evenly divides the number of jobs n, there are many optimal orderings. This is obtained by, for each 0 ≤ i < n/P, placing each of the jobs j_{iP+1} through j_{(i+1)P} on a different processor. In our case, Figure 10.6 shows a second optimal solution.

Job    Time
j_1      3
j_2      5
j_3      6
j_4     10
j_5     11
j_6     14
j_7     15
j_8     18
j_9     20

Figure 10.4 Jobs and times

Figure 10.5 An optimal solution for the multiprocessor case

Even if P does not divide n exactly, there can still be many optimal solutions, even if all the job times are distinct. We leave further investigation of this as an exercise.

Minimizing the Final Completion Time

We close this section by considering a very similar problem. Suppose we are only concerned with when the last job finishes. In our two examples above, these completion times are 40 and 38, respectively. Figure 10.7 shows that the minimum final completion time is 34, and this clearly cannot be improved, because every processor is always busy. Although this schedule does not have minimum mean completion time, it has merit in that the completion time of the entire sequence is earlier. If the same user owns all these jobs, then this is the preferable method of scheduling. Although these problems are very similar, this new problem turns out to be NP-complete; it is just another way of phrasing the knapsack or bin-packing problems, which we will encounter later in this section. Thus, minimizing the final completion time is apparently much harder than minimizing the mean completion time.

Figure 10.6 A second optimal solution for the multiprocessor case
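The start-jobs-in-order, cycle-through-processors rule is easy to sketch in C. The code below is our own illustration, not the text's; with P = 1 it also covers the single-processor shortest-job-first case:

```c
#include <stdlib.h>

/* comparator for qsort: ascending running times (shortest job first) */
static int cmp_time(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* Sort the jobs shortest-first, deal them round-robin to P processors,
 * and return the mean completion time. Illustrative sketch; names ours. */
double mean_completion(int *t, int n, int P)
{
    qsort(t, n, sizeof(int), cmp_time);
    long *busy = calloc(P, sizeof(long));   /* finish time of each processor so far */
    long total = 0;
    for (int i = 0; i < n; i++) {
        busy[i % P] += t[i];                /* job i runs next on processor i mod P */
        total += busy[i % P];               /* its completion time joins the sum */
    }
    free(busy);
    return (double)total / n;
}
```

On the jobs of Figure 10.4 with P = 3 this reproduces the total of 165 (mean 18.33); on the jobs of Figure 10.1 with P = 1 it reproduces the mean of 17.75.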

Figure 10.7 Minimizing the final completion time

Huffman Codes

In this section, we consider a second application of greedy algorithms, known as file compression. The normal ASCII character set consists of roughly 100 "printable" characters. In order to distinguish these characters, ⌈log 100⌉ = 7 bits are required. Seven bits allow the representation of 128 characters, so the ASCII character set adds some other "nonprintable" characters. An eighth bit is added as a parity check. The important point, however, is that if the size of the character set is C, then ⌈log C⌉ bits are needed in a standard encoding.

Suppose we have a file that contains only the characters a, e, i, s, t, plus blank spaces and newlines. Suppose further, that the file has ten a's, fifteen e's, twelve i's, three s's, four t's, thirteen blanks, and one newline. As the table in Figure 10.8 shows, this file requires 174 bits to represent, since there are 58 characters and each character requires three bits.

Character   Code   Frequency   Total Bits
a           000        10          30
e           001        15          45
i           010        12          36
s           011         3           9
t           100         4          12
space       101        13          39
newline     110         1           3
                           Total  174

Figure 10.8 Using a standard coding scheme

In real life, files can be quite large. Many of the very large files are the output of some program, and there is usually a big disparity between the most frequent and least frequent characters. For instance, many large data files have an inordinately large amount of digits, blanks, and newlines, but few q's and x's. We might be interested in reducing the file size in the case where we are transmitting it over a slow phone line. Also, since on virtually every machine disk space is precious, one might wonder if it would be possible to provide a better code and reduce the total number of bits required. The answer is that this is possible, and a simple strategy achieves 25 percent savings on typical large files and as much as 50 to 60 percent savings on many

large data files. The general strategy is to allow the code length to vary from character to character and to ensure that the frequently occurring characters have short codes. Notice that if all the characters occur with the same frequency, then there are not likely to be any savings.

The binary code that represents the alphabet can be represented by the binary tree shown in Figure 10.9. The tree in Figure 10.9 has data only at the leaves. The representation of each character can be found by starting at the root and recording the path, using a 0 to indicate the left branch and a 1 to indicate the right branch. For instance, s is reached by going left, then right, and finally right. This is encoded as 011. This data structure is sometimes referred to as a trie. If character c_i is at depth d_i and occurs f_i times, then the cost of the code is equal to Σ d_i f_i.

Figure 10.9 Representation of the original code in a tree

Figure 10.10 A slightly better tree

A better code than the one given in Figure 10.9 can be obtained by noticing that the newline is an only child. By placing the newline symbol one level higher at its parent, we obtain the new tree in Figure 10.10. This new tree has a cost of 173, but is still far from optimal. Notice that the tree in Figure 10.10 is a full tree: All nodes either are leaves or have two children. An optimal code will always have this property, since otherwise, as we have already seen, nodes with only one child could move up a level.

If the characters are placed only at the leaves, any sequence of bits can always be decoded unambiguously. For instance, suppose the encoded string is 0100111100010110001000111. 0 is not a character code, 01 is not a character code, but 010 represents i, so the first character is i. Then 011 follows, giving an s. Then 11 follows, which is a newline. The remainder of the code is a, space, t, i, e, and newline. Thus, it does not matter if the character codes are different lengths, as long as no character code is a prefix of another character code. Such an encoding is known as a prefix code.
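The decoding walk just described, repeatedly matching the unique codeword that begins the remaining bits, can be sketched as follows. This is an illustrative fragment (table-driven rather than tree-driven, and the names are ours); because the code is prefix-free, at most one table entry can match at each position:

```c
#include <string.h>

typedef struct {
    char symbol;        /* the character this codeword encodes */
    const char *code;   /* its codeword, as a string of '0'/'1' */
} CodeEntry;

/* Decode bits under a prefix code into out (which must be large enough).
 * Returns the number of characters decoded, or -1 on invalid input. */
int decode(const CodeEntry *table, int ntable, const char *bits, char *out)
{
    size_t pos = 0, n = strlen(bits);
    int outlen = 0;
    while (pos < n) {
        int matched = 0;
        for (int i = 0; i < ntable; i++) {
            size_t len = strlen(table[i].code);
            if (pos + len <= n && strncmp(bits + pos, table[i].code, len) == 0) {
                out[outlen++] = table[i].symbol;  /* unique match: no code is a prefix of another */
                pos += len;
                matched = 1;
                break;
            }
        }
        if (!matched)
            return -1;  /* bits do not begin with any codeword */
    }
    out[outlen] = '\0';
    return outlen;
}
```

Running this with the code of Figure 10.10 (a = 000, e = 001, i = 010, s = 011, t = 100, space = 101, newline = 11) on the bit string above recovers the nine characters of the example.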
Conversely, if a character is contained in a nonleaf node, it is no longer possible to guarantee that the decoding will be unambiguous.

Putting these facts together, we see that our basic problem is to find the full binary tree of minimum total cost (as defined above), where all characters are

contained in the leaves. The tree in Figure 10.11 shows the optimal tree for our sample alphabet. As can be seen in Figure 10.12, this code uses only 146 bits.

Figure 10.11 Optimal prefix code

Character   Code     Frequency   Total Bits
a           001          10          30
e           01           15          30
i           10           12          24
s           00000         3          15
t           0001          4          16
space       11           13          26
newline     00001         1           5
                             Total  146

Figure 10.12 Optimal prefix code

Notice that there are many optimal codes. These can be obtained by swapping children in the encoding tree. The main unresolved question, then, is how the coding tree is constructed. The algorithm to do this was given by Huffman in 1952. Thus, this coding system is commonly referred to as a Huffman code.

Huffman's Algorithm

Throughout this section we will assume that the number of characters is C. Huffman's algorithm can be described as follows: We maintain a forest of trees. The weight of a tree is equal to the sum of the frequencies of its leaves. C - 1 times, select the two trees, T_1 and T_2, of smallest weight, breaking ties arbitrarily, and form a new tree with subtrees T_1 and T_2. At the beginning of the algorithm, there are C single-node trees, one for each character. At the end of the algorithm there is one tree, and this is the optimal Huffman coding tree.

A worked example will make the operation of the algorithm clear. Figure 10.13 shows the initial forest; the weight of each tree is shown in small type at the root. The two trees of lowest weight are merged together, creating the forest shown in Figure 10.14. We will name the new root T1, so that future merges can be stated unambiguously. We have made s the left child arbitrarily; any tiebreaking procedure can be used. The total weight of the new tree is just the sum of the weights of the old trees, and can thus be easily computed. It is also a simple matter to create the new tree, since we merely need to get a new node, set the left and right pointers, and record the weight.
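The merging loop can be sketched in C as follows. This is our own illustration, not the book's code; it uses the simple quadratic selection of the two lightest trees (rather than a priority queue), which, as noted later in this section, is acceptable when C is small:

```c
#include <stdlib.h>

typedef struct Node {
    long weight;                  /* sum of the frequencies of this tree's leaves */
    struct Node *left, *right;    /* NULL for a single-character leaf */
} Node;

/* cost of a code = sum over leaves of (depth × frequency) */
static long tree_cost(const Node *t, int depth)
{
    if (t->left == NULL && t->right == NULL)
        return (long)depth * t->weight;
    return tree_cost(t->left, depth + 1) + tree_cost(t->right, depth + 1);
}

static void free_tree(Node *t)
{
    if (t == NULL) return;
    free_tree(t->left);
    free_tree(t->right);
    free(t);
}

/* Run Huffman's algorithm on C frequencies; return the total cost in bits. */
long huffman_cost(const long *freq, int C)
{
    Node **forest = malloc(C * sizeof(Node *));
    for (int i = 0; i < C; i++) {
        forest[i] = malloc(sizeof(Node));
        forest[i]->weight = freq[i];
        forest[i]->left = forest[i]->right = NULL;
    }
    for (int trees = C; trees > 1; trees--) {
        int a = 0, b = 1;                         /* indices of the two lightest trees */
        if (forest[b]->weight < forest[a]->weight) { a = 1; b = 0; }
        for (int i = 2; i < trees; i++) {
            if (forest[i]->weight < forest[a]->weight)      { b = a; a = i; }
            else if (forest[i]->weight < forest[b]->weight) { b = i; }
        }
        Node *merged = malloc(sizeof(Node));      /* merge them, ties broken arbitrarily */
        merged->weight = forest[a]->weight + forest[b]->weight;
        merged->left = forest[a];
        merged->right = forest[b];
        if (a > b) { int tmp = a; a = b; b = tmp; }
        forest[a] = merged;                       /* merged tree replaces one slot... */
        forest[b] = forest[trees - 1];            /* ...and the last tree fills the other */
    }
    long cost = tree_cost(forest[0], 0);
    free_tree(forest[0]);
    free(forest);
    return cost;
}
```

On the frequencies of our running example (10, 15, 12, 3, 4, 13, 1) this produces the 146-bit cost of Figure 10.12.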

Figure 10.13 Initial stage of Huffman's algorithm

Figure 10.14 Huffman's algorithm after the first merge

Figure 10.15 Huffman's algorithm after the second merge

Figure 10.16 Huffman's algorithm after the third merge

Now there are six trees, and we again select the two trees of smallest weight. These happen to be T1 and t, which are then merged into a new tree with root T2 and weight 8. This is shown in Figure 10.15. The third step merges T2 and a, creating T3, with weight 10 + 8 = 18. Figure 10.16 shows the result of this operation.

After the third merge is completed, the two trees of lowest weight are the single-node trees representing i and the blank space. Figure 10.17 shows how these trees are merged into the new tree with root T4. The fifth step is to merge the trees with roots e and T3, since these trees have the two smallest weights. The result of this step is shown in Figure 10.18. Finally, the optimal tree, which was shown in Figure 10.11, is obtained by merging the two remaining trees. Figure 10.19 shows this optimal tree, with root T6.

Figure 10.17 Huffman's algorithm after the fourth merge

Figure 10.18 Huffman's algorithm after the fifth merge

Figure 10.19 Huffman's algorithm after the final merge

We will sketch the ideas involved in proving that Huffman's algorithm yields an optimal code; we will leave the details as an exercise. First, it is not hard to show by contradiction that the tree must be full, since we have already seen how a tree that is not full is improved. Next, we must show that the two least frequent characters α and β must be the two deepest nodes (although other nodes may be as deep). Again, this is easy to show by contradiction, since if either α or β is not a deepest node, then

there must be some γ that is (recall that the tree is full). If α is less frequent than γ, then we can improve the cost by swapping them in the tree. We can then argue that the characters in any two nodes at the same depth can be swapped without affecting optimality. This shows that an optimal tree can always be found that contains the two least frequent symbols as siblings; thus the first step is not a mistake.

The proof can be completed by using an induction argument. As trees are merged, we consider the new character set to be the characters in the roots. Thus, in our example, after four merges, we can view the character set as consisting of e and the metacharacters T3 and T4. This is probably the trickiest part of the proof; you are urged to fill in all of the details.

The reason that this is a greedy algorithm is that at each stage we perform a merge without regard to global considerations. We merely select the two smallest trees. If we maintain the trees in a priority queue, ordered by weight, then the running time is O(C log C), since there will be one build_heap, 2C - 2 delete_mins, and C - 2 inserts, on a priority queue that never has more than C elements. A simple implementation of the priority queue, using a linked list, would give an O(C^2) algorithm. The choice of priority queue implementation depends on how large C is. In the typical case of an ASCII character set, C is small enough that the quadratic running time is acceptable. In such an application, virtually all the running time will be spent on the disk I/O required to read the input file and write out the compressed version.

There are two details that must be considered. First, the encoding information must be transmitted at the start of the compressed file, since otherwise it will be impossible to decode. There are several ways of doing this; see the exercises. For small files, the cost of transmitting this table will override any possible savings in compression, and the result will probably be file expansion. Of course, this can be detected and the original left intact. For large files, the size of the table is not significant.
The second problem is that as described, this is a two-pass algorithm. The first pass collects the frequency data and the second pass does the encoding. This is obviously not a desirable property for a program dealing with large files. Some alternatives are described in the references.

Approximate Bin Packing

In this section, we will consider some algorithms to solve the bin packing problem. These algorithms will run quickly but will not necessarily produce optimal solutions. We will prove, however, that the solutions that are produced are not too far from optimal.

We are given n items of sizes s_1, s_2, ..., s_n. All sizes satisfy 0 < s_i ≤ 1. The problem is to pack these items in the fewest number of bins, given that each bin has unit capacity. As an example, Figure 10.20 shows an optimal packing for an item list with sizes 0.2, 0.5, 0.4, 0.7, 0.1, 0.3, 0.8.

Figure 10.20 Optimal packing for 0.2, 0.5, 0.4, 0.7, 0.1, 0.3, 0.8

There are two versions of the bin packing problem. The first version is on-line bin packing. In this version, each item must be placed in a bin before the next item can be processed. The second version is the off-line bin packing problem. In an off-line algorithm, we do not need to do anything until all the input has been read. The distinction between on-line and off-line algorithms was discussed in Section 8.2.

On-line Algorithms

The first issue to consider is whether or not an on-line algorithm can actually always give an optimal answer, even if it is allowed unlimited computation. Remember that even though unlimited computation is allowed, an on-line algorithm must place an item before processing the next item and cannot change its decision.

To show that an on-line algorithm cannot always give an optimal solution, we will give it particularly difficult data to work on. Consider an input sequence I_1 of m small items of weight 1/2 - ε followed by m large items of weight 1/2 + ε, 0 < ε < 0.01. It is clear that these items can be packed in m bins if we place one small item and one large item in each bin. Suppose there were an optimal on-line algorithm A that could perform this packing. Consider the operation of algorithm A on the sequence I_2, consisting of only m small items of weight 1/2 - ε. I_2 can be packed in ⌈m/2⌉ bins. However, A will place each item in a separate bin, since A must yield the same results on I_2 as it does for the first half of I_1, since the first half of I_1 is exactly the same input as I_2.
This means that A will use twice as many bins as is optimal for I_2. What we have proven is that there is no optimal algorithm for on-line bin packing.

What the argument above shows is that an on-line algorithm never knows when the input might end, so any performance guarantees it provides must hold at every instant throughout the algorithm. If we follow the foregoing strategy, we can prove the following.

THEOREM 10.1.

There are inputs that force any on-line bin-packing algorithm to use at least 4/3 the optimal number of bins.

PROOF:

Suppose otherwise, and suppose for simplicity that m is even. Consider any on-line algorithm A running on the input sequence I_1, above. Recall that this sequence consists of m small items followed by m large items. Let us consider what the algorithm A has done after processing the mth item. Suppose A has already used b bins. At this point in the algorithm, the optimal number of bins is m/2, because we can place two elements in each bin. Thus we know that 2b/m < 4/3, by our assumption of a better-than-4/3 performance guarantee.

Now consider the performance of algorithm A after all items have been packed. All bins created after the bth bin must contain exactly one item, since all small items are placed in the first b bins, and two large items will not fit in a bin. Since the first b bins can have at most two items each, and the remaining bins have one item each, we see that packing 2m items will require at least 2m - b bins. Since the 2m items can be optimally packed using m bins, our performance guarantee assures us that (2m - b)/m < 4/3.

Figure 10.21 Next fit for 0.2, 0.5, 0.4, 0.7, 0.1, 0.3, 0.8

The first inequality implies that b/m < 2/3, and the second inequality implies that b/m > 2/3, which is a contradiction. Thus, no on-line algorithm can guarantee that it will produce a packing with less than 4/3 the optimal number of bins.

There are three simple algorithms that guarantee that the number of bins used is no more than twice optimal. There are also quite a few more complicated algorithms with better guarantees.

Next Fit

Probably the simplest algorithm is next fit. When processing any item, we check to see whether it fits in the same bin as the last item. If it does, it is placed there;

otherwise, a new bin is created. This algorithm is incredibly simple to implement and runs in linear time. Figure 10.21 shows the packing produced for the same input as Figure 10.20.

Not only is next fit simple to program, its worst-case behavior is also easy to analyze.

THEOREM 10.2.

Let m be the optimal number of bins required to pack a list I of items. Then next fit never uses more than 2m bins. There exist sequences such that next fit uses 2m - 2 bins.

PROOF:

Consider any adjacent bins B_j and B_{j+1}. The sum of the sizes of all items in B_j and B_{j+1} must be larger than 1, since otherwise all of these items would have been placed in B_j. If we apply this result to all pairs of adjacent bins, we see that at most half of the space is wasted. Thus next fit uses at most twice the number of bins.

To see that this bound is tight, suppose that the n items have size s_i = 0.5 if i is odd and s_i = 2/n if i is even. Assume n is divisible by 4. The optimal packing, shown in Figure 10.22, consists of n/4 bins, each containing 2 elements of size 0.5, and one bin containing the n/2 elements of size 2/n, for a total of (n/4) + 1 bins. Figure 10.23 shows that next fit uses n/2 bins. Thus, next fit can be forced to use almost twice as many bins as optimal.

Figure 10.22 Optimal packing for 0.5, 2/n, 0.5, 2/n, 0.5, 2/n, ...

Figure 10.23 Next fit packing for 0.5, 2/n, 0.5, 2/n, 0.5, 2/n, ...

First Fit

Although next fit has a reasonable performance guarantee, it performs poorly in practice, because it creates new bins when it does not need to. In the sample run, it could have placed the item of size 0.3 in either B_1 or B_2, rather than create a new bin. The first fit strategy is to scan the bins in order and place the new item in the first bin that is large enough to hold it. Thus, a new bin is created only when the results of previous placements have left no other alternative. Figure 10.24 shows the packing that results from first fit on our standard input.

A simple method of implementing first fit would process each item by scanning down the list of bins sequentially. This would take O(n^2).
It is possible to implement first fit to run in O(n log n); we leave this as an exercise.
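Both next fit and the simple quadratic first fit are short enough to sketch. The following is our own illustration (not the book's code); each function returns the number of bins used for items of size at most 1:

```c
#include <stdlib.h>

/* Next fit: only the most recently opened bin is ever reconsidered; O(n). */
int next_fit(const double *s, int n)
{
    int bins = 0;
    double room = 0.0;                    /* space left in the current bin */
    for (int i = 0; i < n; i++) {
        if (s[i] > room) {                /* does not fit: open a new bin */
            bins++;
            room = 1.0;
        }
        room -= s[i];
    }
    return bins;
}

/* First fit: scan existing bins in order; O(n^2) in this simple form. */
int first_fit(const double *s, int n)
{
    double *room = malloc(n * sizeof(double));  /* space left in each open bin */
    int bins = 0;
    for (int i = 0; i < n; i++) {
        int j = 0;
        while (j < bins && s[i] > room[j])      /* first bin large enough to hold it */
            j++;
        if (j == bins)                          /* none found: open a new bin */
            room[bins++] = 1.0;
        room[j] -= s[i];
    }
    free(room);
    return bins;
}
```

On the standard input 0.2, 0.5, 0.4, 0.7, 0.1, 0.3, 0.8, next fit uses five bins (Figure 10.21) while first fit uses four (Figure 10.24).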

A moment's thought will convince you that at any point, at most one bin can be more than half empty, since if a second bin were also half empty, its contents would fit into the first bin. Thus, we can immediately conclude that first fit guarantees a solution with at most twice the optimal number of bins.

Figure 10.24 First fit for 0.2, 0.5, 0.4, 0.7, 0.1, 0.3, 0.8

On the other hand, the bad case that we used in the proof of next fit's performance bound does not apply for first fit. Thus, one might wonder if a better bound can be proven. The answer is yes, but the proof is complicated.

THEOREM 10.3.

Let m be the optimal number of bins required to pack a list I of items. Then first fit never uses more than ⌈(17/10)m⌉ bins. There exist sequences such that first fit uses (17/10)(m - 1) bins.

PROOF:

See the references at the end of the chapter.

An example where first fit does almost as poorly as the previous theorem would indicate is shown in Figure 10.25. The input consists of 6m items of size 1/7 + ε, followed by 6m items of size 1/3 + ε, followed by 6m items of size 1/2 + ε. One simple packing places one item of each size in a bin and requires 6m bins. First fit requires 10m bins.

When first fit is run on a large number of items with sizes uniformly distributed between 0 and 1, empirical results show that first fit uses roughly 2 percent more bins than optimal. In many cases, this is quite acceptable.

Figure 10.25 A case where first fit uses 10m bins instead of 6m

Figure 10.26 Best fit for 0.2, 0.5, 0.4, 0.7, 0.1, 0.3, 0.8

Divide and Conquer

Another common technique used to design algorithms is divide and conquer. Divide and conquer algorithms consist of two parts:

Divide: Smaller problems are solved recursively (except, of course, base cases).

Conquer: The solution to the original problem is then formed from the solutions to the subproblems.

Traditionally, routines in which the text contains at least two recursive calls are called divide and conquer algorithms, while routines whose text contains only one recursive call are not. We generally insist that the subproblems be disjoint (that is, essentially nonoverlapping).

Let us review some of the recursive algorithms that have been covered in this text. We have already seen several divide and conquer algorithms. In Section 2.4.3, we saw an O(n log n) solution to the maximum subsequence sum problem. In Chapter 4, we saw linear-time tree traversal strategies. In Chapter 7, we saw the classic examples of divide and conquer, namely mergesort and quicksort, which have O(n log n) worst-case and average-case bounds, respectively.

We have also seen several examples of recursive algorithms that probably do not classify as divide and conquer, but merely reduce to a single simpler case. In Section 1.3, we saw a simple routine to print a number. In Chapter 2, we used recursion to perform efficient exponentiation. In Chapter 4, we examined simple search routines for binary search trees. In Section 6.6, we saw simple recursion used to merge leftist heaps. In Section 7.7, an algorithm was given for selection that takes linear average time. The disjoint set find operation was written recursively in Chapter 8. Chapter 9 showed routines to recover the shortest path in Dijkstra's algorithm and other procedures to perform depth-first search in graphs. None of these algorithms are really divide and conquer algorithms, because only one recursive call is performed.

We have also seen, in Section 2.4, a very bad recursive routine to compute the Fibonacci numbers. This could be called a divide and conquer algorithm, but it is terribly inefficient, because the problem really is not divided at all.
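As a reminder of the pattern, here is a minimal mergesort sketch (our own illustration; Chapter 7 develops the full version): divide the array into halves, sort each half recursively, and combine with an O(n) merge, giving T(n) = 2T(n/2) + O(n):

```c
#include <string.h>

/* Combine step: merge the sorted halves a[lo..mid] and a[mid+1..hi]. */
static void merge(int *a, int *tmp, int lo, int mid, int hi)
{
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (hi - lo + 1) * sizeof(int));
}

/* Divide step: two recursive calls on disjoint halves, then merge. */
void merge_sort(int *a, int *tmp, int lo, int hi)
{
    if (lo >= hi)                 /* base case: at most one element */
        return;
    int mid = (lo + hi) / 2;
    merge_sort(a, tmp, lo, mid);
    merge_sort(a, tmp, mid + 1, hi);
    merge(a, tmp, lo, mid, hi);
}
```

The two recursive calls on essentially nonoverlapping subproblems are what make this genuinely divide and conquer, in contrast with the single-call routines listed above.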
In this section, we will see more examples of the divide and conquer paradigm. Our first application is a problem in computational geometry. Given n points in a plane, we will show that the closest pair of points can be found in O(n log n) time. The exercises describe some other problems in computational geometry which can be solved by divide and conquer. The remainder of the section shows some extremely interesting, but mostly theoretical, results. We provide an algorithm which solves the selection problem in O(n) worst-case time. We also show that two n-bit numbers can be multiplied in o(n^2) operations and that two n × n matrices

can be multiplied in o(n^3) operations. Unfortunately, even though these algorithms have better worst-case bounds than the conventional algorithms, none are practical except for very large inputs.

Running Time of Divide and Conquer Algorithms

All the efficient divide and conquer algorithms we will see divide the problems into subproblems, each of which is some fraction of the original problem, and then perform some additional work to compute the final answer. As an example, we have seen that mergesort operates on two problems, each of which is half the size of the original, and then uses O(n) additional work. This yields the running time equation (with appropriate initial conditions)

    T(n) = 2T(n/2) + O(n)

We saw in Chapter 7 that the solution to this equation is O(n log n). The following theorem can be used to determine the running time of most divide and conquer algorithms.

THEOREM 10.6.

The solution to the equation T(n) = aT(n/b) + Θ(n^k), where a ≥ 1 and b > 1, is

    T(n) = O(n^(log_b a))   if a > b^k
    T(n) = O(n^k log n)     if a = b^k
    T(n) = O(n^k)           if a < b^k

PROOF:

Following the analysis of mergesort in Chapter 7, we will assume that n is a power of b; thus, let n = b^m. Then n/b = b^(m-1) and n^k = (b^m)^k = b^(mk) = b^(km) = (b^k)^m. Let us assume T(1) = 1, and ignore the constant factor in Θ(n^k). Then we have

    T(b^m) = aT(b^(m-1)) + (b^k)^m

If we divide through by a^m, we obtain the equation

    T(b^m)/a^m = T(b^(m-1))/a^(m-1) + (b^k/a)^m    (10.3)

We can apply this equation for other values of m, obtaining

    T(b^(m-1))/a^(m-1) = T(b^(m-2))/a^(m-2) + (b^k/a)^(m-1)    (10.4)

    T(b^(m-2))/a^(m-2) = T(b^(m-3))/a^(m-3) + (b^k/a)^(m-2)    (10.5)

    ...

    T(b^1)/a^1 = T(b^0)/a^0 + (b^k/a)^1    (10.6)

We use our standard trick of adding up the telescoping equations (10.3) through (10.6). Virtually all the terms on the left cancel the leading terms on the right, yielding

    T(b^m)/a^m = 1 + Σ_{i=1}^{m} (b^k/a)^i    (10.7)

    = Σ_{i=0}^{m} (b^k/a)^i    (10.8)

Thus

    T(n) = T(b^m) = a^m Σ_{i=0}^{m} (b^k/a)^i    (10.9)

If a > b^k, then the sum is a geometric series with ratio smaller than 1. Since the sum of an infinite series would converge to a constant, this finite sum is also bounded by a constant, and thus Equation (10.10) applies:

    T(n) = O(a^m) = O(a^(log_b n)) = O(n^(log_b a))    (10.10)

If a = b^k, then each term in the sum is 1. Since the sum contains 1 + log_b n terms and a = b^k implies that log_b a = k,

    T(n) = O(a^m log_b n) = O(n^(log_b a) log_b n) = O(n^k log_b n) = O(n^k log n)    (10.11)

Finally, if a < b^k, then the terms in the geometric series are larger than 1, and the sum is dominated by its last term. Applying the formula for the sum of a geometric series, we obtain

    T(n) = O(a^m (b^k/a)^m) = O((b^k)^m) = O(n^k)    (10.12)

proving the last case of the theorem.

As an example, mergesort has a = b = 2 and k = 1. The second case applies, giving the answer O(n log n). If we solve three problems, each of which is half the original size, and combine the solutions with O(n) additional work, then a = 3, b = 2 and k = 1. Case 1 applies here, giving a bound of O(n^(log_2 3)) = O(n^1.59). An algorithm that solved three half-sized problems, but required O(n^2) work to merge the solution, would have an O(n^2) running time, since the third case would apply.

There are two important cases that are not covered by Theorem 10.6. We state two more theorems, leaving the proofs as exercises. Theorem 10.7 generalizes the previous theorem.

THEOREM 10.7.

The solution to the equation T(n) = aT(n/b) + Θ(n^k log^p n), where a ≥ 1, b > 1, and p ≥ 0, is

    T(n) = O(n^(log_b a))       if a > b^k
    T(n) = O(n^k log^(p+1) n)   if a = b^k
    T(n) = O(n^k log^p n)       if a < b^k

THEOREM 10.8.

If Σ_{i=1}^{k} α_i < 1, then the solution to the equation T(n) = Σ_{i=1}^{k} T(α_i n) + O(n) is T(n) = O(n).

Closest-Points Problem

The input to our first problem is a list P of points in a plane. If p_1 = (x_1, y_1) and p_2 = (x_2, y_2), then the Euclidean distance between p_1 and p_2 is [(x_1 - x_2)^2 + (y_1 - y_2)^2]^(1/2). We are required to find the closest pair of points. It is possible that two points have the same position; in that case that pair is the closest, with distance zero.

If there are n points, then there are n(n - 1)/2 pairs of distances. We can check all of these, obtaining a very short program, but at the expense of an O(n^2) algorithm. Since this approach is just an exhaustive search, we should expect to do better.

Let us assume that the points have been sorted by x coordinate. At worst, this adds O(n log n) to the final time bound. Since we will show an O(n log n) bound for the entire algorithm, this sort is essentially free, from a complexity standpoint.

Figure 10.29 shows a small sample point set P. Since the points are sorted by x coordinate, we can draw an imaginary vertical line that partitions the point set into two halves, P_l and P_r. This is certainly simple to do. Now we have almost exactly the same situation as we saw in the maximum subsequence sum problem in Chapter 2. Either the closest points are both in P_l, or they are both in P_r, or one is in P_l and the other is in P_r. Let us call these distances d_l, d_r, and d_c. Figure 10.30 shows the partition of the point set and these three distances.

We can compute d_l and d_r recursively. The problem, then, is to compute d_c. Since we would like an O(n log n) solution, we must be able to compute d_c with only O(n) additional work. We have already seen that if a procedure consists of two half-sized recursive calls and O(n) additional work, then the total time will be O(n log n).

Let δ = min(d_l, d_r). The first observation is that we only need to compute d_c if d_c improves on δ. If d_c is such a distance, then the two points that define d_c must be within δ of the dividing line; we will refer to this area as a strip. As shown in Figure 10.31, this observation limits the number of points that need to be considered (in our case, δ = d_r).
There are two strategies that can be tried to compute d_c. For large point sets that are uniformly distributed, the number of points that are expected to be in the strip is very small. Indeed, it is easy to argue that only O(√n) points are in the strip on average. Thus, we could perform a brute force calculation on these points in O(n) time. The pseudocode in Figure 10.32 implements this strategy, assuming the C convention that the points are indexed starting at 0.

Figure 10.29 A small point set

Figure 10.30 P partitioned into P_l and P_r; shortest distances are shown

Figure 10.31 Two-lane strip, containing all points considered for the d_c strip

/* Points are all in the strip */
for( i = 0; i < NUM_POINTS_IN_STRIP; i++ )
    for( j = i+1; j < NUM_POINTS_IN_STRIP; j++ )
        if( dist( p[i], p[j] ) < delta )
            delta = dist( p[i], p[j] );

Figure 10.32 Brute force calculation of min(δ, d_c)

/* Points are all in the strip and sorted by y coordinate */
for( i = 0; i < NUM_POINTS_IN_STRIP; i++ )
    for( j = i+1; j < NUM_POINTS_IN_STRIP; j++ )
        if( p[i] and p[j]'s y coordinates differ by more than delta )
            break;    /* go to next p[i] */
        else
            if( dist( p[i], p[j] ) < delta )
                delta = dist( p[i], p[j] );

Figure 10.33 Refined calculation of min(δ, d_c)

In the worst case, all the points could be in the strip, so this strategy does not always work in linear time. We can improve this algorithm with the following observation: the y coordinates of the two points that define d_c can differ by at most δ. Otherwise, d_c > δ. Suppose that the points in the strip are sorted by their y coordinates. Therefore, if p_i's and p_j's y coordinates differ by more than δ, then we can proceed to p_{i+1}. This simple modification is implemented in Figure 10.33.

This extra test has a significant effect on the running time, because for each p_i only a few points p_j are examined before p_i's and p_j's y coordinates differ by more than δ and force an exit from the inner for loop. Figure 10.34 shows, for instance, that for point p_3, only the two points p_4 and p_5 lie in the strip within vertical distance δ.

Figure 10.34 Only p_4 and p_5 are considered in the second for loop

Figure 10.35 At most eight points fit in the rectangle; there are two coordinates shared by two points each

In the worst case, for any point p_i, at most 7 points p_j are considered. This is because these points must lie either in the δ by δ square in the left half of the strip or in the δ by δ square in the right half of the strip. On the other hand, all the points in each δ by δ square are separated by at least δ. In the worst case, each square contains four points, one at each corner. One of these points is p_i, leaving at most seven points to be considered. This worst-case situation is shown in Figure 10.35. Notice that even though p_l2 and p_r1 have the same coordinates, they could be different points. For the actual analysis, it is only important that the number of points in the δ by 2δ rectangle be O(1), and this much is certainly clear.

Because at most seven points are considered for each p_i, the time to compute a d_c that is better than δ is O(n). Thus, we appear to have an O(n log n) solution to the closest-points problem, based on the two half-sized recursive calls plus the linear extra work to combine the two results. However, we do not quite have an O(n log n) solution yet.

The problem is that we have assumed that a list of points sorted by y coordinate is available. If we perform this sort for each recursive call, then we have O(n log n) extra work: this gives an O(n log^2 n) algorithm. This is not all that bad, especially when compared to the brute force O(n^2). However, it is not hard to reduce the work for each recursive call to O(n), thus ensuring an O(n log n) algorithm.

We will maintain two lists. One is the point list sorted by x coordinate, and the other is the point list sorted by y coordinate. We will call these lists P and Q, respectively. These can be obtained by a preprocessing sorting step at cost O(n log n), which thus does not affect the time bound. P_l and Q_l are the lists passed to the left-half recursive call, and P_r and Q_r are the lists passed to the right-half recursive call. We have already seen that P is easily split in the middle. Once the dividing line is known, we step through Q sequentially, placing each element in Q_l or Q_r, as appropriate. It is easy to see that Q_l and Q_r will be automatically sorted by y coordinate. When the recursive calls return, we scan through the Q list and discard all the points whose x coordinates are not within the strip. Then Q contains only points in the strip, and these points are guaranteed to be sorted by their y coordinates. This strategy ensures that the entire algorithm is O(n log n), because only O(n) extra work is performed.

The Selection Problem

The selection problem requires us to find the kth smallest element in a list S of n elements. Of particular interest is the special case of finding the median. This occurs when k = ⌈n/2⌉. In Chapters 1, 6, and 7 we have seen several solutions to the selection problem.
The solution in Chapter 7 uses a variation of quicksort and runs in O(n) average time. Indeed, it is described in Hoare's original paper on quicksort. Although this algorithm runs in linear average time, it has a worst case of O(n^2). Selection can easily be solved in O(n log n) worst-case time by sorting the elements, but for a long time it was unknown whether or not selection could be accomplished in O(n) worst-case time. The quickselect algorithm outlined in Chapter 7 is quite efficient in practice, so this was mostly a question of theoretical interest.

Recall that the basic algorithm is a simple recursive strategy. Assuming that n is larger than the cutoff point where elements are simply sorted, an element v, known as the pivot, is chosen. The remaining elements are placed into two sets, S_1 and S_2. S_1 contains elements that are guaranteed to be no larger than v, and S_2 contains elements that are no smaller than v. Finally, if k ≤ |S_1|, then the kth smallest element in S can be found by recursively computing the kth smallest element in S_1. If k = |S_1| + 1, then the pivot is the kth smallest element. Otherwise, the kth smallest element in S is the (k - |S_1| - 1)st smallest element in S_2. The main difference between this algorithm and quicksort is that there is only one subproblem to solve instead of two.

In order to obtain a linear algorithm, we must ensure that the subproblem is only a fraction of the original and not merely only a few elements smaller than the original. Of course, we can always find such an element if we are willing to spend some time to do so. The difficult problem is that we cannot spend too much time finding the pivot. For quicksort, we saw that a good choice for pivot was to pick three elements and use their median. This gives some expectation that the pivot is not too bad, but does not provide a guarantee. We could choose 21 elements at random, sort them in constant time, use the 11th largest as pivot, and get a pivot that is even more likely to be good. However, if these 21 elements were the 21 largest, then the pivot would still be poor.
Extending this, we could use up to O(n / log n) elements, sort them using heapsort in O(n) total time, and be almost certain, from a statistical point of view, of obtaining a good pivot. In the worst case, however, this does not work because we might select the O(n / log n) largest elements, and then the pivot would be the [n - O(n / log n)]th largest element, which is not a constant fraction of n.

The basic idea is still useful. Indeed, we will see that we can use it to improve the expected number of comparisons that quickselect makes. To get a good worst case, however, the key idea is to use one more level of indirection. Instead of finding the median from a sample of random elements, we will find the median from a sample of medians. The basic pivot selection algorithm is as follows:

1. Arrange the n elements into ⌊n/5⌋ groups of 5 elements, ignoring the (at most four) extra elements.

2. Find the median of each group. This gives a list M of ⌊n/5⌋ medians.

3. Find the median of M. Return this as the pivot, v.

We will use the term median-of-median-of-five partitioning to describe the quickselect algorithm that uses the pivot selection rule given above. We will now show that median-of-median-of-five partitioning guarantees that each recursive subproblem is at most roughly 70 percent as large as the original. We will also show that the pivot can be computed quickly enough to guarantee an O(n) running time for the entire selection algorithm.

Let us assume for the moment that n is divisible by 5, so there are no extra elements. Suppose also that n/5 is odd, so that the set M contains an odd number of elements. This provides some symmetry, as we shall see. We are thus assuming, for convenience, that n is of the form 10k + 5. We will also assume that all the elements are distinct. The actual algorithm must make sure to handle the case where this is not true. Figure 10.36 shows how the pivot might be chosen when n = 45.

In Figure 10.36, v represents the element which is selected by the algorithm as pivot. Since v is the median of nine elements, and we are assuming that all elements are distinct, there must be four medians that are larger than v and four that are smaller. We denote these by L and S, respectively.
Consider a group of five elements with a large median (type L). The median of the group is smaller than two elements in the group and larger than two elements in the group. We will let H represent the huge elements. These are elements that are known to be larger than a large median. Similarly, T represents the tiny elements, which are smaller than a small median. There are 10 elements of type H: two are in each of the groups with an L type median, and two elements are in the same group as v. Similarly, there are 10 elements of type T.

Figure 10.36 How the pivot is chosen

Elements of type L or H are guaranteed to be larger than v, and elements of type S or T are guaranteed to be smaller than v. There are thus guaranteed to be 14 large and 14 small elements in our problem. Therefore, a recursive call could be on at most 45 - 14 - 1 = 30 elements.

Let us extend this analysis to general n of the form 10k + 5. In this case, there are k elements of type L and k elements of type S. There are 2k + 2 elements of type H, and also 2k + 2 elements of type T. Thus, there are 3k + 2 elements that are guaranteed to be larger than v and 3k + 2 elements that are guaranteed to be smaller. Thus, in this case, the recursive call can contain at most 7k + 2 < 0.7n elements. If n is not of the form 10k + 5, similar arguments can be made without affecting the basic result.

It remains to bound the running time to obtain the pivot element. There are two basic steps. We can find the median of five elements in constant time. For instance, it is not hard to sort five elements in eight comparisons. We must do this ⌊n/5⌋ times, so this step takes O(n) time. We must then compute the median of a group of ⌊n/5⌋ elements. The obvious way to do this is to sort the group
and return the element in the middle. But this takes O(n/5 log(n/5)) = O(n log n) time, so this does not work. The solution is to call the selection algorithm recursively on the ⌊n/5⌋ elements.

This completes the description of the basic algorithm. There are still some details that need to be filled in if an actual implementation is desired. For instance, duplicates must be handled correctly, and the algorithm needs a cutoff large enough to ensure that the recursive calls make progress. There is quite a large amount of overhead involved, and this algorithm is not practical at all, so we will not describe any more of the details that need to be considered. Even so, from a theoretical standpoint, the algorithm is a major breakthrough, because, as the following theorem shows, the running time is linear in the worst case.

THEOREM 10.9.

The running time of quickselect using median-of-median-of-five partitioning is O(n).

PROOF:

The algorithm consists of two recursive calls of size 0.7n and 0.2n, plus linear extra work. By Theorem 10.8, the running time is linear.

Reducing the Average Number of Comparisons

Divide and conquer can also be used to reduce the expected number of comparisons required by the selection algorithm. Let us look at a concrete example. Suppose we have a group S of 1,000 numbers and are looking for the 100th smallest number, which we will call x. We choose a subset S' of S consisting of 100 numbers. We would expect that the value of x is similar in size to the 10th smallest number in S'. More specifically, the fifth smallest number in S' is almost certainly less than x, and the 15th smallest number in S' is almost certainly greater than x.

More generally, a sample S' of s elements is chosen from the n elements. Let δ be some number, which we will choose later so as to minimize the average number of comparisons used by the procedure. We find the (ks/n - δ)th smallest element in S', which we call v_1, and the (ks/n + δ)th smallest element in S', which we call v_2.
Almost certainly, the kth smallest element in S will fall between v_1 and v_2, so we are left with a selection problem on 2δ elements. With low probability, the kth smallest element does not fall in this range, and we have considerable work to do. However, with a good choice of s and δ, we can ensure, by the laws of probability, that the second case does not adversely affect the total work.

If an analysis is performed, we find that if s = n^(2/3) log^(1/3) n and δ = n^(1/3) log^(2/3) n, then the expected number of comparisons is n + k + O(n^(2/3) log^(1/3) n), which is optimal except for the low-order term. (If k > n/2, we can consider the symmetric problem of finding the (n - k)th largest element.)

Most of the analysis is easy to do. The last term represents the cost of performing the two selections to determine v_1 and v_2. The average cost of the partitioning, assuming a reasonably clever strategy, is equal to n plus the expected rank of v_2 in S, which is n + k + O(nδ/s). If the kth element winds up in S', the cost of finishing the algorithm is equal to the cost of selection on S', namely O(s). If the kth smallest element doesn't wind up in S', the cost is O(n). However, s and δ have been chosen to guarantee that this happens with very low probability, o(1/n), so the expected cost of this possibility is o(1), which is a term that goes to zero as n gets large. An exact calculation is left as an exercise.

This analysis shows that finding the median requires about 1.5n comparisons on average. Of course, this algorithm requires some floating-point arithmetic to compute s, which can slow down the algorithm on some machines. Even so, experiments have shown that if correctly implemented, this algorithm compares favorably with the quickselect implementation in Chapter 7.

Theoretical Improvements for Arithmetic Problems

In this section we describe a divide and conquer algorithm that multiplies two n-digit numbers. Our previous model of computation assumed that multiplication was done in constant time, because the numbers were small. For large numbers, this assumption is no longer valid. If we measure multiplication in terms of the size of the numbers being multiplied, then the natural multiplication algorithm takes quadratic time. The divide and conquer algorithm runs in subquadratic time. We also present the classic divide and conquer algorithm that multiplies two n by n matrices in subcubic time.

Multiplying Integers

Suppose we want to multiply two n-digit numbers x and y. If exactly one of x and y is negative, then the answer is negative; otherwise it is positive. Thus, we can perform this check and then assume that x, y ≥ 0. The algorithm that almost everyone uses when multiplying by hand requires Θ(n^2) operations, because each digit in x is multiplied by each digit in y.

If x = 61,438,521 and y = 94,736,407, then xy = 5,820,464,730,934,047. Let us break x and y into two halves, consisting of the most significant and least significant digits, respectively. Then x_l = 6,143, x_r = 8,521, y_l = 9,473, and y_r = 6,407. We also have x = x_l 10^4 + x_r and y = y_l 10^4 + y_r. It follows that

xy = x_l y_l 10^8 + (x_l y_r + x_r y_l) 10^4 + x_r y_r

Notice that this equation consists of four multiplications, x_l y_l, x_l y_r, x_r y_l, and x_r y_r, which are each half the size of the original problem (n/2 digits). The multiplications by 10^8 and 10^4 amount to the placing of zeros. This and the subsequent additions add only O(n) additional work.
If we perform these four multiplications recursively using this algorithm, stopping at an appropriate base case, then we obtain the recurrence

T(n) = 4T(n/2) + O(n)

From Theorem 10.6, we see that T(n) = O(n^2), so, unfortunately, we have not improved the algorithm. To achieve a subquadratic algorithm, we must use fewer than four recursive calls. The key observation is that

x_l y_r + x_r y_l = (x_l - x_r)(y_r - y_l) + x_l y_l + x_r y_r

Thus, instead of using two multiplications to compute the coefficient of 10^4, we can use one multiplication, plus the result of two multiplications that have already been performed. Figure 10.37 shows how only three recursive subproblems need to be solved. It is easy to see that now the recurrence equation satisfies

T(n) = 3T(n/2) + O(n)

and so we obtain T(n) = O(n^(log_2 3)) = O(n^1.59). To complete the algorithm, we must have a base case, which can be solved without recursion.

Figure 10.37 The divide and conquer algorithm in action

When both numbers are one-digit, we can do the multiplication by table lookup. If one number has zero digits, then we return zero. In practice, if we were to use this algorithm, we would choose the base case to be that which is most convenient for the machine.
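The three-multiplication scheme can be sketched in C. This toy version is an assumption of this sketch, not the book's code: it works on operands that fit in a machine word, which defeats the purpose of the algorithm (the hardware could multiply them directly) but makes the recursive structure concrete. A real implementation would operate on arrays of digits.

```c
/* Divide and conquer multiplication with three half-sized products:
   xy = p1*10^(2h) + (p3 + p1 + p2)*10^h + p2, where
   p1 = xl*yl, p2 = xr*yr, p3 = (xl - xr)*(yr - yl). */
long long multiply(long long x, long long y) {
    if (x < 0) return -multiply(-x, y);       /* handle signs first */
    if (y < 0) return -multiply(x, -y);
    if (x < 10 || y < 10)
        return x * y;                         /* one-digit base case */

    /* split at half the digit count of the larger operand */
    int d = 1;
    long long t = (x > y ? x : y);
    while (t >= 10) { t /= 10; d++; }
    long long pow = 1;
    for (int i = 0; i < d / 2; i++) pow *= 10;

    long long xl = x / pow, xr = x % pow;     /* x = xl*10^h + xr */
    long long yl = y / pow, yr = y % pow;     /* y = yl*10^h + yr */

    long long p1 = multiply(xl, yl);
    long long p2 = multiply(xr, yr);
    long long p3 = multiply(xl - xr, yr - yl);

    return p1 * pow * pow + (p3 + p1 + p2) * pow + p2;
}
```

Note that p3 can involve negative operands even when x and y are positive, which is why the sign handling comes before the base case.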


More information

3D vector computer graphics

3D vector computer graphics 3D vector computer graphcs Paolo Varagnolo: freelance engneer Padova Aprl 2016 Prvate Practce ----------------------------------- 1. Introducton Vector 3D model representaton n computer graphcs requres

More information

Exercises (Part 4) Introduction to R UCLA/CCPR. John Fox, February 2005

Exercises (Part 4) Introduction to R UCLA/CCPR. John Fox, February 2005 Exercses (Part 4) Introducton to R UCLA/CCPR John Fox, February 2005 1. A challengng problem: Iterated weghted least squares (IWLS) s a standard method of fttng generalzed lnear models to data. As descrbed

More information

ON SOME ENTERTAINING APPLICATIONS OF THE CONCEPT OF SET IN COMPUTER SCIENCE COURSE

ON SOME ENTERTAINING APPLICATIONS OF THE CONCEPT OF SET IN COMPUTER SCIENCE COURSE Yordzhev K., Kostadnova H. Інформаційні технології в освіті ON SOME ENTERTAINING APPLICATIONS OF THE CONCEPT OF SET IN COMPUTER SCIENCE COURSE Yordzhev K., Kostadnova H. Some aspects of programmng educaton

More information

Chapter 6 Programmng the fnte element method Inow turn to the man subject of ths book: The mplementaton of the fnte element algorthm n computer programs. In order to make my dscusson as straghtforward

More information

Hierarchical clustering for gene expression data analysis

Hierarchical clustering for gene expression data analysis Herarchcal clusterng for gene expresson data analyss Gorgo Valentn e-mal: valentn@ds.unm.t Clusterng of Mcroarray Data. Clusterng of gene expresson profles (rows) => dscovery of co-regulated and functonally

More information

On Some Entertaining Applications of the Concept of Set in Computer Science Course

On Some Entertaining Applications of the Concept of Set in Computer Science Course On Some Entertanng Applcatons of the Concept of Set n Computer Scence Course Krasmr Yordzhev *, Hrstna Kostadnova ** * Assocate Professor Krasmr Yordzhev, Ph.D., Faculty of Mathematcs and Natural Scences,

More information

Load Balancing for Hex-Cell Interconnection Network

Load Balancing for Hex-Cell Interconnection Network Int. J. Communcatons, Network and System Scences,,, - Publshed Onlne Aprl n ScRes. http://www.scrp.org/journal/jcns http://dx.do.org/./jcns.. Load Balancng for Hex-Cell Interconnecton Network Saher Manaseer,

More information

Wishing you all a Total Quality New Year!

Wishing you all a Total Quality New Year! Total Qualty Management and Sx Sgma Post Graduate Program 214-15 Sesson 4 Vnay Kumar Kalakband Assstant Professor Operatons & Systems Area 1 Wshng you all a Total Qualty New Year! Hope you acheve Sx sgma

More information

Parallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016)

Parallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016) Technsche Unverstät München WSe 6/7 Insttut für Informatk Prof. Dr. Thomas Huckle Dpl.-Math. Benjamn Uekermann Parallel Numercs Exercse : Prevous Exam Questons Precondtonng & Iteratve Solvers (From 6)

More information

Sorting and Algorithm Analysis

Sorting and Algorithm Analysis Unt 7 Sortng and Algorthm Analyss Computer Scence S-111 Harvard Unversty Davd G. Sullvan, Ph.D. Sortng an Array of Integers 0 1 2 n-2 n-1 arr 15 7 36 40 12 Ground rules: sort the values n ncreasng order

More information

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,

More information

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices Internatonal Mathematcal Forum, Vol 7, 2012, no 52, 2549-2554 An Applcaton of the Dulmage-Mendelsohn Decomposton to Sparse Null Space Bases of Full Row Rank Matrces Mostafa Khorramzadeh Department of Mathematcal

More information

11. APPROXIMATION ALGORITHMS

11. APPROXIMATION ALGORITHMS Copng wth NP-completeness 11. APPROXIMATION ALGORITHMS load balancng center selecton prcng method: vertex cover LP roundng: vertex cover generalzed load balancng knapsack problem Q. Suppose I need to solve

More information

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision SLAM Summer School 2006 Practcal 2: SLAM usng Monocular Vson Javer Cvera, Unversty of Zaragoza Andrew J. Davson, Imperal College London J.M.M Montel, Unversty of Zaragoza. josemar@unzar.es, jcvera@unzar.es,

More information

Array transposition in CUDA shared memory

Array transposition in CUDA shared memory Array transposton n CUDA shared memory Mke Gles February 19, 2014 Abstract Ths short note s nspred by some code wrtten by Jeremy Appleyard for the transposton of data through shared memory. I had some

More information

Hermite Splines in Lie Groups as Products of Geodesics

Hermite Splines in Lie Groups as Products of Geodesics Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the

More information

Fast Computation of Shortest Path for Visiting Segments in the Plane

Fast Computation of Shortest Path for Visiting Segments in the Plane Send Orders for Reprnts to reprnts@benthamscence.ae 4 The Open Cybernetcs & Systemcs Journal, 04, 8, 4-9 Open Access Fast Computaton of Shortest Path for Vstng Segments n the Plane Ljuan Wang,, Bo Jang

More information

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points;

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points; Subspace clusterng Clusterng Fundamental to all clusterng technques s the choce of dstance measure between data ponts; D q ( ) ( ) 2 x x = x x, j k = 1 k jk Squared Eucldean dstance Assumpton: All features

More information

Harvard University CS 101 Fall 2005, Shimon Schocken. Assembler. Elements of Computing Systems 1 Assembler (Ch. 6)

Harvard University CS 101 Fall 2005, Shimon Schocken. Assembler. Elements of Computing Systems 1 Assembler (Ch. 6) Harvard Unversty CS 101 Fall 2005, Shmon Schocken Assembler Elements of Computng Systems 1 Assembler (Ch. 6) Why care about assemblers? Because Assemblers employ some nfty trcks Assemblers are the frst

More information

Searching & Sorting. Definitions of Search and Sort. Linear Search in C++ Linear Search. Week 11. index to the item, or -1 if not found.

Searching & Sorting. Definitions of Search and Sort. Linear Search in C++ Linear Search. Week 11. index to the item, or -1 if not found. Searchng & Sortng Wee 11 Gadds: 8, 19.6,19.8 CS 5301 Sprng 2014 Jll Seaman 1 Defntons of Search and Sort Search: fnd a gven tem n a lst, return the ndex to the tem, or -1 f not found. Sort: rearrange the

More information

A SYSTOLIC APPROACH TO LOOP PARTITIONING AND MAPPING INTO FIXED SIZE DISTRIBUTED MEMORY ARCHITECTURES

A SYSTOLIC APPROACH TO LOOP PARTITIONING AND MAPPING INTO FIXED SIZE DISTRIBUTED MEMORY ARCHITECTURES A SYSOLIC APPROACH O LOOP PARIIONING AND MAPPING INO FIXED SIZE DISRIBUED MEMORY ARCHIECURES Ioanns Drosts, Nektaros Kozrs, George Papakonstantnou and Panayots sanakas Natonal echncal Unversty of Athens

More information

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics Introducton G10 NAG Fortran Lbrary Chapter Introducton G10 Smoothng n Statstcs Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Smoothng Methods... 2 2.2 Smoothng Splnes and Regresson

More information

K-means and Hierarchical Clustering

K-means and Hierarchical Clustering Note to other teachers and users of these sldes. Andrew would be delghted f you found ths source materal useful n gvng your own lectures. Feel free to use these sldes verbatm, or to modfy them to ft your

More information

AMath 483/583 Lecture 21 May 13, Notes: Notes: Jacobi iteration. Notes: Jacobi with OpenMP coarse grain

AMath 483/583 Lecture 21 May 13, Notes: Notes: Jacobi iteration. Notes: Jacobi with OpenMP coarse grain AMath 483/583 Lecture 21 May 13, 2011 Today: OpenMP and MPI versons of Jacob teraton Gauss-Sedel and SOR teratve methods Next week: More MPI Debuggng and totalvew GPU computng Read: Class notes and references

More information

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields 17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 A mathematcal programmng approach to the analyss, desgn and

More information

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS ARPN Journal of Engneerng and Appled Scences 006-017 Asan Research Publshng Network (ARPN). All rghts reserved. NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS Igor Grgoryev, Svetlana

More information

Feature Reduction and Selection

Feature Reduction and Selection Feature Reducton and Selecton Dr. Shuang LIANG School of Software Engneerng TongJ Unversty Fall, 2012 Today s Topcs Introducton Problems of Dmensonalty Feature Reducton Statstc methods Prncpal Components

More information

GSLM Operations Research II Fall 13/14

GSLM Operations Research II Fall 13/14 GSLM 58 Operatons Research II Fall /4 6. Separable Programmng Consder a general NLP mn f(x) s.t. g j (x) b j j =. m. Defnton 6.. The NLP s a separable program f ts objectve functon and all constrants are

More information

Support Vector Machines

Support Vector Machines Support Vector Machnes Decson surface s a hyperplane (lne n 2D) n feature space (smlar to the Perceptron) Arguably, the most mportant recent dscovery n machne learnng In a nutshell: map the data to a predetermned

More information

Virtual Memory. Background. No. 10. Virtual Memory: concept. Logical Memory Space (review) Demand Paging(1) Virtual Memory

Virtual Memory. Background. No. 10. Virtual Memory: concept. Logical Memory Space (review) Demand Paging(1) Virtual Memory Background EECS. Operatng System Fundamentals No. Vrtual Memory Prof. Hu Jang Department of Electrcal Engneerng and Computer Scence, York Unversty Memory-management methods normally requres the entre process

More information

Improving Low Density Parity Check Codes Over the Erasure Channel. The Nelder Mead Downhill Simplex Method. Scott Stransky

Improving Low Density Parity Check Codes Over the Erasure Channel. The Nelder Mead Downhill Simplex Method. Scott Stransky Improvng Low Densty Party Check Codes Over the Erasure Channel The Nelder Mead Downhll Smplex Method Scott Stransky Programmng n conjuncton wth: Bors Cukalovc 18.413 Fnal Project Sprng 2004 Page 1 Abstract

More information

More on Sorting: Quick Sort and Heap Sort

More on Sorting: Quick Sort and Heap Sort More on Sortng: Quck Sort and Heap Sort Antono Carzanga Faculty of Informatcs Unversty of Lugano October 12, 2007 c 2006 Antono Carzanga 1 Another dvde-and-conuer sortng algorthm The heap Heap sort Outlne

More information

Performance Evaluation of Information Retrieval Systems

Performance Evaluation of Information Retrieval Systems Why System Evaluaton? Performance Evaluaton of Informaton Retreval Systems Many sldes n ths secton are adapted from Prof. Joydeep Ghosh (UT ECE) who n turn adapted them from Prof. Dk Lee (Unv. of Scence

More information

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr) Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute

More information

Sorting. Sorted Original. index. index

Sorting. Sorted Original. index. index 1 Unt 16 Sortng 2 Sortng Sortng requres us to move data around wthn an array Allows users to see and organze data more effcently Behnd the scenes t allows more effectve searchng of data There are MANY

More information

Analysis of Continuous Beams in General

Analysis of Continuous Beams in General Analyss of Contnuous Beams n General Contnuous beams consdered here are prsmatc, rgdly connected to each beam segment and supported at varous ponts along the beam. onts are selected at ponts of support,

More information

USING GRAPHING SKILLS

USING GRAPHING SKILLS Name: BOLOGY: Date: _ Class: USNG GRAPHNG SKLLS NTRODUCTON: Recorded data can be plotted on a graph. A graph s a pctoral representaton of nformaton recorded n a data table. t s used to show a relatonshp

More information

Dynamic Programming. Example - multi-stage graph. sink. source. Data Structures &Algorithms II

Dynamic Programming. Example - multi-stage graph. sink. source. Data Structures &Algorithms II Dynamc Programmng Example - mult-stage graph 1 source 9 7 3 2 2 3 4 5 7 11 4 11 8 2 2 1 6 7 8 4 6 3 5 6 5 9 10 11 2 4 5 12 snk Data Structures &Algorthms II A labeled, drected graph Vertces can be parttoned

More information

The Codesign Challenge

The Codesign Challenge ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.

More information

TN348: Openlab Module - Colocalization

TN348: Openlab Module - Colocalization TN348: Openlab Module - Colocalzaton Topc The Colocalzaton module provdes the faclty to vsualze and quantfy colocalzaton between pars of mages. The Colocalzaton wndow contans a prevew of the two mages

More information

AP PHYSICS B 2008 SCORING GUIDELINES

AP PHYSICS B 2008 SCORING GUIDELINES AP PHYSICS B 2008 SCORING GUIDELINES General Notes About 2008 AP Physcs Scorng Gudelnes 1. The solutons contan the most common method of solvng the free-response questons and the allocaton of ponts for

More information

Reducing Frame Rate for Object Tracking

Reducing Frame Rate for Object Tracking Reducng Frame Rate for Object Trackng Pavel Korshunov 1 and We Tsang Oo 2 1 Natonal Unversty of Sngapore, Sngapore 11977, pavelkor@comp.nus.edu.sg 2 Natonal Unversty of Sngapore, Sngapore 11977, oowt@comp.nus.edu.sg

More information

Lecture 3: Computer Arithmetic: Multiplication and Division

Lecture 3: Computer Arithmetic: Multiplication and Division 8-447 Lecture 3: Computer Arthmetc: Multplcaton and Dvson James C. Hoe Dept of ECE, CMU January 26, 29 S 9 L3- Announcements: Handout survey due Lab partner?? Read P&H Ch 3 Read IEEE 754-985 Handouts:

More information

CHARUTAR VIDYA MANDAL S SEMCOM Vallabh Vidyanagar

CHARUTAR VIDYA MANDAL S SEMCOM Vallabh Vidyanagar CHARUTAR VIDYA MANDAL S SEMCOM Vallabh Vdyanagar Faculty Name: Am D. Trved Class: SYBCA Subject: US03CBCA03 (Advanced Data & Fle Structure) *UNIT 1 (ARRAYS AND TREES) **INTRODUCTION TO ARRAYS If we want

More information

Conditional Speculative Decimal Addition*

Conditional Speculative Decimal Addition* Condtonal Speculatve Decmal Addton Alvaro Vazquez and Elsardo Antelo Dep. of Electronc and Computer Engneerng Unv. of Santago de Compostela, Span Ths work was supported n part by Xunta de Galca under grant

More information

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces Range mages For many structured lght scanners, the range data forms a hghly regular pattern known as a range mage. he samplng pattern s determned by the specfc scanner. Range mage regstraton 1 Examples

More information

Report on On-line Graph Coloring

Report on On-line Graph Coloring 2003 Fall Semester Comp 670K Onlne Algorthm Report on LO Yuet Me (00086365) cndylo@ust.hk Abstract Onlne algorthm deals wth data that has no future nformaton. Lots of examples demonstrate that onlne algorthm

More information

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for

More information

Reading. 14. Subdivision curves. Recommended:

Reading. 14. Subdivision curves. Recommended: eadng ecommended: Stollntz, Deose, and Salesn. Wavelets for Computer Graphcs: heory and Applcatons, 996, secton 6.-6., A.5. 4. Subdvson curves Note: there s an error n Stollntz, et al., secton A.5. Equaton

More information

Assembler. Shimon Schocken. Spring Elements of Computing Systems 1 Assembler (Ch. 6) Compiler. abstract interface.

Assembler. Shimon Schocken. Spring Elements of Computing Systems 1 Assembler (Ch. 6) Compiler. abstract interface. IDC Herzlya Shmon Schocken Assembler Shmon Schocken Sprng 2005 Elements of Computng Systems 1 Assembler (Ch. 6) Where we are at: Human Thought Abstract desgn Chapters 9, 12 abstract nterface H.L. Language

More information

Circuit Analysis I (ENGR 2405) Chapter 3 Method of Analysis Nodal(KCL) and Mesh(KVL)

Circuit Analysis I (ENGR 2405) Chapter 3 Method of Analysis Nodal(KCL) and Mesh(KVL) Crcut Analyss I (ENG 405) Chapter Method of Analyss Nodal(KCL) and Mesh(KVL) Nodal Analyss If nstead of focusng on the oltages of the crcut elements, one looks at the oltages at the nodes of the crcut,

More information

Lecture #15 Lecture Notes

Lecture #15 Lecture Notes Lecture #15 Lecture Notes The ocean water column s very much a 3-D spatal entt and we need to represent that structure n an economcal way to deal wth t n calculatons. We wll dscuss one way to do so, emprcal

More information

c 2009 Society for Industrial and Applied Mathematics

c 2009 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 31, No. 3, pp. 1382 1411 c 2009 Socety for Industral and Appled Mathematcs SUPERFAST MULTIFRONTAL METHOD FOR LARGE STRUCTURED LINEAR SYSTEMS OF EQUATIONS JIANLIN XIA, SHIVKUMAR

More information

Notes on Organizing Java Code: Packages, Visibility, and Scope

Notes on Organizing Java Code: Packages, Visibility, and Scope Notes on Organzng Java Code: Packages, Vsblty, and Scope CS 112 Wayne Snyder Java programmng n large measure s a process of defnng enttes (.e., packages, classes, methods, or felds) by name and then usng

More information

CHAPTER 2 PROPOSED IMPROVED PARTICLE SWARM OPTIMIZATION

CHAPTER 2 PROPOSED IMPROVED PARTICLE SWARM OPTIMIZATION 24 CHAPTER 2 PROPOSED IMPROVED PARTICLE SWARM OPTIMIZATION The present chapter proposes an IPSO approach for multprocessor task schedulng problem wth two classfcatons, namely, statc ndependent tasks and

More information

Life Tables (Times) Summary. Sample StatFolio: lifetable times.sgp

Life Tables (Times) Summary. Sample StatFolio: lifetable times.sgp Lfe Tables (Tmes) Summary... 1 Data Input... 2 Analyss Summary... 3 Survval Functon... 5 Log Survval Functon... 6 Cumulatve Hazard Functon... 7 Percentles... 7 Group Comparsons... 8 Summary The Lfe Tables

More information

Solving two-person zero-sum game by Matlab

Solving two-person zero-sum game by Matlab Appled Mechancs and Materals Onlne: 2011-02-02 ISSN: 1662-7482, Vols. 50-51, pp 262-265 do:10.4028/www.scentfc.net/amm.50-51.262 2011 Trans Tech Publcatons, Swtzerland Solvng two-person zero-sum game by

More information