Run-Time Operator State Spilling for Memory Intensive Long-Running Queries


Bin Liu, Yali Zhu, and Elke A. Rundensteiner
Department of Computer Science, Worcester Polytechnic Institute, Worcester, Massachusetts, USA
{binliu, yaliz, ...}

ABSTRACT

Main memory is a critical resource when processing long-running queries over data streams with state intensive operators. In this work, we investigate state spill strategies that handle run-time memory shortage when processing such complex queries by selectively pushing operator states into disks. Unlike previous solutions, which all focus on a single operator only, we instead target queries with multiple state intensive operators. We observe an interdependency among multiple operators in the query plan when spilling operator states, and we illustrate that existing strategies, which do not take account of this interdependency, become largely ineffective in this query context. Clearly, a consolidated plan-level spill strategy must be devised to address this problem. Several data spill strategies are proposed in this paper to maximize the run-time query throughput in memory constrained environments. The bottom-up state spill strategy is an operator-level strategy that treats all data in one operator state equally. More sophisticated partition-level data spill strategies are then proposed to take different characteristics of the input data into account, including the local output, the global output, and the global output with penalty strategies. All proposed state spill strategies have been implemented in the D-CAPE query system. The experimental results confirm the effectiveness of our proposed strategies. In particular, the global output strategy and the global output with penalty strategy show favorable results as compared to the other two more localized strategies.

(This work was partly supported by the National Science Foundation under an IIS grant. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SIGMOD 2006, June 27-29, 2006, Chicago, Illinois, USA. Copyright 2006 ACM.)

1. INTRODUCTION

Processing long-running queries over real-time data has gained great attention in recent years [, 3, 6, 4]. Unlike static queries in a traditional database system, such a query evaluates streaming data that is continuously arriving and produces query results in a real-time fashion. The stringent requirement of generating real-time results demands efficient main memory based query processing. Therefore long-running queries, especially complex queries with multiple potentially very large operator states such as multi-joins [], can be extremely memory intensive during their execution.

Memory intensive queries with multiple stateful operators are, for instance, common in data integration or data warehousing environments. For example, a real-time data integration system helps financial analysts in making timely decisions. At run time, stock prices, volumes and external reviews are continuously sent to the integration server. The integration server must join these input streams as fast as possible to produce early output results to the decision support system. This ensures that financial analysts can analyze and make instantaneous decisions based on the most up-to-date information.
When a query system does not have enough resources to keep up with the query workload at runtime, techniques such as load shedding [9] can be applied to discard some workload from the system. However, in many cases, long-running queries may need to produce complete result sets even though the query system does not have sufficient resources for the query workload at runtime. As an example, decision support applications rely on complete results to eventually apply complex and long-ranging historic data analysis, i.e., quantitative analysis. Thus, techniques such as load shedding [9] are not applicable for such applications.

One viable solution to the problem of run-time main memory shortage that still satisfies the need for complete query results is to push memory resident states temporarily onto disk when memory overflow occurs. Such solutions have been discussed in XJoin [0], Hash-Merge Join [5] and MJoin []. These solutions aim to ensure a high runtime output rate as well as the completeness of query results for a query that contains a single operator. The processing of the disk resident states, referred to as state cleanup, is delayed until a later time when more resources become available. We refer to this pushing and cleaning process as state spill adaptation.

However, the state spill strategies in the current literature are all designed for queries with one single stateful operator only [5, 0, ]. We now point out that for a query with multiple state intensive operators, data spilling from one operator can affect other operators in the same pipeline. Such interdependency among operators in the same dataflow pipeline must be considered if the goal of the runtime data spilling is to ensure a high output rate of the whole query plan. This poses new challenges on the state spill techniques which the existing strategies, such as XJoin [0] and Hash-Merge Join [5], cannot cope with.

As an example of the problem considered, Figure 1 shows two stateful operators OP_i and OP_j with the output of OP_i directly feeding into OP_j. If we apply the existing state spill strategies on both operators separately, the interdependency between the two operators can cause problems not solved by these strategies. First, the data spill strategies would aim to maximize the output rate of OP_i when spilling states from OP_i. However, this could in fact backfire, since it would in turn increase the main memory consumption of OP_j. Secondly, the states spilled in OP_i may have had the potential to make a high contribution to the output of OP_j. Since they are spilled in OP_i, this may produce the opposite of the intended effect, that is, it may reduce instead of increase the output rate of OP_j. This contradicts the goal of the data spill strategies applied on OP_j.

Figure 1: A Chain of Stateful Operators

In this work, we propose effective runtime data spill strategies for queries with multiple inter-dependent state intensive operators. The main research question addressed in this work is how to choose which part of the operator states of a query to spill at run time to avoid memory overflow while maximizing the overall query throughput. Another important question addressed is how to efficiently clean up disk-resident data to guarantee completeness of query results. We focus on applications that need accurate query results. Thus, all input tuples have to be processed either in real time during the execution stage or later during the state clean-up phase.

Several data spill strategies are proposed in this paper. We first discuss the bottom-up state spill strategy, which is an operator-level strategy that treats all data in one operator state equally. We then propose more sophisticated partition-level data spill strategies that take different characteristics of the input data into account, including a localized strategy called local output, and two global throughput-oriented state spilling strategies, named global output and global output with penalty. All proposed data spill strategies aim to select appropriate portions of the operator states to spill in order to maximize the run-time query throughput. We also propose efficient clean-up algorithms to generate the complete query results from the disk-resident data. Furthermore, we show how to extend the proposed data spill strategies to apply them in a parallel processing environment.

For long-running queries with high stream input rates and thus a monotonic increase of operator states, the state cleanup process may be performed only after the run-time execution phase finishes. In this paper we focus on this case. For queries with window constraints and bursty input streams, the in-memory execution and the disk clean-up may need to be interleaved at runtime. New issues in this scenario include the timing of spill, the timing of clean-up, and the selection of data to clean up. We plan to address these issues in our future work.

The proposed state spill strategies and clean-up algorithms have all been implemented in the D-CAPE continuous query system [3]. The experimental results confirm the effectiveness of our proposed strategies. In particular, the global output strategy and the global output with penalty strategy show more favorable results as compared to the other two more localized strategies.

The remainder of the paper is organized as follows. Section 2 discusses basic concepts that are necessary for later sections. Section 3 defines the problem of throughput-oriented data spilling addressed in this paper and presents and analyzes the proposed state spilling strategies. Section 4 discusses the clean-up algorithms. In Section 5, we show how to apply the data spilling strategies in a parallel processing environment. Performance evaluations are presented in Section 6. Section 7 discusses related work, and we conclude in Section 8.
2. PRELIMINARIES

2.1 State Partitions and Partition Groups

Operators in continuous long-running queries are required to be non-blocking. Thus many operators need states. For example, a join operator needs states to store the tuples that have been processed so far, so as to join them with future incoming tuples from the other streams. In case of high stream arrival rates and a long running time, the states in an operator can become huge. Spilling one of these large states in its entirety to disk at times of memory overflow can be rather inefficient, and possibly even unnecessary. In many cases, we need the flexibility to spill part of a state, or to spill data from several states, to disk to temporarily reduce the query workload in terms of memory.

To facilitate this flexibility in run-time adaptation, we can divide each input stream into a large number of partitions. This enables us to effectively spill some partitions in a state without affecting other partitions in the same state or partitions in other operator states. This method has first been found to be effective in the early data skew handling literature, such as [9], as well as in recent work on partitioned continuous query processing, such as Flux [8].

By using the above stream partitioning method, we can organize operator states based on the input partitions. Each input partition is identified by a unique partition ID. Thus each tuple within an operator state belongs to exactly one of these input partitions and is associated with that particular partition ID. For simplicity, we also use the term partition to refer to the corresponding operator state partition. The input streams should be partitioned such that each query result can be generated from tuples within the same partition, i.e., with the same partition ID. In this way, we can simply choose appropriate partitions to spill at run time, while avoiding repartitioning during this adaptation process.

Figure 2 depicts the stream partitioning for a join query A ⋈ B ⋈ C. The join is defined as A.A1 = B.B1 = C.C1, where A, B, and C denote input streams (join relations) and A1, B1, and C1 are the corresponding join columns. Here, the Split_A operator partitions the stream A based on the value of column A1, while the Split_B operator partitions the stream B based on B1, and so on. As we can see, in order to generate a final query result, tuples from stream A with partition ID i only need to join with tuples with the same partition ID i from streams B and C.

Figure 2: Example of Partitioned Inputs

(For m-way joins (m > 2) [] with join conditions defined on different columns, more data structures are required to support partitioned m-way join processing. The discussion of this is out of the scope of the paper, since we focus on the aspect of run-time state adaptation in this work.)
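To make the partitioning scheme concrete, the following small Python sketch (our own illustration; the actual split operators of D-CAPE are not shown in the paper) hash-partitions tuples on their join column, so that matching tuples from A, B, and C always receive the same partition ID. The column names and the partition count are assumptions; 300 partitions matches the experimental setup in Section 6.

# Illustrative sketch of a split operator's partition function (not the paper's code).
NUM_PARTITIONS = 300  # assumed here; Section 6 also uses 300 partitions

def partition_id(join_value, num_partitions=NUM_PARTITIONS):
    # Map a join-column value to a partition ID, identically for every input stream.
    return hash(join_value) % num_partitions

def split(tup, join_column):
    # Split operator: route a tuple to the partition group of its join-column value.
    return partition_id(tup[join_column])

# Tuples of A, B and C that agree on A1/B1/C1 land in the same partition group,
# so every query result can be produced from tuples within one partition group.
a = {"A1": 42, "A2": "x"}
b = {"B1": 42, "B2": "y"}
c = {"C1": 42, "C2": "z"}
assert split(a, "A1") == split(b, "B1") == split(c, "C1")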

When spilling operator states, we could choose partitions from each input separately, as shown in Figure 3(a). Using this strategy requires us to keep track of the timestamps of when each of these partitions was spilled to disk, and the timestamps of each tuple, in order to avoid duplicates or missing results in the cleanup process. For example, suppose partition A1 has been spilled to disk at time t1. We use A1^1 to denote this part of partition A1. All the tuples from B1 and C1 with a timestamp greater than t1 have to eventually join with A1^1 in the cleanup process. Since A1, B1, and C1 could be spilled more than once, the cleanup needs to be carefully synchronized with the timestamps of the input tuples and the timestamps of the partitions being spilled.

An alternative strategy is to use a partition group as the smallest unit of adaptation. As illustrated in Figure 3(b), a partition group contains the partitions with the same partition ID from all inputs. During our research, we found that using the granularity of a partition group can simplify the cleanup process (described in Section 4). Therefore, in our work we choose the notion of a partition group as the smallest unit to spill to disk. From now on, we use the term partition to refer to a partition group if the context is clear.

Since a query plan can contain multiple joins, the partition groups here are defined for each individual operator in the plan. Different operators may generate a tuple's partition ID based on different columns of that tuple. This arises when the join predicates are non-transitive. Therefore a tuple may hold different partition IDs in different operators.

Figure 3: Composing Partition Groups. (a) Select partitions from one individual input; (b) select partitions from all inputs with the same ID.

As an additional bonus, the approach of partitioning input streams (operator states) naturally facilitates efficient partitioned parallel query processing [0, 8]. That is, we can send non-overlapping partitions to multiple machines and have the query processed in parallel. The query processing can then proceed respectively on each machine. This will be further discussed in Section 5.

2.2 Calculating State Size

Serving as the basis for the following sections, we now describe how to calculate the operator state size and the state size of the query tree. The operator state size can be estimated based on the average size of each tuple and the total number of tuples in the operator. The total state size of the query tree is equal to the sum of all the operator state sizes. For example, the state size of Join_1 (see Figure 4) can be estimated by S_1 = u_a*s_a + u_b*s_b + u_c*s_c. Here, s_a, s_b, and s_c denote the number of tuples in Join_1 from input streams A, B and C respectively, and u_a, u_b, and u_c represent the average sizes of input tuples from the corresponding input streams. In Figure 4, I_1 and I_2 denote the intermediate results from Join_1 and Join_2 respectively. Note that the average tuple size of I_1 can be represented by u_a + u_b + u_c, while the average tuple size of I_2 can be denoted by u_a + u_b + u_c + u_d if no projection is applied in the query plan. This simple model can be naturally extended to situations where projections do exist.

The size of the operator states to be spilled during the spill process can be computed in a similar manner. For example, assume d_a tuples from A, d_b tuples from B, and d_c tuples from C are to be spilled. Then the spilled state size can be represented by D_1 = u_a*d_a + u_b*d_b + u_c*d_c.

Figure 4: Unit Size of Each Stateful Operator

Thus, the total percentage of states spilled for the query tree can be computed as the sum of the state sizes being spilled divided by the total state size. For the query tree depicted in Figure 4, it is (D_1 + D_2 + D_3)/(S_1 + S_2 + S_3), where S_i represents the total state size of operator Join_i, and D_i denotes the operator states being spilled from Join_i (1 ≤ i ≤ 3).
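The state-size model above is simple to compute at run time. The sketch below (our own Python illustration with made-up numbers, not code from D-CAPE) evaluates S_1 = u_a*s_a + u_b*s_b + u_c*s_c, the spilled size D_1, and the spilled fraction for a query tree.

def state_size(avg_tuple_sizes, tuple_counts):
    # S = sum_i u_i * s_i, e.g. S_1 = u_a*s_a + u_b*s_b + u_c*s_c
    return sum(u * s for u, s in zip(avg_tuple_sizes, tuple_counts))

def spill_fraction(spilled_sizes, total_sizes):
    # (D_1 + ... + D_k) / (S_1 + ... + S_k) over all stateful operators
    return sum(spilled_sizes) / sum(total_sizes)

# Join_1 over streams A, B, C: average tuple sizes u and current tuple counts s.
u = [64, 80, 72]              # bytes per tuple (made-up values)
s = [10000, 12000, 9000]      # tuples currently held per input
d = [2000, 1500, 1000]        # tuples selected for spilling per input

S1 = state_size(u, s)
D1 = state_size(u, d)         # same formula applied to the spilled tuples
print(S1, D1, spill_fraction([D1], [S1]))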
3. THROUGHPUT-ORIENTED STATE SPILL STRATEGIES

As discussed in Section 1, our goal is to keep the runtime throughput of the query plan as high as possible while at the same time preventing the system from memory overflow by applying runtime data spilling when necessary. Given multiple stateful operators in a query tree, partitions from all operators can be considered as potential candidates to be pushed when main memory overflows. We now discuss various strategies to choose partition groups to spill from multiple stateful operators.

State spill strategies have been investigated in the literature [5, 0, ] to choose partitions from one single stateful operator to spill to disk with the least effect on the overall throughput. However, as discussed in Section 1, the existing strategies are not sufficient for a query tree with multiple stateful operators, because they do not consider the interdependencies among a chain of stateful operators in a dataflow pipeline. As we illustrate below, a direct extension of the existing strategies for one single operator does not perform well when applied to multiple stateful operators.

The decision of finding partitions to spill can be made at the operator level or at the partition level. Selecting partitions at the operator level means that we first choose which operator to spill partitions from and then spill partitions from this operator until the desired amount of data is pushed to disk. If the size of the chosen operator state is smaller than the desired spill amount, we choose the next operator to spill partitions from. In other words, by using operator-level state spill, all partitions inside one operator state are treated uniformly and have equal chances of being spilled to disk. The state spilling can also be done at the partition level, which treats each partition as an individual unit and globally chooses which partitions to spill without considering which operators these partitions belong to. In this section, we present state spill strategies at both the operator level and the partition level.

We first investigate the impact of pushing operator states to disk in a chain of operators. Figure 5 illustrates an example of an operator chain. Each operator in the chain represents a state intensive operator in a query tree. Note that it does not have to be a single-input operator as depicted in the figure. s_i represents the corresponding selectivity of operator OP_i (1 ≤ i ≤ n).

Figure 5: An Operator Chain

For such an operator chain, Equation (1) estimates the possible number of output tuples from OP_n given a set of t input tuples to OP_1:

    u = (s_1 * s_2 * ... * s_n) * t        (1)

The total number of tuples that will be stored somewhere within this chain due to these t input tuples, which also corresponds to the increase in the operator state size, can be computed as in Equation (2). (Here we assume that all input tuples to stateful join operators have to be stored in operator states; in principle, other stateful operators can be addressed in a similar manner.)

    I = Σ_{i=1..n} (s_1 * s_2 * ... * s_{i-1}) * t
      = t + t*s_1 + t*s_1*s_2 + ... + t*s_1*s_2*...*s_{n-1}        (2)

More precisely, OP_1 stores t tuples, OP_2 stores t*s_1 tuples, OP_3 stores t*s_1*s_2 tuples, and so on. Thus, if we spill t tuples at OP_1, then all the corresponding intermediate results that would have been generated due to the existence of these t tuples and stored in OP_2, OP_3, ..., OP_n no longer exist. Note that spilling any of these intermediate results would have the same overall effect on the final output, i.e., spilling the t*s_1 tuples at OP_2 would decrease the final output by the same amount as spilling t tuples at operator OP_1, as estimated by Equation (1).
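Equations (1) and (2) can be checked numerically with a few lines of Python; the selectivities below are hypothetical, and the functions simply follow the two formulas.

from math import prod

def chain_output(t, selectivities):
    # Equation (1): u = s_1 * s_2 * ... * s_n * t
    return t * prod(selectivities)

def chain_state_increase(t, selectivities):
    # Equation (2): I = t + t*s_1 + t*s_1*s_2 + ... + t*s_1*...*s_{n-1}
    total, forwarded = 0, t
    for s in selectivities:
        total += forwarded      # tuples stored at this operator
        forwarded *= s          # tuples passed on to the next operator
    return total

# Hypothetical chain OP_1..OP_3 with s_1 = 2, s_2 = 0.5, s_3 = 3 and t = 100.
print(chain_output(100, [2, 0.5, 3]))          # 300 final output tuples
print(chain_state_increase(100, [2, 0.5, 3]))  # 100 + 200 + 100 = 400 stored tuples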
3.1 Operator-Level State Spill

3.1.1 Bottom-up Pushing Strategy

Inspired by the above analysis, we now propose a naive strategy, referred to as bottom-up pushing, to spill operator states of a query tree with multiple stateful operators at the operator level. This strategy always chooses operator states from the bottom operator(s) in the query tree until enough space has been saved in memory. For example, in Figure 5, the bottom operator is OP_1. Partition groups from bottom operators are chosen randomly and have equal chances of being chosen. Intuitively, if partition groups from the bottom operator are pushed to disk, fewer intermediate results are stored in the query tree compared to pushing states of the other operators. Thus, the bottom-up pushing strategy has the potential to lead to a smaller number of state spill processes, because fewer states (intermediate results) are expected to accumulate in the query tree.

However, having a smaller number of state spill processes does not naturally result in a high overall throughput. This is because (1) the states being pushed in the bottom operator may contribute to a high output rate in its downstream operators, and (2) the cost of each state spill process may not be high, so having a large number of state spill processes may not incur significant overhead on the query processing.

Figure 6: A Chain of Partitioned Operators

Moreover, the output of a particular partition of the bottom operator is likely to be sent into multiple different partitions of the downstream operator(s). For example, as illustrated in Figure 6, assume the t input tuples to OP_1 are partitioned into partition group P_1^1. Here the superscript represents the operator ID, while the subscript denotes the partition ID. After the processing in OP_1, t_1 result tuples are output and partitioned into P_1^2 of OP_2, while t_2 tuples are partitioned into P_2^2 of OP_2. The partitions P_1^2 and P_2^2 of OP_2 may have very different selectivities. For example, the output of P_1^2 may be much larger than that of P_2^2, while the sizes of these two partitions may be similar. Thus, it may be worthwhile to keep P_1^1 in OP_1 even though certain states (in P_2^2 of OP_2) will accumulate at the same time.

3.1.2 Discussions on Operator-Level State Spill

As we can see, the relationship between partitions of adjacent operators is a many-to-many relationship. Pushing partition groups at any operator other than the root operators may affect multiple partition groups at its downstream operators. However, an operator-level strategy, such as the presented bottom-up strategy, does not have a clear connection between the partition pushing and its effects on the overall throughput.

Another general drawback of operator-level spilling is that it treats all partitions in the same state as having the same characteristics and the same effects on query performance when considering data spilling. However, different partitions may have different effects on the memory consumption and the query throughput after the data spilling. For example, some tuples have data values that appear more often in the stream, so they may have higher chances to join with other tuples and produce more results. Thus we may need to make decisions on where to spill data at a finer granularity.

3.2 Partition-Level State Spill

To design a better state spilling strategy, we propose to globally select partition groups in the query tree as candidates to push. Figure 7 illustrates the basic idea of this approach. Instead of pushing partitions from particular operator(s) only, we conceptually view partitions from different operators at the same level. That is, we choose partitions globally at the query level based on certain cost statistics collected about each partition. The basic statistics we collect for each partition group are P_output and P_size. P_output indicates the total number of tuples that have been output from the partition group, and P_size refers to the operator state size of the partition group. These two values together can be utilized to identify the productivity of the partition group. We now describe three different strategies for how to collect the P_output and P_size values of each partition group, and how partition groups can be chosen based on these values with the most positive impact on the run-time throughput.

Figure 7: Globally Choose Partition Groups

3.2.1 Local Output Strategy

The first proposed partition-level state spill strategy, referred to as local output, updates the P_output and P_size values of each partition group locally at each operator. The P_size of each partition group is updated whenever input tuples are inserted into the partition group, while the P_output value is updated whenever output tuples are generated from the operator. Figure 8 illustrates this localized approach. When t tuples are input to Join_1, we update P_size of the corresponding partition groups in Join_1. When t_1 tuples are generated from Join_1, then the P_output value of the corresponding partition groups in Join_1 and the P_size value of the related partition groups in Join_2 are updated. Similarly, if we get t_2 tuples from Join_2, then P_output of the corresponding partition groups in Join_2 and P_size in Join_3 are updated.

Figure 8: A Localized Statistics Approach

Different from the previous operator-level state spill, when selecting partitions to spill, this strategy chooses from the set of all partitions across all operators in the query plan based on their productivity values (P_output/P_size). Hence this is a partition-level state spill strategy. We push the partition group with the smallest productivity value among all partition groups in the query plan.

However, this approach does not provide a global productivity view of the partition groups. For example, if we keep partition groups of Join_1 with high productivity values in main memory, this in turn contributes to generating more output tuples to be input to Join_2. All these tuples will be stored in Join_2 and hence will increase the main memory consumption of Join_2. This may cause the main memory to be filled up quickly. Moreover, these intermediate results may not necessarily help the overall throughput, since they may be dropped by the downstream operators.
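The selection step shared by all three partition-level strategies can be sketched as follows (our own Python illustration; the class and method names are assumptions, and the greedy choice is one reasonable reading of "push the partition group with the smallest productivity value"). Only the definition of P_output changes between the local output, global output, and global output with penalty variants.

from dataclasses import dataclass

@dataclass
class PartitionGroup:
    op_id: int        # operator owning this group
    part_id: int      # partition ID
    p_output: int     # tuples output (locally or globally, depending on strategy)
    p_size: int       # bytes of state held by this group

    def productivity(self):
        # P_output / P_size; a group holding no state frees no memory, so rank it last
        return self.p_output / self.p_size if self.p_size else float("inf")

def choose_groups_to_spill(groups, bytes_to_free):
    # Greedily pick the least productive groups across all operators of the plan.
    chosen, freed = [], 0
    for g in sorted(groups, key=PartitionGroup.productivity):
        if freed >= bytes_to_free:
            break
        chosen.append(g)
        freed += g.p_size
    return chosen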
3.2.2 Global Output Strategy

In order to maximize the run-time throughput after pushing states onto disk, we need to have a global view of the partition groups that reflects how each partition group contributes to the final output. That is, the productivity value of each partition group needs to be defined in terms of the whole query tree. This requires the P_output value of each partition group to represent the number of final output tuples of the query generated from that group. The productivity value, P_output/P_size, now indicates how good the partition group is in terms of contributing to the final output of the query. Thus, if we keep the partition groups with high global productivity values in main memory, the overall throughput of the query tree is likely to be high compared with the previously described pushing strategies.

Note that the key difference of this global output approach from the local output approach is its new way of computing the P_output value. We have designed a tracing algorithm that computes the P_output value of each partition group. The basic idea is that whenever output tuples are generated from the query tree, we figure out the lineage of each output tuple. That is, we trace back to the corresponding partition groups of the different operators that have contributed to this output. The partition groups that contribute to an output tuple can be identified by applying the corresponding split operators. This is feasible since we can apply the split functions on the output tuple along the query tree to identify all the partition groups that the output tuple belongs to. Such tracing requires that the output tuple contains at least all join columns of the join operators in the query tree.

The main idea of the tracing algorithm is depicted in Figure 9. When k tuples are generated from Join_3, we directly update the P_output values of the partition groups in Join_3 that produce these outputs. To find out the partition groups in Join_2 that contribute to the outputs, we apply the partition function of Split_2 on each output tuple. Since multiple partition groups in Join_2 may contribute to one partition group in Join_3, we need to trace each partition group that is found in Join_2. Similarly, we apply the partition function of Split_1 to find the corresponding partition groups in operator Join_1. Note that we do not have to trace and update P_output for each output tuple; we only update the value with a random sample of the output tuples.

Figure 9: Tracing the Output Tuples

The pseudocode for the tracing algorithm for a chain of operators is given in Algorithm 1. Here, we assume that each stateful operator in the query tree keeps a reference to its immediate upstream stateful operator and a reference to its immediate upstream split operator. The upstream operators of an operator op are defined as the operators that feed their output tuples as inputs to op. Note that for a query tree, multiple immediate upstream stateful operators may exist for one operator. We can then similarly extend the tracing algorithm to use a breadth-first or depth-first traversal of the query plan tree to update the P_output values of the corresponding partitions.

Algorithm 1 updateStatistics(tpSet)
  /* Trace and update the P_output values for a given set of output tuples tpSet. */
  op := root operator of the query tree
  prv_op_ref := op.getUpstreamOperatorReference()
  prv_split_ref := op.getUpstreamSplitReference()
  while (prv_op_ref != null) and (prv_split_ref != null) do
    for each tuple tp in tpSet do
      cp_ID := partition ID of tp in prv_op_ref
      update P_output of the partition group with ID cp_ID
    end for
    prv_op_ref := prv_op_ref.getUpstreamOperatorReference()
    prv_split_ref := prv_split_ref.getUpstreamSplitReference()
  end while

Given the above tracing, the P_output value of each partition group indicates the total number of final outputs in which this partition group has been involved. The update of the P_size value is the same as discussed in the local output approach. Thus, P_output/P_size indicates the global productivity of the partition group. By pushing partition groups with a lower global productivity, the overall run-time throughput is expected to be better than with the localized approach as well as the bottom-up approach.
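In Python-like form, Algorithm 1 could look roughly as below (a sketch under assumed attribute names such as upstream_operator, upstream_split, and partition_id; the paper does not specify these interfaces). A random sample of the output tuples is passed in, as suggested above.

def update_statistics(root_op, tuple_sample):
    # Walk the chain upstream; credit every contributing partition group with
    # one (sampled) final output per tuple, using the split's partition function.
    op = root_op.upstream_operator       # immediate upstream stateful operator
    split = root_op.upstream_split       # split operator associated with it
    while op is not None and split is not None:
        for tp in tuple_sample:
            pid = split.partition_id(tp)             # lineage via the split function
            op.partition_group(pid).p_output += 1
        op, split = op.upstream_operator, op.upstream_split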
3.2.3 Global Output with Penalty Strategy

In the above approaches, the size of the partition group, P_size, reflects the main memory usage of the current partition group. However, as previously pointed out, the operators in a query tree are not independent. That is, output tuples of an upstream operator have to be stored in the downstream stateful operators. This indirectly affects the P_size of the corresponding partition groups in the downstream operator.

Figure 10: Impact of the Intermediate Results

For example, as shown in Figure 10, both partition groups P_1 and P_2 of OP_1 have the same P_size and P_output values. Thus, these two partitions have the same productivity value in the global output approach. However, given one input tuple, P_1 on average produces far fewer tuples that are output to OP_2 than P_2 does. All intermediate results have to be stored in the downstream stateful operators. Thus, pushing P_2 instead of P_1 can help to reduce the memory that will be needed to store possible intermediate results in downstream operators.

To capture this effect, we define an intermediate result factor for each partition group, denoted by P_inter. This factor indicates the possible intermediate results that will be stored in its downstream operators in the query tree. In this strategy, the productivity value of each partition group is defined as P_output/(P_size + P_inter). This intermediate result factor can be computed similarly to the tracing of the final output. That is, whenever an intermediate result is generated, we update the P_inter values of the corresponding partition groups in all the upstream operators. Figure 11 illustrates an example of how the tracing algorithm can be utilized to update P_inter. In this example, one input tuple to OP_1 eventually generates output tuples from OP_4; the number in each square box represents the number of intermediate results being generated.

Figure 11: Tracing and Updating P_inter Values

4. CLEAN UP DISK RESIDENT PARTITIONS

4.1 Clean Up of One Stateful Operator

When memory becomes available, disk resident states have to be brought back into main memory to produce the missing results. This state cleanup process can be performed at any time when memory becomes available during the execution. If no new resources are being devoted to the computation, then this cleanup process will likely occur at the end of the run-time phase. In the cleanup, we must produce all results that are missing due to spilling data to disk, while preventing duplicates. Note that multiple partition groups may exist on disk for one partition ID. This is because once a partition group has been pushed onto disk, new tuples with the same partition ID may again accumulate and thus a new partition group forms in main memory. Later, as needed, this partition group could be pushed onto the disk again.

The tasks that need to be performed in the cleanup can be described as follows: (1) Organize the disk resident partition groups based on their partition ID. (2) Merge partition groups with the same partition ID and generate missing results. (3) If a main memory resident partition group with the same ID exists, then merge this memory resident part with the disk resident ones.

Figure 12 illustrates an example of the partition groups before and after the cleanup process. Here, the example query is defined as A ⋈ B ⋈ C. We use a subscript to indicate the partition ID, while we use a superscript to distinguish between the partition groups with the same partition ID that have been pushed at different times. A collection of superscripts such as 1~r represents the merge of partition groups that had respectively been pushed at times 1, 2, ..., r.

Figure 12: Example of Cleanup Process

The merge of partition groups with the same ID can be described as follows. Assume that a partition group with partition ID i has been pushed k times to disk, represented as (A_i^1, B_i^1, C_i^1), (A_i^2, B_i^2, C_i^2), ..., (A_i^k, B_i^k, C_i^k) respectively. Here (A_i^j, B_i^j, C_i^j), 1 ≤ j ≤ k, denotes the j-th time that the partition group with ID i has been pushed onto the disk. For ease of description, we denote these partition groups by P_i^1, P_i^2, ..., P_i^k respectively. Due to our spilling at the granularity of complete partition groups (see Section 2.1), the results generated among all the members of each partition group have already been produced during the previous run-time execution phase. In other words, all the results A_i^1 ⋈ B_i^1 ⋈ C_i^1, A_i^2 ⋈ B_i^2 ⋈ C_i^2, ..., A_i^k ⋈ B_i^k ⋈ C_i^k are guaranteed to have been previously generated. For simplicity, we denote these results as V_i^1, V_i^2, ..., V_i^k.
These partition groups can thus be considered to be self-contained partition groups, given the fact that all the results have been generated from the operator states that are included in the partition group.

Merging two partition groups with the same partition ID results in a combined partition group that contains the union of the operator states from both partition groups. For example, the merge of P_i^1 and P_i^2 results in a new partition group P_i^{1,2}, now containing the operator states A_i^1 ∪ A_i^2, B_i^1 ∪ B_i^2, C_i^1 ∪ C_i^2. Note that the output V_i^{1,2} from partition group P_i^{1,2} should be (A_i^1 ∪ A_i^2) ⋈ (B_i^1 ∪ B_i^2) ⋈ (C_i^1 ∪ C_i^2). Clearly, a subset of these output tuples has already been generated, namely V_i^1 and V_i^2. Thus we must generate the missing part in the merging process for these two partition groups in order to make the resulting partition group P_i^{1,2} self-contained. This missing part is ΔV_i^{1,2} = V_i^{1,2} − V_i^1 − V_i^2.

Here, we observe that the problem of merging partition groups and producing missing results is similar to the problem of incremental batch view maintenance [, 6]. We thus now describe the algorithm for incremental batch view maintenance and then show how to map our problem to the view maintenance problem, so as to apply existing solutions from the literature [, 6]. Assume a materialized view V is defined as an n-way join over n distributed data sources, denoted by R_1 ⋈ R_2 ⋈ ... ⋈ R_n. There are n source deltas (ΔR_i, 1 ≤ i ≤ n) that need to be maintained.

Each ΔR_i denotes the changes (the collection of insert and delete tuples) on R_i at a logical level. An actual maintenance query will be issued separately, that is, one for insert tuples and one for delete tuples. Given the above notations, the batch view maintenance process is depicted in Equation (3):

    ΔV = ΔR_1 ⋈ R_2 ⋈ R_3 ⋈ ... ⋈ R_n
       + R_1' ⋈ ΔR_2 ⋈ R_3 ⋈ ... ⋈ R_n        (3)
       + ...
       + R_1' ⋈ R_2' ⋈ R_3' ⋈ ... ⋈ ΔR_n

Here R_i refers to the original data source state without any changes from ΔR_i incorporated in it yet, while R_i' represents the state after ΔR_i has been incorporated, i.e., it reflects R_i + ΔR_i (+ denotes the union operation). The discussion of the correctness of this batch view maintenance itself can be found in [, 6].

Intuitively, we can treat one partition group as the base state and the other as the incremental changes. Thus, the maintenance process described in Equation (3) can be naturally applied to merge partitions and recompute missing results.

Lemma 4.1. A combined partition group P_i^{r,s} generated by merging partition groups P_i^r and P_i^s using the incremental batch view maintenance algorithm of Equation (3) is self-contained if P_i^r and P_i^s were both self-contained before the merge.

Proof. Without loss of generality, we treat partition group P_i^r as the base state and P_i^s as the incremental change to P_i^r. The incremental batch view maintenance of Equation (3) produces the following two results: (1) the partition group P_i^{r,s} holding the states of both P_i^r and P_i^s, and (2) the incremental change to the base result V_i^r, namely ΔV_i^{r,s} = V_i^{r,s} − V_i^r. Since the two partition groups P_i^r and P_i^s already have the results V_i^r and V_i^s generated, the missing result of combining P_i^r and P_i^s can be generated by ΔV_i^{r,s} − V_i^s. As can be seen, P_i^{r,s} is self-contained since it has generated exactly the output results V_i^{r,s} = (ΔV_i^{r,s} − V_i^s) + (V_i^r + V_i^s).

As an example, let us assume A_i^1, B_i^1 and C_i^1 are the base states, while A_i^2, B_i^2 and C_i^2 are the incremental changes. Then, by evaluating the view maintenance query in Equation (4), we get the combined partition group P_i^{1,2} and the delta change ΔV_i^{1,2} = V_i^{1,2} − V_i^1. By further removing V_i^2 from this ΔV_i^{1,2}, we generate exactly the missing results of combining P_i^1 and P_i^2.

    V_i^{1,2} − V_i^1 = A_i^2 ⋈ B_i^1 ⋈ C_i^1
                      + (A_i^1 ∪ A_i^2) ⋈ B_i^2 ⋈ C_i^1        (4)
                      + (A_i^1 ∪ A_i^2) ⋈ (B_i^1 ∪ B_i^2) ⋈ C_i^2

Lemma 4.2. Given a collection of self-contained partition groups {P_i^1, P_i^2, ..., P_i^m}, a self-contained partition group P_i^{1~m} can be constructed by applying the above incremental view maintenance algorithm repeatedly, in m−1 steps.

Proof. A straightforward iterative process can be applied to combine such a collection of m partition groups. The first combination merges two partition groups, and the remaining m−2 partition groups are combined one at a time. Thus the combination ends after m−1 steps. Given that each combination results in a self-contained partition group based on Lemma 4.1, the final partition group is self-contained.

Based on Lemmas 4.1 and 4.2, we can see that the cleanup process (merging partition groups with the same partition ID) produces exactly all missing results and no duplicates. Note that memory resident partition groups can be combined with the disk resident parts in exactly the same manner as discussed above. As can be seen, the cleanup process does not rely on any timestamps. We thus do not have to keep track of any timestamps during the state spill process.
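For illustration, the sketch below merges two self-contained partition groups for the three-way case, following Equations (3) and (4). It is our own toy version: states are plain Python lists rather than disk-resident partitions, the join is a simple equi-join on one key, and the two groups are assumed to be disjoint (they were spilled at different times).

def join3(A, B, C, key):
    # Toy three-way equi-join, standing in for A ⋈ B ⋈ C on one join column.
    return [(a, b, c) for a in A for b in B for c in C
            if key(a) == key(b) == key(c)]

def merge_partition_groups(base, delta, key):
    # base = (A1, B1, C1), delta = (A2, B2, C2); both groups are self-contained.
    A1, B1, C1 = base
    A2, B2, C2 = delta
    # Equation (4): dV = A2xB1xC1 + (A1+A2)xB2xC1 + (A1+A2)x(B1+B2)xC2
    delta_v = (join3(A2, B1, C1, key)
               + join3(A1 + A2, B2, C1, key)
               + join3(A1 + A2, B1 + B2, C2, key))
    # V2 = A2xB2xC2 was already produced at run time; remove it to keep only the
    # missing results (Lemma 4.1).
    v2 = join3(A2, B2, C2, key)
    missing = [r for r in delta_v if r not in v2]
    merged = (A1 + A2, B1 + B2, C1 + C2)   # the combined, now self-contained group
    return merged, missing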
4.2 Clean Up of Multiple Stateful Operators

Given a query tree with multiple stateful operators, when operator states from any of the stateful operators have been pushed onto disk during run time, the final cleanup stage that removes all persistent data cannot be performed in a random order. This is because an operator has to incorporate the missing results generated from the cleanup process of any of its upstream operators. That is, the cleanup process of the join operators has to conform to the partial order defined by the query tree.

Figure 13 illustrates a five-way join query tree ((A ⋈ B ⋈ C) ⋈ D ⋈ E) with three join operators Join_1, Join_2, and Join_3. Assume we have operator states pushed onto disk from all three operators. The corresponding join results from these disk resident states are denoted by ΔI_1, ΔI_2, and ΔI_3. From Figure 13, we can see that the cleanup results of Join_1 (ΔI_1) have to be joined with the complete operator state related to stream D to produce the complete cleanup results for Join_2. Here, the complete stream D state includes the disk resident part and the corresponding main memory operator state. The cleanup result of Join_2, (ΔI_1 + ΔI_2), then has to join with the complete stream E state in Join_3 to produce the missing results.

Figure 13: Clean Up the Operator Tree

Given this constraint, we design a synchronized cleanup process to combine disk resident states and produce all missing results. We start the cleanup from the bottom operator(s), which are furthest from the root operator, i.e., from all the leaves. The cleanup processes for operators with the same distance from the root can be processed concurrently.

Once an upstream operator completes its cleanup process, it notifies its downstream operator using a control message interleaved in the data stream to signal that no more intermediate tuples will be sent to its downstream operators hereafter. This message then triggers the cleanup process of the downstream operator. Once the cleanup process of an operator is completed, the operator will no longer be scheduled by the query engine until the full cleanup is accomplished. This synchronized cleanup process is illustrated in Figure 13. The cleanup process starts from Join_1. The generated missing results ΔI_1 are sent to the downstream operators. Join_1 then generates a special control tuple End-of-Cleanup to indicate the end of its cleanup. The downstream stateful operator Join_2 starts its cleanup after receiving the control tuple. All the other non-stateful operators, such as split operators, simply pass the End-of-Cleanup tuple through to their downstream operator(s). This process continues until all cleanup processes have completed.

Note that in principle it is possible to start the cleanup process of all stateful operators at the same time. However, this may require a large amount of main memory space, since each cleanup process brings disk resident states into memory. On the other hand, the operator states of the downstream operators cannot be released in any case until their upstream operators finish their cleanup and compute the missing results. With the synchronized method, we instead bring these disk resident states into memory sequentially, one operator at a time. Furthermore, we can safely discard them once the cleanup process of that operator completes.

5. APPLYING TO PARTITIONED PARALLEL QUERY PROCESSING

A query system that processes long-running queries over data streams can easily run out of resources when processing large volumes of input stream data. Parallel query processing over a shared-nothing architecture, i.e., a cluster of machines, has been recognized as a scalable method to solve this problem [, 8, 8]. Parallel query processing can be especially useful for queries with multiple state intensive operators that are resource demanding in nature. This is exactly the type of queries we are focusing on in this work. However, the overall resources of even a distributed system may still be limited. A parallel processing system may still need to temporarily spill state partitions to disk to react to overall resource shortage immediately. In this section, we illustrate that our proposed state spill strategies naturally extend to such a partitioned parallel query processing environment. This observation broadens the applicability of our proposed spill techniques.

The approach of partitioning input streams (operator states) discussed in Section 2.1 is still applicable in the context of parallel query processing. In fact, it helps to achieve partitioned parallel query processing [7, , 7]. We can simply spread the stream partitions across different machines, with each machine processing only a portion of all inputs. Figure 14 depicts an example of processing a query plan with two joins in a parallel processing environment. First, stateful operators must be distributed across the available machines. In this work, we choose to allocate all stateful operators in the query tree to all the machines in the cluster, as shown in Figure 14(b). Thus, each machine has exactly the same number of stateful operators of the query tree activated. Each machine processes a portion of all input streams of the stateful operators. The partitioned stateful operators are connected by split operators, as shown in Figure 14(c). One split operator is inserted after each instance of the stateful operator.
The output of the operator instance is directly partitioned by the split operator and then shipped to the appropriate downstream operators. Note that other approaches exist for both allocating stateful operators across multiple machines and connecting such partitioned query plans. However, the main focus of this work is to adapt operator states to address the problem of run-time main memory shortage. The exploration of other partitioned parallel processing approaches as well as their performance is beyond the scope of this paper.

Figure 14: Partitioned Parallel Processing. (a) Original query; (b) allocating multiple stateful operators; (c) composing the partitioned query plan.

The throughput-oriented state spill strategies discussed in Section 3 naturally apply to partitioned parallel processing environments. This is because the statistics we collect are based on main memory usage and operator states only. However, given partitioned parallel processing, when applying the global output or the global output with penalty state spill strategy, the P_output value must be traced and then correctly updated across multiple machines. For example, as shown in Figure 15, the query plan is deployed on two machines. If k tuples are generated by Join_3, we directly update the P_output values of the partition groups in Join_3 that have produced these outputs. To find out the partition groups in Join_2 that contribute to the outputs, we then apply the partition function of Split_2 on each output tuple. Note that given partitioned parallel processing, partition groups from different machines may contribute to the same partition group of the downstream operator. Thus, the tracing and updating of P_output values may involve multiple machines.

Figure 15: Tracing the Number of Output Tuples

In this work, we design an UpdatePartitionStatistics message to notify other machines of updates to the P_inter and P_output values. Since each split operator knows exactly the mapping between the partition groups and the machines, it is feasible to send the message only to the machines that hold the partition groups to be updated. The revised updateStatistics algorithm is sketched in Algorithm 2. We classify the partition group IDs, obtained by applying the current split function, into localIDs and remoteIDs depending on whether the ID is mapped to the current machine. For the partition groups with localIDs, we update either P_inter or P_output based on whether the current tpSet is a set of intermediate results. For the remoteIDs, we compose UpdatePartitionStatistics messages with the appropriate information and send them to the machines that hold the partition groups with IDs in remoteIDs.

Algorithm 2 updateStatisticsRev(tpSet, intermediate)
  /* Trace and update the P_output/P_inter values for a given set of tuples tpSet.
     intermediate is a boolean indicating whether tpSet contains intermediate results of the query tree. */
  op := root operator of the query plan
  prv_op_ref := op.getUpstreamOperatorReference()
  prv_split_ref := op.getUpstreamSplitReference()
  while (prv_op_ref != null) and (prv_split_ref != null) do
    for each tuple tp in tpSet do
      cp_ID := partition ID of tp in prv_op_ref
      classify cp_ID into localIDs / remoteIDs
    end for
    if intermediate then
      update P_inter of the partition groups in localIDs
    else
      update P_output of the partition groups in localIDs
    end if
    compose and send UpdatePartitionStatistics message(s) for remoteIDs
    prv_split_ref := prv_split_ref.getUpstreamSplitReference()
    prv_op_ref := prv_op_ref.getUpstreamOperatorReference()
  end while
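One possible shape for the cross-machine bookkeeping of Algorithm 2 is sketched below in Python (entirely our own illustration; the paper does not give the UpdatePartitionStatistics message format or the operator interfaces). Local partition groups are updated directly, and one message per remote machine is composed for the rest.

from collections import defaultdict

def update_statistics_rev_step(op, split, tp_set, intermediate, this_machine, send):
    # One upstream step of Algorithm 2 (illustrative only).
    remote = defaultdict(list)                 # machine -> partition IDs to update
    for tp in tp_set:
        pid = split.partition_id(tp)
        owner = split.machine_for(pid)         # split knows the partition placement
        if owner == this_machine:
            group = op.partition_group(pid)
            if intermediate:
                group.p_inter += 1             # intermediate result counter
            else:
                group.p_output += 1            # final output counter
        else:
            remote[owner].append(pid)
    for machine, pids in remote.items():
        # UpdatePartitionStatistics: ask the owning machine to bump the same counters.
        send(machine, {"type": "UpdatePartitionStatistics",
                       "intermediate": intermediate,
                       "partition_ids": pids})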

6. PERFORMANCE STUDIES

6.1 Experimental Setup

All state spilling strategies discussed in this paper have been implemented in the D-CAPE system, a prototype continuous query system [3]. We use the five-way join query tree illustrated in Figure 15 to report our experimental results. The query is defined on 5 input streams denoted as A, B, C, D, and E, with each input stream having two columns. Join_1 is defined on the first column of each of the input streams A, B, and C. Join_2 is defined on the first join column of input D and the second join column of input C, while Join_3 is defined on the first column of input E and the second column of input D. The average tuple interarrival time is set to 50 ms for each input stream. All joins utilize the symmetric hash-join algorithm [].

We deploy the query on two machines, with each machine processing about half of all input partitions. Each machine has dual Xeon CPUs. All input streams are partitioned into 300 partitions. We set the memory threshold (θ_m) for state spilling to 60 MB for each machine. This means the system starts spilling states to disk when the memory usage of the system is over 60 MB. We vary two factors, namely the tuple range and the range join ratio, when generating input streams. We specify that a data value V appears R times for every K input tuples. Here K is defined as the tuple range and R as the range join ratio for V. Different values (partitions) in each join operator can have different range join ratios. The average of these ratios is defined as the average join ratio for that operator.

6.2 Experimental Evaluation

Figure 16 compares the run-time phase throughput of the different state spilling strategies. Here we set the average join ratio of Join_1 to 3, while the average join ratio of Join_2 and Join_3 is 2. In Figure 16, the X-axis represents time, while the Y-axis denotes the overall run-time throughput. From Figure 16, we can see that both the local output approach and the bottom-up approach perform much worse than the global output and the global output with penalty approaches. This is as expected, because the local output and the bottom-up approaches do not consider the productivity of partition groups at the global level. From Figure 16, we also see that the global output with penalty approach performs even better than the global output approach. This is because the global output with penalty approach is able to use the main memory resources more efficiently by considering both the partition group size and the possible intermediate results that have to be stored in the query tree.

Figure 16: Comparing Run-time Throughput with Join Ratios 3, 2, 2
Figures 17 and 18 show the corresponding memory usage when applying the different spilling strategies. Figure 17 shows the memory usage of the global output approach and the global output with penalty approach. Note that each zigzag in the lines indicates one state spill process. From Figure 17, we can see that over the 50 minutes of running the global output approach performs noticeably more state spill processes than the global output with penalty approach. Again, this is expected, since the global output with penalty approach considers both the size of the partition group and the overall memory impact on the query tree. As discussed in Section 3, however, having a smaller number of state spill processes does not imply a high overall run-time throughput: in Figure 18, the bottom-up approach has only 7 adaptations, yet its run-time throughput is much lower than that of the global output strategies.


Circuit Analysis I (ENGR 2405) Chapter 3 Method of Analysis Nodal(KCL) and Mesh(KVL) Crcut Analyss I (ENG 405) Chapter Method of Analyss Nodal(KCL) and Mesh(KVL) Nodal Analyss If nstead of focusng on the oltages of the crcut elements, one looks at the oltages at the nodes of the crcut,

More information

Performance Evaluation of Information Retrieval Systems

Performance Evaluation of Information Retrieval Systems Why System Evaluaton? Performance Evaluaton of Informaton Retreval Systems Many sldes n ths secton are adapted from Prof. Joydeep Ghosh (UT ECE) who n turn adapted them from Prof. Dk Lee (Unv. of Scence

More information

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour 6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the

More information

VRT012 User s guide V0.1. Address: Žirmūnų g. 27, Vilnius LT-09105, Phone: (370-5) , Fax: (370-5) ,

VRT012 User s guide V0.1. Address: Žirmūnų g. 27, Vilnius LT-09105, Phone: (370-5) , Fax: (370-5) , VRT012 User s gude V0.1 Thank you for purchasng our product. We hope ths user-frendly devce wll be helpful n realsng your deas and brngng comfort to your lfe. Please take few mnutes to read ths manual

More information

Steps for Computing the Dissimilarity, Entropy, Herfindahl-Hirschman and. Accessibility (Gravity with Competition) Indices

Steps for Computing the Dissimilarity, Entropy, Herfindahl-Hirschman and. Accessibility (Gravity with Competition) Indices Steps for Computng the Dssmlarty, Entropy, Herfndahl-Hrschman and Accessblty (Gravty wth Competton) Indces I. Dssmlarty Index Measurement: The followng formula can be used to measure the evenness between

More information

Can We Beat the Prefix Filtering? An Adaptive Framework for Similarity Join and Search

Can We Beat the Prefix Filtering? An Adaptive Framework for Similarity Join and Search Can We Beat the Prefx Flterng? An Adaptve Framework for Smlarty Jon and Search Jannan Wang Guolang L Janhua Feng Department of Computer Scence and Technology, Tsnghua Natonal Laboratory for Informaton

More information

Adaptive Load Shedding for Windowed Stream Joins

Adaptive Load Shedding for Windowed Stream Joins Adaptve Load Sheddng for Wndowed Stream Jons Buğra Gedk, Kun-Lung Wu, Phlp S. Yu, Lng Lu College of Computng, Georga Tech Atlanta GA 333 {bgedk,lnglu}@cc.gatech.edu IBM T. J. Watson Research Center Yorktown

More information

Problem Set 3 Solutions

Problem Set 3 Solutions Introducton to Algorthms October 4, 2002 Massachusetts Insttute of Technology 6046J/18410J Professors Erk Demane and Shaf Goldwasser Handout 14 Problem Set 3 Solutons (Exercses were not to be turned n,

More information

CMPS 10 Introduction to Computer Science Lecture Notes

CMPS 10 Introduction to Computer Science Lecture Notes CPS 0 Introducton to Computer Scence Lecture Notes Chapter : Algorthm Desgn How should we present algorthms? Natural languages lke Englsh, Spansh, or French whch are rch n nterpretaton and meanng are not

More information

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,

More information

Efficient Distributed File System (EDFS)

Efficient Distributed File System (EDFS) Effcent Dstrbuted Fle System (EDFS) (Sem-Centralzed) Debessay(Debsh) Fesehaye, Rahul Malk & Klara Naherstedt Unversty of Illnos-Urbana Champagn Contents Problem Statement, Related Work, EDFS Desgn Rate

More information

Shared Running Buffer Based Proxy Caching of Streaming Sessions

Shared Running Buffer Based Proxy Caching of Streaming Sessions Shared Runnng Buffer Based Proxy Cachng of Streamng Sessons Songqng Chen, Bo Shen, Yong Yan, Sujoy Basu Moble and Meda Systems Laboratory HP Laboratores Palo Alto HPL-23-47 March th, 23* E-mal: sqchen@cs.wm.edu,

More information

Transaction-Consistent Global Checkpoints in a Distributed Database System

Transaction-Consistent Global Checkpoints in a Distributed Database System Proceedngs of the World Congress on Engneerng 2008 Vol I Transacton-Consstent Global Checkponts n a Dstrbuted Database System Jang Wu, D. Manvannan and Bhavan Thurasngham Abstract Checkpontng and rollback

More information

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr) Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute

More information

Concurrent Apriori Data Mining Algorithms

Concurrent Apriori Data Mining Algorithms Concurrent Apror Data Mnng Algorthms Vassl Halatchev Department of Electrcal Engneerng and Computer Scence York Unversty, Toronto October 8, 2015 Outlne Why t s mportant Introducton to Assocaton Rule Mnng

More information

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty

More information

The Codesign Challenge

The Codesign Challenge ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.

More information

A Clustering Algorithm for Chinese Adjectives and Nouns 1

A Clustering Algorithm for Chinese Adjectives and Nouns 1 Clusterng lgorthm for Chnese dectves and ouns Yang Wen, Chunfa Yuan, Changnng Huang 2 State Key aboratory of Intellgent Technology and System Deptartment of Computer Scence & Technology, Tsnghua Unversty,

More information

Load Balancing for Hex-Cell Interconnection Network

Load Balancing for Hex-Cell Interconnection Network Int. J. Communcatons, Network and System Scences,,, - Publshed Onlne Aprl n ScRes. http://www.scrp.org/journal/jcns http://dx.do.org/./jcns.. Load Balancng for Hex-Cell Interconnecton Network Saher Manaseer,

More information

Sequential search. Building Java Programs Chapter 13. Sequential search. Sequential search

Sequential search. Building Java Programs Chapter 13. Sequential search. Sequential search Sequental search Buldng Java Programs Chapter 13 Searchng and Sortng sequental search: Locates a target value n an array/lst by examnng each element from start to fnsh. How many elements wll t need to

More information

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task

Term Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task Proceedngs of NTCIR-6 Workshop Meetng, May 15-18, 2007, Tokyo, Japan Term Weghtng Classfcaton System Usng the Ch-square Statstc for the Classfcaton Subtask at NTCIR-6 Patent Retreval Task Kotaro Hashmoto

More information

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points;

Subspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points; Subspace clusterng Clusterng Fundamental to all clusterng technques s the choce of dstance measure between data ponts; D q ( ) ( ) 2 x x = x x, j k = 1 k jk Squared Eucldean dstance Assumpton: All features

More information

CSCI 104 Sorting Algorithms. Mark Redekopp David Kempe

CSCI 104 Sorting Algorithms. Mark Redekopp David Kempe CSCI 104 Sortng Algorthms Mark Redekopp Davd Kempe Algorthm Effcency SORTING 2 Sortng If we have an unordered lst, sequental search becomes our only choce If we wll perform a lot of searches t may be benefcal

More information

TN348: Openlab Module - Colocalization

TN348: Openlab Module - Colocalization TN348: Openlab Module - Colocalzaton Topc The Colocalzaton module provdes the faclty to vsualze and quantfy colocalzaton between pars of mages. The Colocalzaton wndow contans a prevew of the two mages

More information

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields 17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 A mathematcal programmng approach to the analyss, desgn and

More information

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes SPH3UW Unt 7.3 Sphercal Concave Mrrors Page 1 of 1 Notes Physcs Tool box Concave Mrror If the reflectng surface takes place on the nner surface of the sphercal shape so that the centre of the mrror bulges

More information

USING GRAPHING SKILLS

USING GRAPHING SKILLS Name: BOLOGY: Date: _ Class: USNG GRAPHNG SKLLS NTRODUCTON: Recorded data can be plotted on a graph. A graph s a pctoral representaton of nformaton recorded n a data table. t s used to show a relatonshp

More information

Course Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms

Course Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms Course Introducton Course Topcs Exams, abs, Proects A quc loo at a few algorthms 1 Advanced Data Structures and Algorthms Descrpton: We are gong to dscuss algorthm complexty analyss, algorthm desgn technques

More information

Intra-Parametric Analysis of a Fuzzy MOLP

Intra-Parametric Analysis of a Fuzzy MOLP Intra-Parametrc Analyss of a Fuzzy MOLP a MIAO-LING WANG a Department of Industral Engneerng and Management a Mnghsn Insttute of Technology and Hsnchu Tawan, ROC b HSIAO-FAN WANG b Insttute of Industral

More information

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision SLAM Summer School 2006 Practcal 2: SLAM usng Monocular Vson Javer Cvera, Unversty of Zaragoza Andrew J. Davson, Imperal College London J.M.M Montel, Unversty of Zaragoza. josemar@unzar.es, jcvera@unzar.es,

More information

FINDING IMPORTANT NODES IN SOCIAL NETWORKS BASED ON MODIFIED PAGERANK

FINDING IMPORTANT NODES IN SOCIAL NETWORKS BASED ON MODIFIED PAGERANK FINDING IMPORTANT NODES IN SOCIAL NETWORKS BASED ON MODIFIED PAGERANK L-qng Qu, Yong-quan Lang 2, Jng-Chen 3, 2 College of Informaton Scence and Technology, Shandong Unversty of Scence and Technology,

More information

Today s Outline. Sorting: The Big Picture. Why Sort? Selection Sort: Idea. Insertion Sort: Idea. Sorting Chapter 7 in Weiss.

Today s Outline. Sorting: The Big Picture. Why Sort? Selection Sort: Idea. Insertion Sort: Idea. Sorting Chapter 7 in Weiss. Today s Outlne Sortng Chapter 7 n Wess CSE 26 Data Structures Ruth Anderson Announcements Wrtten Homework #6 due Frday 2/26 at the begnnng of lecture Proect Code due Mon March 1 by 11pm Today s Topcs:

More information

Sorting Review. Sorting. Comparison Sorting. CSE 680 Prof. Roger Crawfis. Assumptions

Sorting Review. Sorting. Comparison Sorting. CSE 680 Prof. Roger Crawfis. Assumptions Sortng Revew Introducton to Algorthms Qucksort CSE 680 Prof. Roger Crawfs Inserton Sort T(n) = Θ(n 2 ) In-place Merge Sort T(n) = Θ(n lg(n)) Not n-place Selecton Sort (from homework) T(n) = Θ(n 2 ) In-place

More information

CPU Load Shedding for Binary Stream Joins

CPU Load Shedding for Binary Stream Joins Under consderaton for publcaton n Knowledge and Informaton Systems CPU Load Sheddng for Bnary Stream Jons Bugra Gedk 1,2, Kun-Lung Wu 1, Phlp S. Yu 1 and Lng Lu 2 1 IBM T.J. Watson Research Center, Hawthorne,

More information

Meta-heuristics for Multidimensional Knapsack Problems

Meta-heuristics for Multidimensional Knapsack Problems 2012 4th Internatonal Conference on Computer Research and Development IPCSIT vol.39 (2012) (2012) IACSIT Press, Sngapore Meta-heurstcs for Multdmensonal Knapsack Problems Zhbao Man + Computer Scence Department,

More information

An Efficient Garbage Collection for Flash Memory-Based Virtual Memory Systems

An Efficient Garbage Collection for Flash Memory-Based Virtual Memory Systems S. J and D. Shn: An Effcent Garbage Collecton for Flash Memory-Based Vrtual Memory Systems 2355 An Effcent Garbage Collecton for Flash Memory-Based Vrtual Memory Systems Seunggu J and Dongkun Shn, Member,

More information

Conditional Speculative Decimal Addition*

Conditional Speculative Decimal Addition* Condtonal Speculatve Decmal Addton Alvaro Vazquez and Elsardo Antelo Dep. of Electronc and Computer Engneerng Unv. of Santago de Compostela, Span Ths work was supported n part by Xunta de Galca under grant

More information

Improvement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration

Improvement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration Improvement of Spatal Resoluton Usng BlockMatchng Based Moton Estmaton and Frame Integraton Danya Suga and Takayuk Hamamoto Graduate School of Engneerng, Tokyo Unversty of Scence, 6-3-1, Nuku, Katsuska-ku,

More information

3D vector computer graphics

3D vector computer graphics 3D vector computer graphcs Paolo Varagnolo: freelance engneer Padova Aprl 2016 Prvate Practce ----------------------------------- 1. Introducton Vector 3D model representaton n computer graphcs requres

More information

Insertion Sort. Divide and Conquer Sorting. Divide and Conquer. Mergesort. Mergesort Example. Auxiliary Array

Insertion Sort. Divide and Conquer Sorting. Divide and Conquer. Mergesort. Mergesort Example. Auxiliary Array Inserton Sort Dvde and Conquer Sortng CSE 6 Data Structures Lecture 18 What f frst k elements of array are already sorted? 4, 7, 1, 5, 1, 16 We can shft the tal of the sorted elements lst down and then

More information

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics Introducton G10 NAG Fortran Lbrary Chapter Introducton G10 Smoothng n Statstcs Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Smoothng Methods... 2 2.2 Smoothng Splnes and Regresson

More information

Skew Angle Estimation and Correction of Hand Written, Textual and Large areas of Non-Textual Document Images: A Novel Approach

Skew Angle Estimation and Correction of Hand Written, Textual and Large areas of Non-Textual Document Images: A Novel Approach Angle Estmaton and Correcton of Hand Wrtten, Textual and Large areas of Non-Textual Document Images: A Novel Approach D.R.Ramesh Babu Pyush M Kumat Mahesh D Dhannawat PES Insttute of Technology Research

More information

Connection-information-based connection rerouting for connection-oriented mobile communication networks

Connection-information-based connection rerouting for connection-oriented mobile communication networks Dstrb. Syst. Engng 5 (1998) 47 65. Prnted n the UK PII: S0967-1846(98)90513-7 Connecton-nformaton-based connecton reroutng for connecton-orented moble communcaton networks Mnho Song, Yanghee Cho and Chongsang

More information

A Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems

A Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems A Unfed Framework for Semantcs and Feature Based Relevance Feedback n Image Retreval Systems Ye Lu *, Chunhu Hu 2, Xngquan Zhu 3*, HongJang Zhang 2, Qang Yang * School of Computng Scence Smon Fraser Unversty

More information

Efficient Broadcast Disks Program Construction in Asymmetric Communication Environments

Efficient Broadcast Disks Program Construction in Asymmetric Communication Environments Effcent Broadcast Dsks Program Constructon n Asymmetrc Communcaton Envronments Eleftheros Takas, Stefanos Ougaroglou, Petros copoltds Department of Informatcs, Arstotle Unversty of Thessalonk Box 888,

More information

Parallel matrix-vector multiplication

Parallel matrix-vector multiplication Appendx A Parallel matrx-vector multplcaton The reduced transton matrx of the three-dmensonal cage model for gel electrophoress, descrbed n secton 3.2, becomes excessvely large for polymer lengths more

More information

Brave New World Pseudocode Reference

Brave New World Pseudocode Reference Brave New World Pseudocode Reference Pseudocode s a way to descrbe how to accomplsh tasks usng basc steps lke those a computer mght perform. In ths week s lab, you'll see how a form of pseudocode can be

More information

y and the total sum of

y and the total sum of Lnear regresson Testng for non-lnearty In analytcal chemstry, lnear regresson s commonly used n the constructon of calbraton functons requred for analytcal technques such as gas chromatography, atomc absorpton

More information

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung

More information

CSE 326: Data Structures Quicksort Comparison Sorting Bound

CSE 326: Data Structures Quicksort Comparison Sorting Bound CSE 326: Data Structures Qucksort Comparson Sortng Bound Steve Setz Wnter 2009 Qucksort Qucksort uses a dvde and conquer strategy, but does not requre the O(N) extra space that MergeSort does. Here s the

More information

Some Advanced SPC Tools 1. Cumulative Sum Control (Cusum) Chart For the data shown in Table 9-1, the x chart can be generated.

Some Advanced SPC Tools 1. Cumulative Sum Control (Cusum) Chart For the data shown in Table 9-1, the x chart can be generated. Some Advanced SP Tools 1. umulatve Sum ontrol (usum) hart For the data shown n Table 9-1, the x chart can be generated. However, the shft taken place at sample #21 s not apparent. 92 For ths set samples,

More information

S1 Note. Basis functions.

S1 Note. Basis functions. S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type

More information

SRB: Shared Running Buffers in Proxy to Exploit Memory Locality of Multiple Streaming Media Sessions

SRB: Shared Running Buffers in Proxy to Exploit Memory Locality of Multiple Streaming Media Sessions SRB: Shared Runnng Buffers n Proxy to Explot Memory Localty of Multple Streamng Meda Sessons Songqng Chen,BoShen, Yong Yan, Sujoy Basu, and Xaodong Zhang Department of Computer Scence Moble and Meda System

More information

Learning-Based Top-N Selection Query Evaluation over Relational Databases

Learning-Based Top-N Selection Query Evaluation over Relational Databases Learnng-Based Top-N Selecton Query Evaluaton over Relatonal Databases Lang Zhu *, Wey Meng ** * School of Mathematcs and Computer Scence, Hebe Unversty, Baodng, Hebe 071002, Chna, zhu@mal.hbu.edu.cn **

More information

CSE 326: Data Structures Quicksort Comparison Sorting Bound

CSE 326: Data Structures Quicksort Comparison Sorting Bound CSE 326: Data Structures Qucksort Comparson Sortng Bound Bran Curless Sprng 2008 Announcements (5/14/08) Homework due at begnnng of class on Frday. Secton tomorrow: Graded homeworks returned More dscusson

More information

Kent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming

Kent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming CS 4/560 Desgn and Analyss of Algorthms Kent State Unversty Dept. of Math & Computer Scence LECT-6 Dynamc Programmng 2 Dynamc Programmng Dynamc Programmng, lke the dvde-and-conquer method, solves problems

More information

Proper Choice of Data Used for the Estimation of Datum Transformation Parameters

Proper Choice of Data Used for the Estimation of Datum Transformation Parameters Proper Choce of Data Used for the Estmaton of Datum Transformaton Parameters Hakan S. KUTOGLU, Turkey Key words: Coordnate systems; transformaton; estmaton, relablty. SUMMARY Advances n technologes and

More information

Classifier Selection Based on Data Complexity Measures *

Classifier Selection Based on Data Complexity Measures * Classfer Selecton Based on Data Complexty Measures * Edth Hernández-Reyes, J.A. Carrasco-Ochoa, and J.Fco. Martínez-Trndad Natonal Insttute for Astrophyscs, Optcs and Electroncs, Lus Enrque Erro No.1 Sta.

More information

Analysis of Collaborative Distributed Admission Control in x Networks

Analysis of Collaborative Distributed Admission Control in x Networks 1 Analyss of Collaboratve Dstrbuted Admsson Control n 82.11x Networks Thnh Nguyen, Member, IEEE, Ken Nguyen, Member, IEEE, Lnha He, Member, IEEE, Abstract Wth the recent surge of wreless home networks,

More information

Reducing Frame Rate for Object Tracking

Reducing Frame Rate for Object Tracking Reducng Frame Rate for Object Trackng Pavel Korshunov 1 and We Tsang Oo 2 1 Natonal Unversty of Sngapore, Sngapore 11977, pavelkor@comp.nus.edu.sg 2 Natonal Unversty of Sngapore, Sngapore 11977, oowt@comp.nus.edu.sg

More information

Cluster Analysis of Electrical Behavior

Cluster Analysis of Electrical Behavior Journal of Computer and Communcatons, 205, 3, 88-93 Publshed Onlne May 205 n ScRes. http://www.scrp.org/ournal/cc http://dx.do.org/0.4236/cc.205.350 Cluster Analyss of Electrcal Behavor Ln Lu Ln Lu, School

More information

Video Proxy System for a Large-scale VOD System (DINA)

Video Proxy System for a Large-scale VOD System (DINA) Vdeo Proxy System for a Large-scale VOD System (DINA) KWUN-CHUNG CHAN #, KWOK-WAI CHEUNG *# #Department of Informaton Engneerng *Centre of Innovaton and Technology The Chnese Unversty of Hong Kong SHATIN,

More information

Intro. Iterators. 1. Access

Intro. Iterators. 1. Access Intro Ths mornng I d lke to talk a lttle bt about s and s. We wll start out wth smlartes and dfferences, then we wll see how to draw them n envronment dagrams, and we wll fnsh wth some examples. Happy

More information

MATHEMATICS FORM ONE SCHEME OF WORK 2004

MATHEMATICS FORM ONE SCHEME OF WORK 2004 MATHEMATICS FORM ONE SCHEME OF WORK 2004 WEEK TOPICS/SUBTOPICS LEARNING OBJECTIVES LEARNING OUTCOMES VALUES CREATIVE & CRITICAL THINKING 1 WHOLE NUMBER Students wll be able to: GENERICS 1 1.1 Concept of

More information

Learning from Multiple Related Data Streams with Asynchronous Flowing Speeds

Learning from Multiple Related Data Streams with Asynchronous Flowing Speeds Learnng from Multple Related Data Streams wth Asynchronous Flowng Speeds Zh Qao, Peng Zhang, Jng He, Jnghua Yan, L Guo Insttute of Computng Technology, Chnese Academy of Scences, Bejng, 100190, Chna. School

More information

arxiv: v3 [cs.ds] 7 Feb 2017

arxiv: v3 [cs.ds] 7 Feb 2017 : A Two-stage Sketch for Data Streams Tong Yang 1, Lngtong Lu 2, Ybo Yan 1, Muhammad Shahzad 3, Yulong Shen 2 Xaomng L 1, Bn Cu 1, Gaogang Xe 4 1 Pekng Unversty, Chna. 2 Xdan Unversty, Chna. 3 North Carolna

More information

Edge Detection in Noisy Images Using the Support Vector Machines

Edge Detection in Noisy Images Using the Support Vector Machines Edge Detecton n Nosy Images Usng the Support Vector Machnes Hlaro Gómez-Moreno, Saturnno Maldonado-Bascón, Francsco López-Ferreras Sgnal Theory and Communcatons Department. Unversty of Alcalá Crta. Madrd-Barcelona

More information

Learning the Kernel Parameters in Kernel Minimum Distance Classifier

Learning the Kernel Parameters in Kernel Minimum Distance Classifier Learnng the Kernel Parameters n Kernel Mnmum Dstance Classfer Daoqang Zhang 1,, Songcan Chen and Zh-Hua Zhou 1* 1 Natonal Laboratory for Novel Software Technology Nanjng Unversty, Nanjng 193, Chna Department

More information

CACHE MEMORY DESIGN FOR INTERNET PROCESSORS

CACHE MEMORY DESIGN FOR INTERNET PROCESSORS CACHE MEMORY DESIGN FOR INTERNET PROCESSORS WE EVALUATE A SERIES OF THREE PROGRESSIVELY MORE AGGRESSIVE ROUTING-TABLE CACHE DESIGNS AND DEMONSTRATE THAT THE INCORPORATION OF HARDWARE CACHES INTO INTERNET

More information

Concurrent models of computation for embedded software

Concurrent models of computation for embedded software Concurrent models of computaton for embedded software and hardware! Researcher overvew what t looks lke semantcs what t means and how t relates desgnng an actor language actor propertes and how to represent

More information

Real-time Scheduling

Real-time Scheduling Real-tme Schedulng COE718: Embedded System Desgn http://www.ee.ryerson.ca/~courses/coe718/ Dr. Gul N. Khan http://www.ee.ryerson.ca/~gnkhan Electrcal and Computer Engneerng Ryerson Unversty Overvew RTX

More information

APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT

APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT 3. - 5. 5., Brno, Czech Republc, EU APPLICATION OF MULTIVARIATE LOSS FUNCTION FOR ASSESSMENT OF THE QUALITY OF TECHNOLOGICAL PROCESS MANAGEMENT Abstract Josef TOŠENOVSKÝ ) Lenka MONSPORTOVÁ ) Flp TOŠENOVSKÝ

More information