Adaptive Load Shedding for Windowed Stream Joins


Buğra Gedik (College of Computing, Georgia Tech, Atlanta, GA 30332), Kun-Lung Wu (IBM T. J. Watson Research Center, Yorktown Heights, NY 10598), Philip S. Yu (IBM T. J. Watson Research Center), Ling Liu (College of Computing, Georgia Tech)

Submitted to the 31st VLDB Conference, Trondheim, Norway, 2005.

Abstract

We present an adaptive load shedding approach for windowed stream joins. In contrast to the conventional approach of dropping tuples from the input streams, we explore the concept of selective processing for load shedding. We allow stream tuples to be stored in the windows and shed excessive CPU load by performing the join operations, not on the entire set of tuples within the windows, but on a dynamically changing subset of tuples that are learned to be highly beneficial. We support such dynamic selective processing through three forms of runtime adaptations: adaptation to input stream rates, adaptation to time correlation between the streams, and adaptation to join directions. Our load shedding approach enables us to integrate utility-based load shedding with time correlation-based load shedding. Indexes are used to further speed up the execution of stream joins. Experiments are conducted to evaluate our adaptive load shedding in terms of output rate and utility. The results show that our selective processing approach to load shedding is very effective and significantly outperforms the approach that drops tuples from the input streams.

1 Introduction

With the ever increasing rate of digital information available from on-line sources and networked sensing devices [6], the management of bursty and unpredictable data streams has become a challenging problem. It requires solutions that will enable applications to effectively access and extract information from such data streams. A promising solution for this problem is to use declarative query processing engines specialized for handling data streams, such as data stream management systems (DSMSs), exemplified by Aurora [5], STREAM [], and TelegraphCQ [7]. Joins are key operations in any type of query processing engine, and they are becoming more important with the increasing need for fusing data from the various types of sensors available, such as environmental, traffic, and network sensors. Here we list some real-life applications of stream joins. We will return to these examples when we discuss assumptions about the characteristics of the joined streams.

- Finding similar news items from two different sources: Assuming that news items from CNN and Reuters are represented by weighted keywords (the join attribute) in their respective streams, we can perform a windowed inner product join to find similar news items.
- Finding correlations between phone calls and stock trading: Assuming that phone call streams are represented as {..., (P_a, P_b, t_1), ...}, where (P_a, P_b, t_1) means P_a calls P_b at time t_1, and stock trading streams are represented as {..., (P_b, S_x, t_2), ...}, where (P_b, S_x, t_2) means P_b trades S_x at time t_2, we can perform a windowed equi-join on person to find hints such as: P_a hints S_x to P_b in the phone call.
- Finding correlated attacks from two different streams: Assuming that alerts from two different sources are represented by tuples of the form (source, target, {attack descriptors}, time) in their respective streams, we can perform a windowed overlap join on attack descriptors to find correlated attacks.

Recently, performing joins on unbounded data streams has been actively studied [, 4, ]. This is mainly due to the fact that traditional join algorithms are mostly blocking operations: they need to perform a scan on one of the inputs to produce all the result tuples that match a given tuple from the other input. However, data streams are unbounded, and blocking is not an option.
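
To make the three example join conditions above concrete, the following is a small Java sketch of the corresponding predicates: equality on a single-valued attribute, set overlap above a threshold, and weighted-set inner product above a threshold. It is an illustration only; the class, method names, and thresholds are not taken from the paper.

```java
import java.util.Map;
import java.util.Set;

// Illustrative predicates for the three example join conditions above.
public final class JoinConditions {

    // Equi-join on a single-valued attribute, e.g., the person in the
    // phone call / stock trading scenario.
    public static boolean equiJoin(Object a, Object b) {
        return a.equals(b);
    }

    // Overlap join on set-valued attributes, e.g., attack descriptors:
    // match if the two sets share at least `threshold` items.
    public static boolean overlapJoin(Set<String> a, Set<String> b, int threshold) {
        int shared = 0;
        for (String item : a) {
            if (b.contains(item) && ++shared >= threshold) return true;
        }
        return false;
    }

    // Inner product join on weighted sets, e.g., weighted news keywords:
    // match if the dot product of the two weight maps exceeds `threshold`.
    public static boolean innerProductJoin(Map<String, Double> a, Map<String, Double> b,
                                           double threshold) {
        double dot = 0.0;
        for (Map.Entry<String, Double> e : a.entrySet()) {
            Double w = b.get(e.getKey());
            if (w != null) dot += e.getValue() * w;
        }
        return dot >= threshold;
    }

    private JoinConditions() {}
}
```
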
To address this problem, several approaches have been proposed. One natural way of handling joins on infinite streams is to use sliding windows. In a windowed stream join, a tuple from one stream is joined with only the tuples currently available in the window of the other stream. A sliding window can be defined as a time-based or a count-based window. A time-based window contains the tuples received during the last w seconds, whereas a count-based window contains the last N tuples received. Windows can be either user-defined, in which case we have fixed windows, or system-defined and thus flexible, in which case the system uses the available memory to maximize the output size of the join. Another way of handling the problem of blocking joins is to use punctuated streams [], in which punctuations that give hints about the rest of the stream are used to prevent blocking. Two-way stream joins with user-defined, time-based windows constitute one of the most common join types in the data stream management research to date [, , 4].

In order to keep up with the incoming rates of streams, CPU load shedding is usually needed in stream processing systems. Several factors may contribute to the demand for CPU load shedding, including (a) bursty and unpredictable rates of the incoming streams, (b) large window sizes, and (c) costly join conditions. Data streams can be unpredictable in nature [5], and incoming stream rates tend to soar during peak times. A high stream rate requires more resources for performing a windowed join, due to both the increased number of tuples received per unit time and the increased number of tuples within a fixed-size time window. Similarly, large window sizes imply that more tuples are needed for processing a windowed join. Costly join conditions typically require more CPU time. In this paper, we present an adaptive CPU load shedding approach for windowed stream joins, aiming at maximizing both the output rate and the output utility of stream joins. The proposed approach is applicable to all kinds of join conditions, ranging from simple conditions such as equi-joins defined over single-valued attributes (e.g., the phone calls and stock trading scenario) to complex conditions such as those defined over set-valued attributes (e.g., the correlated attacks scenario) or weighted set-valued attributes (e.g., the similar news items scenario).

Summary of Contributions

Our adaptive load shedding approach has several unique characteristics. First, instead of dropping tuples from the input streams as proposed in many existing approaches, our adaptive load shedding framework follows a selective processing methodology by keeping tuples within the windows, but processing them against a subset of the tuples in the opposite window. Second, our approach achieves effective load shedding by properly adapting join operations to three dynamic stream properties: (i) incoming stream rates, (ii) time correlation between the streams, and (iii) join directions. The amount of selective processing is adjusted according to the incoming stream rates. Prioritized basic windows are used to adapt join operations to the time-based correlation between the input streams. Partial symmetric joins are dynamically employed to take advantage of the most beneficial join direction learned from the streams. Third, but not least, our selective processing approach enables a coherent integration of the three adaptations with utility-based load shedding. Maximizing the utility of the output tuples produced is especially important when certain tuples are more valuable than others. We employ indexes to speed up the selective processing of joins. Experiments were conducted to evaluate the effectiveness of our adaptive load shedding approach. Our experimental results show that the three adaptations can effectively shed the load in the presence of any of the following: bursty and unpredictable rates of the incoming streams, large window sizes, or costly join conditions.

2 Related Work

Based on the metric being optimized, related work on load shedding in windowed stream joins can be divided into two categories. The work in the first category aims at maximizing the utility of the output produced. Different tuples may have different importance values based on the application. For instance, in the news join example, certain types of news, e.g., security news, may be of higher value, and similarly, in the stock trading example, phone calls from insiders may be of higher interest when compared to calls from regular traders. In this case, an output from the join operator that contains highly-valued tuples is preferable to a higher rate output generated from lesser-valued tuples.

The work presented in [] uses user-specified utility specifications to drop tuples with low utility values from the input streams. We refer to this type of load shedding as utility-based load shedding, also referred to as semantic load shedding in the literature. The work in the second category aims at maximizing the number of output tuples produced [9, 4, ]. This can be achieved through rate reduction on the source streams, i.e., dropping tuples from the input streams, as suggested in [6, 4]. The work presented in [4] investigates algorithms for evaluating moving window joins over pairs of unbounded streams. Although the main focus of [4] is not on load shedding, scenarios where system resources are insufficient to keep up with the input streams are also considered. There are several other works related to load shedding in DSMSs in general, including memory allocation among query operators [3] or inter-operator queues [8], load shedding for aggregation queries [4], and overload-sensitive management of archived streams [8]. In summary, most of the existing techniques used for shedding load are tuple dropping for CPU-limited scenarios and memory allocation among windows for memory-limited scenarios. However, dropping tuples from the input streams without paying attention to the selectivity of such tuples may result in a suboptimal solution. Based on this observation, heuristics that take into account the selectivity of the tuples are proposed in [9].

Figure 1: Examples of match probability density functions (cases I and II; the x-axis is the time a tuple has spent in the window w, and the marked vertical lines show candidate tuple drop times).

A different approach, called age-based load shedding, was proposed recently in [] for performing memory-limited stream joins. This work is based on the observation that there exists a time-based correlation between the streams. Concretely, the probability of having a match between a tuple just received from one stream and a tuple residing in the window of the opposite stream may change based on the difference between the timestamps of the tuples (assuming timestamps are assigned based on the arrival times of the tuples at the query engine). Under this observation, memory is conserved by keeping a tuple in the window, from its reception, only until the average rate of output tuples generated using this tuple reaches its maximum value. For instance, in case I of Figure 1, the tuples can be kept in the window until they reach the marked vertical line. This effectively cuts down the memory needed to store the tuples within the window and yet produces an output close to the actual output obtained without window reduction.


Obviously, when the distribution of the incoming streams has its peak at the beginning of the window, the age-based window reduction can be effective for shedding memory load. A natural question to ask is: Can the age-based window reduction approach of [] be used to shed CPU load? This is a valid question, because reducing the window size also decreases the number of comparisons that have to be made in order to evaluate the join. However, as illustrated in case II of Figure 1, this technique does not directly extend to the CPU-limited case where memory is not the constraint. When the distribution does not have its peak close to the beginning of the window, the window reduction approach has to keep tuples until they are close to the end of the window. As a result, tuples that are close to the beginning of the window, and thus are not contributing much to the output, will be processed until the peak is reached close to the end of the window. This observation points out two important facts. First, time-based correlation between the windowed streams can play an important role in load shedding. Second, the window reduction technique that is effective for utilizing time-based correlation to shed memory load is not suitable for CPU load shedding, especially when the distribution of the incoming streams is unknown or unpredictable. With the above analysis in mind, we propose an adaptive load shedding approach that is capable of performing selective processing of tuples in the stream windows by dynamically adapting to input stream rates, time-based correlations between the streams, and the profitability of different join directions. To the best of our knowledge, our load shedding approach is the first one that can handle arbitrary time correlations and at the same time support maximization of output utility.

3 Overview

Unlike the conventional load shedding approach of dropping tuples from the input streams, our adaptive load shedding encourages stream tuples to be kept in the windows. It sheds the CPU load by performing the stream joins on a dynamically changing subset of tuples that are learned to be highly beneficial, instead of on the entire set of tuples stored within the windows. This allows us to exploit the characteristics of stream applications that exhibit time-based correlation between the streams. Concretely, we assume that there exists a non-flat distribution of the probability of match between a newly-received tuple and the other tuples in the opposite window, depending on the difference between the timestamps of the tuples. There are several reasons behind this assumption. First, variable delays can exist between the streams as a result of differences between the communication overheads of receiving tuples from different sources [9]. Second, and more importantly, there may exist variable delays between related events from different sources. For instance, in the news join example, different news agencies are expected to have different reaction times due to differences in their news collection and publishing processes. In the stock trading example, there will be a time delay between the phone call containing the hint and the action of buying the hinted stock. In the correlated attacks example, different parts of the network may have been attacked at different times. Note that the effects of time correlation on data stream joins are to some extent analogous to the effects of the time of data creation in data warehouses, which are exploited by join algorithms such as Drag-Join [3]. Although our load shedding is based on the assumption that the memory resource is sufficient, we want to point out two important observations.

First, with increasing input stream rates and larger stream window sizes, it is quite common for the CPU to become limited before memory does. Second, even under limited memory, our adaptive load shedding approach can be used to effectively shed the excessive CPU load after window reduction is performed for handling the memory constraints.

3.1 Technical Highlights

Our load shedding approach is best understood through its two core mechanisms, each answering a fundamental question on adaptive load shedding without tuple dropping. The first is called partial processing, and it answers the question of how much we can process given a window of stream tuples. The factors to be considered in answering this question include the performance of the stream join operation under the current system load and the current incoming stream rates. In particular, partial processing dynamically adjusts the amount of load shedding to be performed through rate adaptation. The second is called selective processing, and it answers the question of what we should process given the constraint on the amount of processing defined at the partial processing phase. The factors that influence the answer to this question include the characteristics of stream window segments, the profitability of join directions, and the utility of different stream tuples. Selective processing extends partial processing to intelligently select the tuples to be used during join processing under heavy system load, with the goal of maximizing the output rate or the output utility of the stream join. Before describing the details of partial processing and selective processing, we first briefly review the basic concepts involved in processing windowed stream joins and establish the notation that will be used throughout the paper.

3.2 Basic Concepts and Notations

A two-way windowed stream join operation takes two input streams, denoted as S_1 and S_2, performs the stream join, and generates the output. For notational convenience, we denote the opposite stream of stream i (i = 1, 2) as stream ī. The sliding window defined over stream S_i is denoted as W_i, and has size w_i in terms of seconds. We denote a tuple as t and its arrival timestamp as T(t). Other notations will be introduced in the rest of the paper as needed. Table 1 summarizes the notations used throughout the paper. A windowed stream join is performed by fetching tuples from the input streams and processing them against the tuples in the opposite window. Figure 2 illustrates the process of windowed stream joins. For a newly fetched tuple t from stream S_i, the join is performed in the following three steps.

Table 1: Notations used throughout the paper

  t          tuple
  T(t)       timestamp of the tuple t
  S_i        input stream i
  W_i        window over S_i
  w_i        window size of W_i in seconds
  λ_i        rate of S_i in tuples per second
  B_{i,j}    basic window j in W_i
  b          basic window size in seconds
  n_i        number of basic windows in W_i
  r          fraction parameter
  δ_r        fraction boost factor
  r_i        fraction parameter for W_i
  r_{i,z}    fraction parameter for W_i for a tuple of type z
  f_i(.)     match probability density function for W_i
  p_{i,j}    probability of match for B_{i,j}
  o_{i,j}    expected output from comparing a tuple t with a tuple in B_{i,j}
  s^i_j      k, where o_{i,k} is the jth item in the sorted list {o_{i,l} | l ∈ [1..n_i]}
  u_{i,z}    expected utility from comparing a tuple t of type z with a tuple in W_i
  Z          tuple type domain
  Z(t)       type of a tuple
  V(z)       utility of a tuple of type z
  ω_{i,z}    frequency of a tuple of type z in S_i
  T_r        rate adaptation period
  T_c        time correlation adaptation period
  γ          sampling probability

Figure 2: Stream join example (tuples from S_1 and S_2 flow into windows W_1 and W_2, are matched against the opposite window to produce output, and expire as they leave the windows).

First, tuple t is inserted into the beginning of window W_i. Second, the tuples at the end of the opposite window W_ī are checked in order and removed if they have expired. A tuple t_o expires from window W_ī iff T − T(t_o) > w_ī, where T represents the current time. The expiration check stops when an unexpired tuple is encountered. The tuples in window W_ī are sorted in the order of their arrival timestamps by default, and the window is managed as a doubly linked list for efficiently performing insertion and expiration operations. In the third and last step, tuple t is processed against the tuples in the window W_ī, and matching tuples are generated as output. Figure 3 summarizes the join processing steps.

Figure 3: Join Processing

  JoinProcessing()
    for i = 1 to 2
      if no tuple in S_i, continue
      t ← fetch tuple from S_i
      insert t in front of W_i
      repeat
        t_o ← last tuple in W_ī
        if T − T(t_o) > w_ī, remove t_o from W_ī
      until T − T(t_o) ≤ w_ī
      sort items in t
      foreach t_a ∈ W_ī
        evaluate the join condition on (t, t_a)

Although not depicted in the pseudo-code, in practice buffers can be placed at the inputs of the join operator, which is common practice in DSMS query networks and is also useful for masking small-scale rate bursts in stand-alone joins.
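
For concreteness, the following is a minimal Java sketch of the three-step processing in Figure 3 for one direction of the join (a tuple arriving on S_1). It is an illustration under the assumptions stated in the comments, not the paper's ssjoin.* implementation; the Tuple class, the JoinCondition interface, and the nested-loop probe are placeholders.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of Figure 3: insert, expire, probe (one join direction).
public class WindowedJoinSketch {
    public static class Tuple {
        final long timestampMillis;   // T(t), assigned at arrival
        final Object value;           // join attribute (set, weighted set, or single value)
        Tuple(long ts, Object v) { timestampMillis = ts; value = v; }
    }

    public interface JoinCondition {   // e.g., equality, overlap, inner product
        boolean matches(Tuple a, Tuple b);
    }

    private final Deque<Tuple> w1 = new ArrayDeque<>();   // window W1 (newest at front)
    private final Deque<Tuple> w2 = new ArrayDeque<>();   // window W2 (newest at front)
    private final long w2Millis;                          // size of W2 in milliseconds
    private final JoinCondition condition;

    public WindowedJoinSketch(long w2Millis, JoinCondition condition) {
        this.w2Millis = w2Millis;
        this.condition = condition;
    }

    // Process a newly fetched tuple t from S1 against W2, emitting matches.
    public void onTupleFromS1(Tuple t, java.util.function.BiConsumer<Tuple, Tuple> emit) {
        w1.addFirst(t);                                    // step 1: insert into own window
        long now = t.timestampMillis;
        while (!w2.isEmpty()                               // step 2: expire from the opposite window
                && now - w2.peekLast().timestampMillis > w2Millis) {
            w2.removeLast();
        }
        for (Tuple other : w2) {                           // step 3: probe the opposite window
            if (condition.matches(t, other)) emit.accept(t, other);
        }
    }
    // A symmetric onTupleFromS2 would mirror the above with the roles of W1/W2 swapped.
}
```
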
4 Partial Processing - How Much Can We Process?

The first step in our approach to shedding CPU load without dropping tuples is to determine how much we can process given the windows of stream tuples that participate in the join. We call this step partial processing based load shedding. For instance, consider a scenario in which the limitation in processing power requires dropping half of the tuples, i.e., decreasing the input rate of the streams by half. A partial processing approach is to allow every tuple to enter the windows, but to decrease the cost of join processing by comparing a newly-fetched tuple with only a fraction of the window defined on the opposite stream. Partial processing, by itself, does not significantly increase the number of output tuples produced by the join operator when compared to tuple dropping or window reduction approaches. However, as we will describe later in the paper, it forms a basis for performing selective processing, which exploits the time-based correlation between the streams and makes it possible to accommodate utility-based load shedding, in order to maximize the output rate or the utility of the output tuples produced. Two important factors are considered in determining the amount of partial processing: (i) the current incoming stream rates, and (ii) the performance of the stream join operation under the current system load. Partial processing employs rate adaptation to adjust the amount of processing performed dynamically. The performance of the stream join under the current system load is a critical factor, and it is influenced by the concrete join algorithm and the optimizations used for performing join operations. In the rest of this section, we first describe rate adaptation, then discuss the details of utilizing indexes for efficient join processing. Finally, we describe how to employ rate adaptation in conjunction with indexed join processing.

4.1 Rate Adaptation

Partial processing-based load shedding is performed by adapting to the rates of the input streams. This is done by observing the tuple consumption rate of the join operation and comparing it to the input rates of the streams, in order to determine the fraction of the windows to be processed. This adaptation is performed periodically, every T_r seconds. T_r is called the adaptation period. We denote the fraction parameter as r, which defines the ratio of the windows to be processed. In other words, the setting of r answers the question of how much load we should shed. Algorithm 1 gives a sketch of the rate adaptation process.

Algorithm 1: Rate Adaptation

  RateAdapt()
  (1) Initially: r ← 1
  (2) every T_r seconds
  (3)   α_1 ← number of tuples fetched from S_1 since the last adaptation
  (4)   α_2 ← number of tuples fetched from S_2 since the last adaptation
  (5)   λ_1 ← average rate of S_1 since the last adaptation
  (6)   λ_2 ← average rate of S_2 since the last adaptation
  (7)   β ← (α_1 + α_2) / ((λ_1 + λ_2) · T_r)
  (8)   if β < 1 then r ← β · r
  (9)   else r ← min(1, δ_r · r)

Initially, the fraction parameter r is set to 1. Every T_r seconds, the average rates of the input streams S_1 and S_2 are determined as λ_1 and λ_2. Similarly, the numbers of tuples fetched from streams S_1 and S_2 since the last adaptation step are determined as α_1 and α_2. Tuples from the input streams may not be fetched at the rate they arrive, due to an inappropriate initial value of the parameter r or due to a change in the stream rates since the last adaptation step. As a result, β = (α_1 + α_2) / ((λ_1 + λ_2) · T_r) determines the percentage of the input tuples fetched by the join algorithm. Based on the value of β, the fraction parameter r is readjusted at the end of each adaptation step. If β is smaller than 1, r is multiplied by β, with the assumption that comparing a tuple with the other tuples in the opposite window has the dominating cost in join processing. Otherwise, the join is able to process all the incoming tuples with the current value of r. In this case, the r value is set to min(1, δ_r · r), where δ_r is called the fraction boost factor. This is aimed at increasing the fraction of the windows processed, optimistically assuming that additional processing power is available. If not, the parameter r will be decreased during the next adaptation step. Higher values of the fraction boost factor result in being more aggressive at increasing the parameter r. The adaptation period T_r should be small enough to adapt to the bursty nature of the streams, but large enough not to cause overhead and undermine the join processing.
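
A minimal Java sketch of the rate adaptation loop in Algorithm 1 is shown below. The counter-update methods and the fixed-rate scheduling are assumptions made for the example; the paper's ssjoin.* package is not reproduced here.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

// Sketch of Algorithm 1: periodically recompute the fraction parameter r.
public class RateAdaptation {
    private final LongAdder fetched1 = new LongAdder(), fetched2 = new LongAdder(); // alpha1, alpha2
    private final LongAdder arrived1 = new LongAdder(), arrived2 = new LongAdder(); // to derive lambda1, lambda2
    private final double boostFactor;        // delta_r
    private final long periodSeconds;        // T_r
    private volatile double fraction = 1.0;  // r, read by the join loop

    public RateAdaptation(double boostFactor, long periodSeconds) {
        this.boostFactor = boostFactor;
        this.periodSeconds = periodSeconds;
    }

    // Called by the join loop / the stream receivers.
    public void onFetched(int stream) { (stream == 1 ? fetched1 : fetched2).increment(); }
    public void onArrived(int stream) { (stream == 1 ? arrived1 : arrived2).increment(); }
    public double fraction()          { return fraction; }

    public void start() {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(this::adapt, periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    private void adapt() {
        double alpha = fetched1.sumThenReset() + fetched2.sumThenReset();
        double arrived = arrived1.sumThenReset() + arrived2.sumThenReset(); // (lambda1 + lambda2) * T_r
        if (arrived == 0) return;                    // nothing arrived; leave r unchanged
        double beta = alpha / arrived;               // fraction of the input actually consumed
        if (beta < 1.0) fraction = beta * fraction;  // shed more load
        else fraction = Math.min(1.0, boostFactor * fraction); // optimistically boost r
    }
}
```
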
4.2 Indexed Join and Partial Processing

Stream indexing [, 3] can be used to cope with the high processing cost of the join operation, reducing the amount of load shedding performed. However, there are two important points to be resolved before indexing can be employed together with partial processing, and thus with the other algorithms we introduce in the following sections. The first issue is that, in a streaming scenario, the index has to be maintained dynamically (through insertions and removals) as tuples enter and leave the window. This means that the assumption made in Section 4.1 about finding matching tuples within a window (the index search cost) being the dominant cost in join processing no longer holds. Second, the index does not naturally allow processing only a certain portion of the window. We resolve these issues in the context of inverted indexes, which are predominantly used for joins based on set- or weighted set-valued attributes. The same ideas apply to the hash indexes used for equi-joins on single-valued attributes; our inverted-index implementation reduces to a hash index in the presence of single-valued attributes.

4.2.1 Inverted Indexes

An inverted index consists of a collection of sorted identifier lists. In order to insert a set into the index, for each item in the set, the unique identifier of the set is inserted into the identifier list associated with that particular item. Similar to insertion, removal of a set from the index requires finding the identifier lists associated with the items in the set. The removal is performed by removing the identifier of the set from these identifier lists. In our context, the inverted index is maintained as an in-memory data structure. The collection of identifier lists is managed in a hashtable. The hashtable is used to efficiently find the identifier list associated with an item. The identifier lists are internally organized as balanced binary trees, sorted on the unique set identifiers, to facilitate both fast insertion and removal. The set identifiers are in fact pointers to the tuples they represent. Query processing on an inverted index follows a multi-way merging process, which is usually accelerated through the use of a heap. The same type of processing is used for all the different types of queries we have mentioned so far. Specifically, given a query set, the identifier lists corresponding to the items in the query set are retrieved using the hashtable. These sorted identifier lists are then merged. This is done by inserting the frontiers of the lists into a min-heap and iteratively removing the topmost set identifier from the heap and replacing it with the next set identifier (the new frontier) in its list. During this process, the identifier of an indexed set sharing k items with the query set will be picked from the heap k consecutive times, making it possible to process relatively complex overlap and inner product queries efficiently [7].

4.2.2 Time Ordered Identifier Lists

Although the usage of inverted indexes speeds up the processing of joins based on set-valued attributes, it also introduces significant insertion and deletion costs. This problem can be alleviated by exploiting the timestamps of the tuples being indexed and the fact that these tuples are received in timestamp order from the input streams. In particular, instead of maintaining identifier lists as balanced trees sorted on identifiers, we can maintain them as linked lists sorted on the timestamps of the tuples (sets). This does not affect the merging phase of the indexed search, since a timestamp uniquely identifies a tuple in a stream unless different tuples with equal timestamps are allowed. In order to handle the latter, the identifier lists can be sorted based on (timestamp, identifier) pairs. (For weighted sets, the weights should also be stored within the identifier lists, in order to answer inner product queries.)
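
The following is a compact Java sketch of an inverted index with time-ordered identifier lists and a heap-based multiway merge that answers overlap queries (report every indexed set sharing at least a threshold number of items with the query set). It is an illustration of the technique described above, not the paper's data structure; Java 16+ records are used for brevity, and timestamps are assumed unique.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;
import java.util.Set;

// Inverted index with time-ordered identifier lists (Section 4.2.2 style) plus a
// heap-based multiway merge for overlap queries (Section 4.2.1 style).
public class InvertedIndexSketch {
    public record Entry(long timestamp, Set<Integer> items) {}

    // item -> identifier list; lists stay sorted by timestamp because tuples arrive in order.
    private final Map<Integer, ArrayDeque<Entry>> lists = new HashMap<>();

    public void insert(Entry e) {                         // append to the tail of each list
        for (int item : e.items())
            lists.computeIfAbsent(item, k -> new ArrayDeque<>()).addLast(e);
    }

    public void expireOlderThan(long minTimestamp) {      // drop expired entries from the heads
        for (ArrayDeque<Entry> list : lists.values())
            while (!list.isEmpty() && list.peekFirst().timestamp() < minTimestamp)
                list.removeFirst();
    }

    private static final class Cursor {                   // frontier of one identifier list
        final Iterator<Entry> it;
        Entry current;
        Cursor(Iterator<Entry> it) { this.it = it; this.current = it.next(); }
        boolean advance() { if (it.hasNext()) { current = it.next(); return true; } return false; }
    }

    public List<Entry> overlapQuery(Set<Integer> query, int threshold) {
        PriorityQueue<Cursor> heap =
            new PriorityQueue<>(Comparator.comparingLong((Cursor c) -> c.current.timestamp()));
        for (int item : query) {
            ArrayDeque<Entry> list = lists.get(item);
            if (list != null && !list.isEmpty()) heap.add(new Cursor(list.iterator()));
        }
        List<Entry> result = new ArrayList<>();
        Entry last = null;
        int count = 0;
        while (!heap.isEmpty()) {
            Cursor c = heap.poll();
            Entry e = c.current;
            if (last != null && e.timestamp() == last.timestamp()) {
                count++;                                  // the same set surfaced again: one more shared item
            } else {
                if (last != null && count >= threshold) result.add(last);
                last = e;
                count = 1;
            }
            if (c.advance()) heap.add(c);                 // push the new frontier back into the heap
        }
        if (last != null && count >= threshold) result.add(last);
        return result;
    }
}
```
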

5 Selective Processing - What Should We Process?

Selective processing extends partial processing to intelligently select the tuples to be used during join processing under heavy system load. Given the constraint on the amount of processing defined at the partial processing phase, selective processing aims at maximizing the output rate or the output utility of the stream joins. Three important factors are used to determine what we should select for join processing: (i) the characteristics of stream window segments, (ii) the profitability of join directions, and (iii) the utility of different stream tuples. We first describe time correlation adaptation and join direction adaptation, which form the core of our selective processing approach. Then we discuss utility-based load shedding. The main ideas behind time correlation adaptation and join direction adaptation are to prioritize segments (basic windows) of the windows in order to process the parts that will yield higher output (time correlation adaptation), and to start load shedding from one of the windows if one direction of the join is producing more output than the other (join direction adaptation).

5.1 Time Correlation Adaptation

For the purpose of time correlation adaptation, we divide the windows of the join into basic windows. Concretely, window W_i is divided into n_i basic windows of size b seconds each, where n_i = 1 + ⌈w_i / b⌉. B_{i,j} denotes the jth basic window in W_i, j ∈ [1..n_i]. Tuples do not move from one basic window to another. As a result, tuples leave the join operator one basic window at a time, and the basic windows slide discretely, b seconds at a time. The newly fetched tuples are inserted into the first basic window. When the first basic window is full, meaning that the newly fetched tuple has a timestamp that is at least b seconds larger than that of the oldest tuple in the first basic window, the last basic window is emptied and all the basic windows are shifted, the last basic window becoming the first. The newly fetched tuples can now flow into the new first basic window, which is empty. The basic windows are managed in a circular buffer, so that the shift of windows is a constant time operation. The basic windows themselves can be organized either as linked lists (if no indexing is used) or as inverted/hashed indexes (if indexing is used). Time correlation adaptation is performed periodically, every T_c seconds. T_c is called the time correlation adaptation period. During the time between two consecutive adaptation steps, the join operation performs two types of processing. For a newly fetched tuple, it either performs selective processing or full processing. Selective processing is carried out by looking for matches with the tuples in high priority basic windows of the opposite window, where the number of basic windows used depends on the amount of load shedding to be performed. Full processing is done by comparing the newly fetched tuple against all the tuples in the opposite window. The aim of full processing is to collect statistics about the usefulness of the basic windows for the join operation. The details of the adaptation step and of full processing are given in Algorithm 2 and in lines 2-5 of Algorithm 3.

Algorithm 2: Time Correlation Adaptation

  TimeCorrelationAdapt()
  (1) every T_c seconds
  (2)   for i = 1 to 2
  (3)     sort in descending order {ô_{i,j} | j ∈ [1..n_i]} into array O
  (4)     for j = 1 to n_i
  (5)       o_{i,j} ← ô_{i,j} / (γ · r · b · λ_1 · λ_2 · T_c)
  (6)       s^i_j ← k, where O[j] = ô_{i,k}
  (7)     for j = 1 to n_i
  (8)       ô_{i,j} ← 0

Full processing is only done for a sampled subset of the stream, based on a parameter called the sampling probability, denoted as γ. A newly fetched tuple goes through selective processing with probability 1 − r · γ; in other words, it goes through full processing with probability r · γ. The fraction parameter r is used to scale the sampling probability, so that full processing does not consume all processing resources when the load on the system is high.

Algorithm 3: Tuple Processing and Time Correlation

  ProcessTuple()
  (1)  when processing tuple t against window W_i
  (2)    if rand() < r · γ                                  { full processing }
  (3)      process t against all tuples in B_{i,j}, for all j ∈ [1..n_i]
  (4)      foreach match in B_{i,j}, j ∈ [1..n_i]
  (5)        ô_{i,j} ← ô_{i,j} + 1
  (6)    else                                               { selective processing }
  (7)      a ← r · |W_i|
  (8)      for j = 1 to n_i
  (9)        a ← a − |B_{i,s^i_j}|
  (10)       if a > 0
  (11)         process t against all tuples in B_{i,s^i_j}
  (12)       else
  (13)         r_e ← 1 + a / |B_{i,s^i_j}|
  (14)         process t against an r_e fraction of the tuples in B_{i,s^i_j}
  (15)         break
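
The selective branch of Algorithm 3 (lines 6-15) can be sketched in Java as follows. The BasicWindow type, the priority array, and the probe callback are illustrative assumptions, and fractional processing is approximated by probing a prefix of the basic window; this is a sketch, not the paper's implementation.

```java
import java.util.List;
import java.util.function.BiConsumer;

// Sketch of the selective branch of Algorithm 3: spend a budget of r * |W_i| tuple
// comparisons on the basic windows of W_i, visiting them in decreasing priority order.
public class SelectiveProcessingSketch<T> {

    public interface BasicWindow<E> {
        int size();              // |B_{i,j}|
        List<E> tuples();        // the stored tuples
    }

    /**
     * @param t         newly fetched tuple from the opposite stream
     * @param windows   basic windows B_{i,1}..B_{i,n_i} of window W_i
     * @param priority  priority order (indexes into windows, most productive first)
     * @param fraction  current fraction parameter r (or r_i with join direction adaptation)
     * @param probe     evaluates the join condition and emits output on a match
     */
    public void processSelectively(T t, List<BasicWindow<T>> windows, int[] priority,
                                   double fraction, BiConsumer<T, T> probe) {
        int total = 0;
        for (BasicWindow<T> bw : windows) total += bw.size();
        double budget = fraction * total;                    // a <- r * |W_i|
        for (int rank = 0; rank < priority.length; rank++) {
            BasicWindow<T> bw = windows.get(priority[rank]);
            if (bw.size() == 0) continue;
            budget -= bw.size();
            if (budget > 0) {                                // the whole basic window fits
                for (T other : bw.tuples()) probe.accept(t, other);
            } else {                                         // process only a fraction, then stop
                double re = 1.0 + budget / bw.size();
                int limit = (int) Math.round(re * bw.size());
                List<T> tuples = bw.tuples();
                for (int k = 0; k < limit && k < tuples.size(); k++) {
                    probe.accept(t, tuples.get(k));
                }
                break;
            }
        }
    }
}
```
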
The goal of full processing is to calculate, for each basic window B_{i,j}, the expected number of output tuples produced from comparing a newly fetched tuple t with a tuple in B_{i,j}, denoted as o_{i,j}. These values are used later, during the adaptation step, to prioritize windows. In particular, the o_{i,j} values are used to calculate the s^i_j values. Concretely, we have:

  s^i_j = k, where o_{i,k} is the jth item in the list {o_{i,l} | l ∈ [1..n_i]} sorted in descending order

This means that B_{i,s^i_1} is the highest priority basic window in W_i, B_{i,s^i_2} is the next, and so on. Lines 7-14 in Algorithm 3 give a sketch of selective processing. During selective processing, the s^i_j values are used to guide the load shedding. Concretely, in order to process a newly fetched tuple t against window W_i, first the number of tuples from window W_i that are going to be considered for processing is determined by calculating r · |W_i|, where |W_i| denotes the number of tuples in the window. The fraction parameter r is determined by rate adaptation, as described in Section 4.1. Then, tuple t is processed against basic windows, starting from the highest priority one, i.e., B_{i,s^i_1}, and going in decreasing order of priority. A basic window B_{i,s^i_j} is searched for matches completely if adding |B_{i,s^i_j}| to the number of tuples used so far from window W_i to process tuple t does not exceed r · |W_i|. Otherwise, an appropriate fraction of the basic window is used and the processing is completed for tuple t.

5.1.1 Impact of Basic Window Size

The setting of the basic window size parameter b involves trade-offs. Smaller values are better for capturing the peak of the match probability distribution, but they also introduce overhead in processing. For instance, recalling Section 4.2.2, in an indexed join operation the identifier lists have to be looked up for each basic window. Although the lists themselves are shorter and the total merging cost does not increase with smaller basic windows, the cost of looking up the identifier lists from the hashtables increases with the number of basic windows, n_i.

Here we analyze how well the match probability distribution, which depends on the time correlation between the streams, is utilized for a given value of the basic window size parameter b, under a given load condition. We use r to denote the fraction of tuples in the join windows that can be used for processing tuples; thus r is used to model the current load of the system. We assume that r can go over 1, in which case abundant processing power is available. We use f_i(.) to denote the match probability density function for window W_i, where the integral of f_i over [T_1, T_2] gives the probability that a newly fetched tuple will match with a tuple t in W_i that has a timestamp T(t) ∈ [T − T_2, T − T_1]. Note that, due to the discrete movement of the basic windows, a basic window covers a time-varying area under the match probability density function. This area, denoted as p_{i,j} for basic window B_{i,j}, can be calculated by observing that B_{i,j} covers the area over the interval [max(0, (x + j − 2) · b), min(w_i, (x + j − 1) · b)] on the time axis ([0, w_i]) when an x ∈ [0, 1] fraction of the first basic window is full. Then we have:

  p_{i,j} = ∫_{x=0}^{1} ∫_{y=max(0, (x+j−2)·b)}^{min(w_i, (x+j−1)·b)} f_i(y) dy dx

For the following discussion, we overload the notation s^i_j such that s^i_j = k, where p_{i,k} is the jth item in the sorted list {p_{i,l} | l ∈ [1..n_i]}. The number of basic windows whose tuples are all considered for processing is denoted as c_e. The fraction of tuples considered for processing in the last basic window used is denoted as c_p; c_p is zero if the last used basic window is completely processed. We have:

  c_e = min(n_i, ⌊r · w_i / b⌋)
  c_p = (r · w_i − c_e · b) / b  if c_e < n_i,  and 0 otherwise

Then the area under f_i that represents the portion of window W_i processed, denoted as p_u, can be calculated as:

  p_u = c_p · p_{i,s^i_{c_e+1}} + Σ_{j=1}^{c_e} p_{i,s^i_j}

Let us define g(f, a) as the maximum area under the function f with a total extent of a on the time axis. Then we can calculate the optimality of p_u, denoted as φ, as follows:

  φ = p_u / g(f_i, w_i · min(1, r))

When φ = 1, the join processing is optimal with respect to output rate (ignoring the overhead of small basic windows). Otherwise, the expected output rate is φ times the optimal value, under the current load condition (r) and basic window size setting (b). Figure 4 plots φ (on the z-axis) as a function of b/w (on the x-axis) and r (on the y-axis) for two different match probability distributions, the bottom one being more skewed.

Figure 4: Optimality of the join (φ) for different loads (r) and basic window sizes (b/w), under two different match probability distribution functions (shown as densities of match probability over (T − T(t)) / w).

We make the following three observations from the figure:
- Decreasing availability of computational resources negatively influences the optimality of the join for a fixed basic window size.
- Increasing skewness in the match probability distribution decreases the optimality of the join for a fixed basic window size.
- Smaller basic window sizes provide better join optimality when the available computational resources are low or the match probability distribution is skewed.

As a result, small basic window sizes are favorable for skewed match probability distributions and heavy load conditions. We report our experimental study on the effect of the overhead stemming from managing a large number of basic windows on the output rate of the join operation in Section 6.
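
To make the φ metric concrete, the following sketch numerically evaluates p_{i,j}, p_u, and φ for a discretized match probability density. The example density, the window parameters, and the grid resolution are arbitrary choices made for illustration; they do not come from the paper.

```java
import java.util.Arrays;

// Numerical illustration of the optimality metric phi from Section 5.1.1.
public class BasicWindowOptimality {
    static final int GRID = 10_000;                 // integration grid over [0, w]

    public static void main(String[] args) {
        double w = 10.0, b = 1.0, r = 0.3;          // window size, basic window size, load
        int n = 1 + (int) Math.ceil(w / b);         // number of basic windows, n_i
        // Example density: match probability peaking late in the window (case II style).
        double[] f = new double[GRID];
        double sum = 0;
        for (int k = 0; k < GRID; k++) {
            double y = (k + 0.5) * w / GRID;
            f[k] = Math.exp(-0.5 * Math.pow((y - 0.7 * w) / (0.1 * w), 2));
            sum += f[k];
        }
        for (int k = 0; k < GRID; k++) f[k] /= sum; // normalize: total mass is 1

        double[] p = new double[n];                 // p_{i,j}: average mass covered by B_{i,j}
        int xSteps = 100;
        for (int xs = 0; xs < xSteps; xs++) {
            double x = (xs + 0.5) / xSteps;         // fill fraction of the first basic window
            for (int j = 1; j <= n; j++) {
                double lo = Math.max(0, (x + j - 2) * b), hi = Math.min(w, (x + j - 1) * b);
                p[j - 1] += mass(f, w, lo, hi) / xSteps;
            }
        }
        double[] sorted = p.clone();
        Arrays.sort(sorted);                        // ascending; read from the end for priority
        int ce = Math.min(n, (int) Math.floor(r * w / b));
        double cp = ce < n ? (r * w - ce * b) / b : 0;
        double pu = 0;
        for (int j = 0; j < ce; j++) pu += sorted[n - 1 - j];
        if (ce < n) pu += cp * sorted[n - 1 - ce];
        // g(f, a): maximum mass under f over any interval of extent a
        // (a contiguous interval attains the maximum for this unimodal example).
        double extent = w * Math.min(1, r), best = 0;
        for (int k = 0; k < GRID; k++) {
            double lo = k * w / GRID;
            if (lo + extent > w) break;
            best = Math.max(best, mass(f, w, lo, lo + extent));
        }
        System.out.printf("p_u = %.4f, g = %.4f, phi = %.4f%n", pu, best, pu / best);
    }

    // Mass of the discretized density over [lo, hi].
    static double mass(double[] f, double w, double lo, double hi) {
        int a = (int) Math.floor(lo / w * GRID), z = (int) Math.ceil(hi / w * GRID);
        double m = 0;
        for (int k = Math.max(0, a); k < Math.min(GRID, z); k++) m += f[k];
        return m;
    }
}
```
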
5.2 Join Direction Adaptation

Due to the time-based correlation between the streams, a newly fetched tuple from stream S_1 may match with a tuple from stream S_2 that has already made its way into the middle portions of window W_2. This means that, most of the time, a newly fetched tuple from stream S_2 has to stay within window W_2 for some time before it can be matched with a tuple from stream S_1. This implies that one direction of the join processing may be of lesser value, in terms of the number of output tuples produced, than the other direction. For instance, in this running example, processing a newly fetched tuple t from stream S_2 against window W_1 will produce a smaller number of output tuples than the other way around, as the tuples that will match t have not yet arrived at window W_1. In this case, the symmetry of the join operation can be broken during load shedding in order to achieve a higher output rate. This can be achieved by decreasing the fraction of tuples processed from window W_1 first, and from W_2 later (if needed). We call this join direction adaptation. Join direction adaptation is performed immediately after rate adaptation. Specifically, two different fraction parameters are defined, denoted as r_i for window W_i, i ∈ {1, 2}. During join processing, an r_i fraction of the tuples in window W_i is considered, making it possible to adjust the join direction by changing r_1 and r_2.

This requires replacing r with r_i in line 7 of Algorithm 3 and in line 5 of Algorithm 2. The constraint in setting the r_i values is that the number of tuple comparisons performed per time unit should stay the same as in the case where there is a single r value, as computed by Algorithm 1. The number of tuple comparisons performed per time unit is given by Σ_{i=1}^{2} r · λ_ī · (λ_i · w_i), since the number of tuples in window W_i is λ_i · w_i. Thus we should have Σ_{i=1}^{2} r · λ_ī · (λ_i · w_i) = Σ_{i=1}^{2} r_i · λ_ī · (λ_i · w_i), i.e.:

  r · (w_1 + w_2) = r_1 · w_1 + r_2 · w_2

The valuable direction of the join can be determined by comparing the expected number of output tuples produced from comparing a newly fetched tuple with a tuple in W_i, denoted as o_i, for i = 1 and 2. This can be computed as o_i = (1/n_i) · Σ_{j=1}^{n_i} o_{i,j}. Assuming o_1 > o_2, without loss of generality, we can set r_1 = min(1, r · (w_1 + w_2) / w_1). This maximizes r_1 while respecting the above constraint. The generic procedure for setting r_1 and r_2 is given in Algorithm 4.

Algorithm 4: Join Direction Adaptation

  JoinDirectionAdapt()
  (1) Initially: r_1 ← 1, r_2 ← 1
  (2) upon completion of a RateAdapt() call
  (3)   o_1 ← (1/n_1) · Σ_{j=1}^{n_1} o_{1,j}
  (4)   o_2 ← (1/n_2) · Σ_{j=1}^{n_2} o_{2,j}
  (5)   if o_1 ≥ o_2 then r_1 ← min(1, r · (w_1 + w_2) / w_1)
  (6)   else r_1 ← max(0, (r · (w_1 + w_2) − w_2) / w_1)
  (7)   r_2 ← (r · (w_1 + w_2) − r_1 · w_1) / w_2

Join direction adaptation, as described in this section, assumes that any portion of one of the windows is more valuable than all portions of the other window. This may not be the case for applications where both match probability distribution functions, f_1(t) and f_2(t), are non-flat. For instance, in a traffic application scenario, a two-way traffic flow between two points implies that both directions of the join are valuable. We introduce a more advanced join direction adaptation algorithm that can handle such cases in the next subsection, as part of utility-based load shedding.
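
A compact Java sketch of the basic join direction adaptation in Algorithm 4 is given below, under the constraint r · (w_1 + w_2) = r_1 · w_1 + r_2 · w_2. The class, field, and parameter names are illustrative assumptions.

```java
// Sketch of Algorithm 4: split the processing budget r over the two join directions
// while keeping the total number of tuple comparisons per time unit unchanged.
public class JoinDirectionAdaptation {
    private double r1 = 1.0, r2 = 1.0;   // per-window fraction parameters

    /**
     * @param r   fraction parameter computed by rate adaptation (Algorithm 1)
     * @param w1  size of window W1 in seconds
     * @param w2  size of window W2 in seconds
     * @param o1  average expected output per comparison against W1 (mean of o_{1,j})
     * @param o2  average expected output per comparison against W2 (mean of o_{2,j})
     */
    public void adapt(double r, double w1, double w2, double o1, double o2) {
        double budget = r * (w1 + w2);                 // r1*w1 + r2*w2 must equal this
        if (o1 >= o2) {
            r1 = Math.min(1.0, budget / w1);           // favor the more productive direction
        } else {
            r1 = Math.max(0.0, (budget - w2) / w1);    // give W2 as much as possible first
        }
        r2 = (budget - r1 * w1) / w2;
    }

    public double fractionForW1() { return r1; }
    public double fractionForW2() { return r2; }
}
```
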
5.3 Utility-based Load Shedding

So far, we have targeted our load shedding algorithms toward maximizing the number of tuples produced by the join operation, a commonly used metric in the literature [9, ]. Utility-based load shedding, also called semantic load shedding [], is another metric employed for guiding load shedding. It has the benefit of being able to distinguish high-utility output from output merely containing a large number of tuples. In the context of join operations, utility-based load shedding promotes output that results from matching tuples of higher importance/utility. In this section, we describe how utility-based load shedding is integrated into the mechanism described so far. We assume that each tuple has an associated importance level, defined by the type of the tuple and specified by the utility value attached to that type. We denote the tuple type domain as Z, the type of a tuple t as Z(t), and the utility of a tuple t, where Z(t) = z ∈ Z, as V(z). Type domains and their associated utility values can be set based on application needs. In the rest of the paper, an output tuple of the join operation that is obtained by matching tuples t_a and t_b is assumed to contribute a utility value of max(V(Z(t_a)), V(Z(t_b))) to the output. Our approach can also accommodate other functions, like the average, 0.5 · (V(Z(t_a)) + V(Z(t_b))). We denote the frequency of appearance of a tuple of type z in stream S_i as ω_{i,z}, where Σ_{z∈Z} ω_{i,z} = 1. The main idea behind utility-based load shedding is to use a different fraction parameter for each different type of tuple fetched from a different stream, denoted as r_{i,z}, where z ∈ Z and i ∈ {1, 2}. The motivation behind this is to do less load shedding for tuples that provide higher output utility. The extra work done for such tuples is compensated for by doing more load shedding for tuples that provide lower output utility. The expected output utility obtained from comparing a tuple t of type z with a tuple in window W_i is denoted as u_{i,z}, and it is used to determine the r_{i,z} values. In order to formalize this problem, we extend some of the notation from Section 5.1.1. The number of basic windows from W_i whose tuples are all considered for processing against a tuple of type z is denoted as c_e(i, z). The fraction of tuples considered for processing in the last basic window used from W_i is denoted as c_p(i, z); c_p(i, z) is zero if the last used basic window is completely processed. Thus we have:

  c_e(i, z) = ⌊n_i · r_{i,z}⌋
  c_p(i, z) = n_i · r_{i,z} − c_e(i, z)

Then the area under f_i that represents the portion of window W_i processed for a tuple of type z, denoted as p_u(i, z), can be calculated as follows:

  p_u(i, z) = c_p(i, z) · p_{i,s^i_{c_e(i,z)+1}} + Σ_{j=1}^{c_e(i,z)} p_{i,s^i_j}

With these definitions, the maximization of the output utility can be defined formally as

  maximize  Σ_{i=1}^{2} ( λ_ī · (λ_i · w_i) · Σ_{z∈Z} ω_{ī,z} · u_{i,z} · p_u(i, z) )

subject to the processing constraint:

  r · (w_1 + w_2) = Σ_{i=1}^{2} ( w_i · Σ_{z∈Z} ω_{ī,z} · r_{i,z} )

The r value used here is computed by Algorithm 1, as part of rate adaptation. Although the formulation is complex, this is indeed a fractional knapsack problem and has a greedy optimal solution. The problem can be reformulated as follows. Consider I_{i,j,z} as an item that represents the processing of a tuple of type z against basic window B_{i,j}. Item I_{i,j,z} has a volume of λ_1 · λ_2 · ω_{ī,z} · b units, which (assuming that some buffering is performed outside the join) is the number of comparisons made per time unit to process incoming tuples of type z against tuples in B_{i,j}, and a value of λ_1 · λ_2 · ω_{ī,z} · b · u_{i,z} · p_{i,s^i_j} · n_i units, which is the utility gained per time unit from comparing incoming tuples of type z with tuples in B_{i,j}.

The aim is to pick the maximum number of items, where fractional items are acceptable, so that the total value is maximized and the total volume of the picked items is at most λ_1 · λ_2 · r · (w_1 + w_2). r_{i,j,z} ∈ [0, 1] is used to denote how much of item I_{i,j,z} is picked. Note that the number of unknown variables here (the r_{i,j,z} values) is (n_1 + n_2) · |Z|, and the solution of the original problem can be calculated from these variables as r_{i,z} = (1/n_i) · Σ_{j=1}^{n_i} r_{i,j,z}. The values of the fraction variables are determined during join direction adaptation. A simple way to do this is to sort the items based on their value over volume ratios, v_{i,j,z} = u_{i,z} · p_{i,s^i_j} · n_i (note that o_{i,s^i_j} / Σ_{k=1}^{n_i} o_{i,k} can be used as an estimate of p_{i,s^i_j}), and to pick as much as possible of the item that is most valuable per unit volume. However, since the number of items is large, the sort step is costly, especially for a large number of basic windows and large type domains. A more efficient solution, with worst case complexity O(|Z| + (n_1 + n_2) · log |Z|), is described in Algorithm 5, which replaces Algorithm 4.

Algorithm 5: Join Direction Adaptation, Utility-based Shedding

  VJoinDirectionAdapt()
  (1)  upon completion of a RateAdapt() call
  (2)    heap: H
  (3)    for i = 1 to 2
  (4)      foreach z ∈ Z
  (5)        r_{i,z} ← 0
  (6)        v_{i,s^i_1,z} ← u_{i,z} · n_i · o_{i,s^i_1} / Σ_{k=1}^{n_i} o_{i,k}
  (7)    initialize H with {v_{i,s^i_1,z} | i ∈ [1..2], z ∈ Z}
  (8)    a ← λ_1 · λ_2 · r · (w_1 + w_2)
  (9)    while H is not empty
  (10)     take i, j, z such that v_{i,j,z} is the topmost item in H
  (11)     pop the first item from H
  (12)     a ← a − ω_{ī,z} · λ_1 · λ_2 · b
  (13)     if a > 0
  (14)       r_{i,z} ← r_{i,z} + 1/n_i
  (15)     else
  (16)       r_e ← 1 + a / (λ_1 · λ_2 · ω_{ī,z} · b)
  (17)       r_{i,z} ← r_{i,z} + r_e / n_i
  (18)       return
  (19)     if j < n_i
  (20)       v_{i,s^i_{j+1},z} ← u_{i,z} · n_i · o_{i,s^i_{j+1}} / Σ_{k=1}^{n_i} o_{i,k}
  (21)       insert v_{i,s^i_{j+1},z} into H

Algorithm 5 makes use of the s^i_j values, which define an order between the value over volume ratios of items for a fixed type z and window W_i. The algorithm keeps the items representing different streams and types with the highest value over volume ratios (2 · |Z| of them) in a heap. It iteratively picks an item from the heap and replaces it with the item having the next highest value over volume ratio with the same stream and type subscript index. This process continues until the capacity constraint is reached. During this process, the r_{i,z} values are calculated progressively. If the item picked represents window W_i and type z, then r_{i,z} is incremented by 1/n_i, unless the item is picked fractionally, in which case the increment on r_{i,z} is adjusted accordingly.
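
The following Java sketch implements the simple sort-based greedy described above (not the heap-based Algorithm 5). It is an illustration of the fractional knapsack view; the Item class and its fields are assumptions made for the example.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of the sort-based greedy for the fractional knapsack formulation in Section 5.3:
// sort items I_{i,j,z} by value/volume and pick greedily until the comparison budget
// lambda1*lambda2*r*(w1+w2) is exhausted. Algorithm 5 achieves the same result more
// efficiently with a heap; this version is for clarity only.
public class UtilityGreedySketch {
    public static class Item {
        final int window;        // i: 1 or 2
        final int basicWindow;   // priority rank j within W_i
        final int type;          // z
        final double volume;     // comparisons per time unit for this item
        final double value;      // utility per time unit for this item
        double pickedFraction;   // r_{i,j,z} in [0, 1]
        Item(int window, int basicWindow, int type, double volume, double value) {
            this.window = window; this.basicWindow = basicWindow; this.type = type;
            this.volume = volume; this.value = value;
        }
    }

    /** Distributes the budget over the items; returns them with pickedFraction set. */
    public static List<Item> allocate(List<Item> items, double budget) {
        List<Item> sorted = new ArrayList<>(items);
        sorted.sort(Comparator.comparingDouble((Item it) -> it.value / it.volume).reversed());
        double remaining = budget;
        for (Item it : sorted) {
            if (remaining <= 0) break;
            double take = Math.min(1.0, remaining / it.volume);  // fractional pick allowed
            it.pickedFraction = take;
            remaining -= take * it.volume;
        }
        return sorted;
    }
    // r_{i,z} is then recovered as (1/n_i) times the sum of pickedFraction over the
    // items of window i and type z.
}
```
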
6 Experiments

We report three sets of experimental results to demonstrate the effectiveness of the algorithms introduced in this paper. The first set demonstrates the performance of the partial processing-based load shedding step: keeping tuples within the windows and shedding excessive load by partially processing the join through rate adaptation. The second set shows the performance gain in terms of output rate for selective processing, which incorporates time correlation adaptation and join direction adaptation. The effect of the basic window size on performance is also investigated experimentally. The third set of experiments presents results on the utility-based load shedding mechanisms introduced and their ability to maximize output utility under different workloads.

6.1 Experimental Setup

The join operation is implemented as a Java package, named ssjoin.*, and is customizable with respect to the supported features, such as rate adaptation, time correlation adaptation, join direction adaptation, and utility-based load shedding, as well as the various parameters associated with these features. The streams used in the experiments reported in this section are timestamp ordered tuples, where each tuple includes a single attribute that can either be a set, a weighted set, or a single value. The sets are composed of a variable number of items, where each item is an integer in the range [1..L]. The number of items contained in a set follows a normal distribution with mean µ and standard deviation σ; in the experiments, µ is taken as 5. The popularity of items, in terms of how frequently they occur in a set, follows a Zipf distribution with parameter κ. For equi-joins on single-valued attributes, L is taken as 5 with µ = 1 and σ = 0. The time-based correlation between the streams is modeled using two parameters, a time shift parameter denoted as τ and a cycle period parameter denoted as ς. The cycle period is used to change the popularity ranks of items as a function of time. Initially, at time 0, the most popular item is 1, the next 2, and so on. Later, at time T, the most popular item is a = 1 + ⌊L · (T mod ς) / ς⌋, the next a + 1, and so on. The time shift is used to introduce a delay between matching items from different streams. Applying a time shift of τ to one of the streams means that, for that stream, the most popular item at time T is a = 1 + ⌊L · ((T − τ) mod ς) / ς⌋. Figure 5 shows the resulting probability of match distribution when a time delay of τ = (5/8) · ς is applied to stream S_2 and ς = w, where w_1 = w_2 = w. The two histograms represent two different scenarios, in which κ is taken as 0.6 and 0.8, respectively. These settings of the τ and ς parameters are also used in the rest of the experiments, unless otherwise stated.

Figure 5: Probability of match distributions for κ = 0.6 and κ = 0.8.
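
The following sketch shows one way to generate such a workload: Zipf-distributed items whose popularity ranks rotate with the cycle period ς and are shifted by τ for one of the streams. It illustrates the setup described above but is not the generator used in the paper; the class name, the fixed random seed, and the sampling details are assumptions.

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Illustrative workload generator: set-valued tuples whose item popularity follows a
// Zipf distribution, with popularity ranks rotating over a cycle period and shifted in
// time for one of the streams (the tau / cycle-period model of Section 6.1).
public class WorkloadSketch {
    private final int domainSize;          // L
    private final double[] zipfCdf;        // cumulative Zipf probabilities over ranks 1..L
    private final double cycleMillis;      // cycle period
    private final Random random = new Random(42);

    public WorkloadSketch(int domainSize, double kappa, double cycleMillis) {
        this.domainSize = domainSize;
        this.cycleMillis = cycleMillis;
        zipfCdf = new double[domainSize];
        double norm = 0;
        for (int rank = 1; rank <= domainSize; rank++) norm += 1.0 / Math.pow(rank, kappa);
        double acc = 0;
        for (int rank = 1; rank <= domainSize; rank++) {
            acc += (1.0 / Math.pow(rank, kappa)) / norm;
            zipfCdf[rank - 1] = acc;
        }
    }

    /** Draws a set of the given size for a tuple with timestamp now, shifted by tauMillis. */
    public Set<Integer> drawSet(long nowMillis, double tauMillis, int setSize) {
        // Most popular item at this time, per the rotation model of Section 6.1.
        double phase = ((nowMillis - tauMillis) % cycleMillis + cycleMillis) % cycleMillis;
        int mostPopular = 1 + (int) Math.floor(domainSize * phase / cycleMillis);
        Set<Integer> items = new HashSet<>();
        while (items.size() < setSize) {
            int rank = sampleZipfRank();
            int item = 1 + ((mostPopular - 1 + rank - 1) % domainSize); // rank 1 -> most popular
            items.add(item);
        }
        return items;
    }

    private int sampleZipfRank() {          // inverse-CDF sampling of a rank in 1..L
        double u = random.nextDouble();
        int lo = 0, hi = domainSize - 1;
        while (lo < hi) { int mid = (lo + hi) / 2; if (zipfCdf[mid] < u) lo = mid + 1; else hi = mid; }
        return lo + 1;
    }
}
```
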

We change the value of the parameter κ to model varying amounts of skewness in the match probability distributions. Experiments are performed using time-varying stream rates and various window sizes. The default settings of some of the system parameters are as follows: T_r = 5 seconds, T_c = 5 seconds, and fixed default values for δ_r and γ. We place input buffers, sized in seconds, in front of the inputs of the join operation. We report results from overlap and equality joins; other types of joins show similar results. The experiments are performed on an IBM PC with an Intel Pentium 4 processor, using Sun JDK 1.4. For comparison, we also implemented a random drop scheme. It performs load shedding by randomly dropping tuples from the input buffers and performing the join fully with the tuples available in the join windows. It is implemented separately from our selective join framework and does not include any overhead due to adaptations.

6.2 Rate Adaptation

We study the impact of rate adaptation on the output rate of the join operation. For the purpose of the experiments in this subsection, the time shift parameter is set to zero, i.e., τ = 0, so that there is no time shift between the streams and the match probability decreases going from the beginning of the windows to the end. A non-indexed overlap join is used, with a threshold value of 3 and a time-based window on one of the streams. Figure 6 shows the stream rates used (on the left y-axis) as a function of time. The rate of the streams stays at its initial level for a while, then jumps to a higher rate, drops to an intermediate rate, and finally returns to its initial value. Figure 6 also shows (on the right y-axis) how the fraction parameter r adapts to the changing stream rates. The graphs in Figure 7 show the resulting stream output rates as a function of time, with and without rate adaptation; the no-rate-adaptation case corresponds to random tuple dropping. It is observed that rate adaptation improves the output rate when the stream rates increase, which is exactly when tuple dropping starts for the non-adaptive case, and the improvement is visible at both of the elevated stream rates. The ability of rate adaptation to keep the output rate high is mainly due to the time-aligned nature of the streams in this scenario: only the tuples that are closer to the beginning of the window are useful for generating matches, and partial processing uses the beginning part of the window, as dictated by the fraction parameter r. The graphs in Figure 8 plot the average output rates of the join over the period shown in Figure 7 as a function of the skewness parameter κ, for different window sizes. They show that the improvement in output rate provided by rate adaptation increases not only with increasing skewness of the match probability distribution, but also with increasing window sizes. This is because larger windows imply that more load shedding has to be performed.

6.3 Selective Processing

Here, we study the impact of time correlation adaptation and join direction adaptation on the output rate of the join operation. For the purpose of the experiments in this subsection, the time shift parameter is taken as τ = (5/8) · ς. A non-indexed overlap join is used, with a threshold value of 3 and time-based windows on both of the streams. The basic window sizes on both windows are set to 1 second for time correlation adaptation. Figure 9 shows the stream rates used (on the left y-axis) as a function of time. Figure 9 also shows (on the right y-axis) how the fraction parameters r_1 and r_2 adapt to the changing stream rates with join direction adaptation.
Note that the reduction in the fraction parameter values starts with the one (r_1 in this case) corresponding to the window that is less useful in terms of generating output tuples when processed against a newly fetched tuple from the other stream. The graphs in Figure 10 show the resulting stream output rates as a function of time for three different join settings. It is observed that, when the stream rates increase, time correlation adaptation combined with rate adaptation provides an improvement in output rate over the rate adaptation only case. Moreover, applying join direction adaptation on top of time correlation adaptation provides an additional improvement in output rate. The graphs in Figure 11 plot the average output rates of the join as a function of the skewness parameter κ for different join settings. This time, the overlap threshold is set to 4, which results in a lower number of matching tuples. It is observed that the improvement in output rates provided by time correlation and join direction adaptation increases with increasing skewness in the match probability distribution. The increasing skewness does not improve the performance of the rate adaptive-only case, due to its lack of time correlation adaptation, which in turn makes it unable to locate the productive portion of the window for processing, especially when the time lag τ is large and the fraction parameter r is small. To strengthen and extend the observation from Figures 7 and 8 that partial processing is superior to random dropping, and the observation from Figures 10 and 11 that selective processing provides additional improvements in output rates on top of partial processing, in Figure 12 we compare random dropping to selective processing for equi-joins on single-valued attributes. The results are even more remarkable than the results for complex join conditions. Figure 12 plots the output rates of the join as a function of the input rates for random dropping, the rate adaptive case, and the rate and match distribution adaptive case. The figure shows that selective processing with rate and match distribution adaptation provides up to 5 times improvement over random dropping, and up to 35% improvement over the rate adaptive-only case. Note that the output rate first increases with increasing input rates and then shows a decrease with a further increase in input rates. This is mainly due to the simulation setup, where workload generation takes increasingly more processing time with increasing input rates (similar observations are reported by others [4]). As a consequence, the load adaptive nature of the proposed join algorithms results in decreasing the amount of processing performed for the join.


More information

An Entropy-Based Approach to Integrated Information Needs Assessment

An Entropy-Based Approach to Integrated Information Needs Assessment Dstrbuton Statement A: Approved for publc release; dstrbuton s unlmted. An Entropy-Based Approach to ntegrated nformaton Needs Assessment June 8, 2004 Wllam J. Farrell Lockheed Martn Advanced Technology

More information

Meta-heuristics for Multidimensional Knapsack Problems

Meta-heuristics for Multidimensional Knapsack Problems 2012 4th Internatonal Conference on Computer Research and Development IPCSIT vol.39 (2012) (2012) IACSIT Press, Sngapore Meta-heurstcs for Multdmensonal Knapsack Problems Zhbao Man + Computer Scence Department,

More information

Summarizing Data using Bottom-k Sketches

Summarizing Data using Bottom-k Sketches Summarzng Data usng Bottom-k Sketches Edth Cohen AT&T Labs Research 8 Park Avenue Florham Park, NJ 7932, USA edth@research.att.com Ham Kaplan School of Computer Scence Tel Avv Unversty Tel Avv, Israel

More information

Course Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms

Course Introduction. Algorithm 8/31/2017. COSC 320 Advanced Data Structures and Algorithms. COSC 320 Advanced Data Structures and Algorithms Course Introducton Course Topcs Exams, abs, Proects A quc loo at a few algorthms 1 Advanced Data Structures and Algorthms Descrpton: We are gong to dscuss algorthm complexty analyss, algorthm desgn technques

More information

Some Advanced SPC Tools 1. Cumulative Sum Control (Cusum) Chart For the data shown in Table 9-1, the x chart can be generated.

Some Advanced SPC Tools 1. Cumulative Sum Control (Cusum) Chart For the data shown in Table 9-1, the x chart can be generated. Some Advanced SP Tools 1. umulatve Sum ontrol (usum) hart For the data shown n Table 9-1, the x chart can be generated. However, the shft taken place at sample #21 s not apparent. 92 For ths set samples,

More information

FEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur

FEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur FEATURE EXTRACTION Dr. K.Vjayarekha Assocate Dean School of Electrcal and Electroncs Engneerng SASTRA Unversty, Thanjavur613 41 Jont Intatve of IITs and IISc Funded by MHRD Page 1 of 8 Table of Contents

More information

Shared Running Buffer Based Proxy Caching of Streaming Sessions

Shared Running Buffer Based Proxy Caching of Streaming Sessions Shared Runnng Buffer Based Proxy Cachng of Streamng Sessons Songqng Chen, Bo Shen, Yong Yan, Sujoy Basu Moble and Meda Systems Laboratory HP Laboratores Palo Alto HPL-23-47 March th, 23* E-mal: sqchen@cs.wm.edu,

More information

Load Balancing for Hex-Cell Interconnection Network

Load Balancing for Hex-Cell Interconnection Network Int. J. Communcatons, Network and System Scences,,, - Publshed Onlne Aprl n ScRes. http://www.scrp.org/journal/jcns http://dx.do.org/./jcns.. Load Balancng for Hex-Cell Interconnecton Network Saher Manaseer,

More information

Problem Set 3 Solutions

Problem Set 3 Solutions Introducton to Algorthms October 4, 2002 Massachusetts Insttute of Technology 6046J/18410J Professors Erk Demane and Shaf Goldwasser Handout 14 Problem Set 3 Solutons (Exercses were not to be turned n,

More information

Performance Evaluation of Information Retrieval Systems

Performance Evaluation of Information Retrieval Systems Why System Evaluaton? Performance Evaluaton of Informaton Retreval Systems Many sldes n ths secton are adapted from Prof. Joydeep Ghosh (UT ECE) who n turn adapted them from Prof. Dk Lee (Unv. of Scence

More information

Machine Learning: Algorithms and Applications

Machine Learning: Algorithms and Applications 14/05/1 Machne Learnng: Algorthms and Applcatons Florano Zn Free Unversty of Bozen-Bolzano Faculty of Computer Scence Academc Year 011-01 Lecture 10: 14 May 01 Unsupervsed Learnng cont Sldes courtesy of

More information

Analysis of Continuous Beams in General

Analysis of Continuous Beams in General Analyss of Contnuous Beams n General Contnuous beams consdered here are prsmatc, rgdly connected to each beam segment and supported at varous ponts along the beam. onts are selected at ponts of support,

More information

TN348: Openlab Module - Colocalization

TN348: Openlab Module - Colocalization TN348: Openlab Module - Colocalzaton Topc The Colocalzaton module provdes the faclty to vsualze and quantfy colocalzaton between pars of mages. The Colocalzaton wndow contans a prevew of the two mages

More information

Private Information Retrieval (PIR)

Private Information Retrieval (PIR) 2 Levente Buttyán Problem formulaton Alce wants to obtan nformaton from a database, but she does not want the database to learn whch nformaton she wanted e.g., Alce s an nvestor queryng a stock-market

More information

Learning-Based Top-N Selection Query Evaluation over Relational Databases

Learning-Based Top-N Selection Query Evaluation over Relational Databases Learnng-Based Top-N Selecton Query Evaluaton over Relatonal Databases Lang Zhu *, Wey Meng ** * School of Mathematcs and Computer Scence, Hebe Unversty, Baodng, Hebe 071002, Chna, zhu@mal.hbu.edu.cn **

More information

Steps for Computing the Dissimilarity, Entropy, Herfindahl-Hirschman and. Accessibility (Gravity with Competition) Indices

Steps for Computing the Dissimilarity, Entropy, Herfindahl-Hirschman and. Accessibility (Gravity with Competition) Indices Steps for Computng the Dssmlarty, Entropy, Herfndahl-Hrschman and Accessblty (Gravty wth Competton) Indces I. Dssmlarty Index Measurement: The followng formula can be used to measure the evenness between

More information

User Authentication Based On Behavioral Mouse Dynamics Biometrics

User Authentication Based On Behavioral Mouse Dynamics Biometrics User Authentcaton Based On Behavoral Mouse Dynamcs Bometrcs Chee-Hyung Yoon Danel Donghyun Km Department of Computer Scence Department of Computer Scence Stanford Unversty Stanford Unversty Stanford, CA

More information

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung

More information

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields

A mathematical programming approach to the analysis, design and scheduling of offshore oilfields 17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 A mathematcal programmng approach to the analyss, desgn and

More information

Hermite Splines in Lie Groups as Products of Geodesics

Hermite Splines in Lie Groups as Products of Geodesics Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the

More information

Advanced Computer Networks

Advanced Computer Networks Char of Network Archtectures and Servces Department of Informatcs Techncal Unversty of Munch Note: Durng the attendance check a stcker contanng a unque QR code wll be put on ths exam. Ths QR code contans

More information

Module Management Tool in Software Development Organizations

Module Management Tool in Software Development Organizations Journal of Computer Scence (5): 8-, 7 ISSN 59-66 7 Scence Publcatons Management Tool n Software Development Organzatons Ahmad A. Al-Rababah and Mohammad A. Al-Rababah Faculty of IT, Al-Ahlyyah Amman Unversty,

More information

Dynamic Voltage Scaling of Supply and Body Bias Exploiting Software Runtime Distribution

Dynamic Voltage Scaling of Supply and Body Bias Exploiting Software Runtime Distribution Dynamc Voltage Scalng of Supply and Body Bas Explotng Software Runtme Dstrbuton Sungpack Hong EE Department Stanford Unversty Sungjoo Yoo, Byeong Bn, Kyu-Myung Cho, Soo-Kwan Eo Samsung Electroncs Taehwan

More information

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,

More information

Today s Outline. Sorting: The Big Picture. Why Sort? Selection Sort: Idea. Insertion Sort: Idea. Sorting Chapter 7 in Weiss.

Today s Outline. Sorting: The Big Picture. Why Sort? Selection Sort: Idea. Insertion Sort: Idea. Sorting Chapter 7 in Weiss. Today s Outlne Sortng Chapter 7 n Wess CSE 26 Data Structures Ruth Anderson Announcements Wrtten Homework #6 due Frday 2/26 at the begnnng of lecture Proect Code due Mon March 1 by 11pm Today s Topcs:

More information

Cluster Analysis of Electrical Behavior

Cluster Analysis of Electrical Behavior Journal of Computer and Communcatons, 205, 3, 88-93 Publshed Onlne May 205 n ScRes. http://www.scrp.org/ournal/cc http://dx.do.org/0.4236/cc.205.350 Cluster Analyss of Electrcal Behavor Ln Lu Ln Lu, School

More information

X- Chart Using ANOM Approach

X- Chart Using ANOM Approach ISSN 1684-8403 Journal of Statstcs Volume 17, 010, pp. 3-3 Abstract X- Chart Usng ANOM Approach Gullapall Chakravarth 1 and Chaluvad Venkateswara Rao Control lmts for ndvdual measurements (X) chart are

More information

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr)

Helsinki University Of Technology, Systems Analysis Laboratory Mat Independent research projects in applied mathematics (3 cr) Helsnk Unversty Of Technology, Systems Analyss Laboratory Mat-2.08 Independent research projects n appled mathematcs (3 cr) "! #$&% Antt Laukkanen 506 R ajlaukka@cc.hut.f 2 Introducton...3 2 Multattrbute

More information

CS 534: Computer Vision Model Fitting

CS 534: Computer Vision Model Fitting CS 534: Computer Vson Model Fttng Sprng 004 Ahmed Elgammal Dept of Computer Scence CS 534 Model Fttng - 1 Outlnes Model fttng s mportant Least-squares fttng Maxmum lkelhood estmaton MAP estmaton Robust

More information

Programming in Fortran 90 : 2017/2018

Programming in Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Exercse 1 : Evaluaton of functon dependng on nput Wrte a program who evaluate the functon f (x,y) for any two user specfed values

More information

Run-Time Operator State Spilling for Memory Intensive Long-Running Queries

Run-Time Operator State Spilling for Memory Intensive Long-Running Queries Run-Tme Operator State Spllng for Memory Intensve Long-Runnng Queres Bn Lu, Yal Zhu, and lke A. Rundenstener epartment of Computer Scence, Worcester Polytechnc Insttute Worcester, Massachusetts, USA {bnlu,

More information

Chapter 6 Programmng the fnte element method Inow turn to the man subject of ths book: The mplementaton of the fnte element algorthm n computer programs. In order to make my dscusson as straghtforward

More information

Simulation Based Analysis of FAST TCP using OMNET++

Simulation Based Analysis of FAST TCP using OMNET++ Smulaton Based Analyss of FAST TCP usng OMNET++ Umar ul Hassan 04030038@lums.edu.pk Md Term Report CS678 Topcs n Internet Research Sprng, 2006 Introducton Internet traffc s doublng roughly every 3 months

More information

Smoothing Spline ANOVA for variable screening

Smoothing Spline ANOVA for variable screening Smoothng Splne ANOVA for varable screenng a useful tool for metamodels tranng and mult-objectve optmzaton L. Rcco, E. Rgon, A. Turco Outlne RSM Introducton Possble couplng Test case MOO MOO wth Game Theory

More information

Can We Beat the Prefix Filtering? An Adaptive Framework for Similarity Join and Search

Can We Beat the Prefix Filtering? An Adaptive Framework for Similarity Join and Search Can We Beat the Prefx Flterng? An Adaptve Framework for Smlarty Jon and Search Jannan Wang Guolang L Janhua Feng Department of Computer Scence and Technology, Tsnghua Natonal Laboratory for Informaton

More information

Report on On-line Graph Coloring

Report on On-line Graph Coloring 2003 Fall Semester Comp 670K Onlne Algorthm Report on LO Yuet Me (00086365) cndylo@ust.hk Abstract Onlne algorthm deals wth data that has no future nformaton. Lots of examples demonstrate that onlne algorthm

More information

Support Vector Machines

Support Vector Machines /9/207 MIST.6060 Busness Intellgence and Data Mnng What are Support Vector Machnes? Support Vector Machnes Support Vector Machnes (SVMs) are supervsed learnng technques that analyze data and recognze patterns.

More information

A Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems

A Unified Framework for Semantics and Feature Based Relevance Feedback in Image Retrieval Systems A Unfed Framework for Semantcs and Feature Based Relevance Feedback n Image Retreval Systems Ye Lu *, Chunhu Hu 2, Xngquan Zhu 3*, HongJang Zhang 2, Qang Yang * School of Computng Scence Smon Fraser Unversty

More information

DESIGNING TRANSMISSION SCHEDULES FOR WIRELESS AD HOC NETWORKS TO MAXIMIZE NETWORK THROUGHPUT

DESIGNING TRANSMISSION SCHEDULES FOR WIRELESS AD HOC NETWORKS TO MAXIMIZE NETWORK THROUGHPUT DESIGNING TRANSMISSION SCHEDULES FOR WIRELESS AD HOC NETWORKS TO MAXIMIZE NETWORK THROUGHPUT Bran J. Wolf, Joseph L. Hammond, and Harlan B. Russell Dept. of Electrcal and Computer Engneerng, Clemson Unversty,

More information

Mathematics 256 a course in differential equations for engineering students

Mathematics 256 a course in differential equations for engineering students Mathematcs 56 a course n dfferental equatons for engneerng students Chapter 5. More effcent methods of numercal soluton Euler s method s qute neffcent. Because the error s essentally proportonal to the

More information

Self-tuning Histograms: Building Histograms Without Looking at Data

Self-tuning Histograms: Building Histograms Without Looking at Data Self-tunng Hstograms: Buldng Hstograms Wthout Lookng at Data Ashraf Aboulnaga Computer Scences Department Unversty of Wsconsn - Madson ashraf@cs.wsc.edu Surajt Chaudhur Mcrosoft Research surajtc@mcrosoft.com

More information

AADL : about scheduling analysis

AADL : about scheduling analysis AADL : about schedulng analyss Schedulng analyss, what s t? Embedded real-tme crtcal systems have temporal constrants to meet (e.g. deadlne). Many systems are bult wth operatng systems provdng multtaskng

More information

CACHE MEMORY DESIGN FOR INTERNET PROCESSORS

CACHE MEMORY DESIGN FOR INTERNET PROCESSORS CACHE MEMORY DESIGN FOR INTERNET PROCESSORS WE EVALUATE A SERIES OF THREE PROGRESSIVELY MORE AGGRESSIVE ROUTING-TABLE CACHE DESIGNS AND DEMONSTRATE THAT THE INCORPORATION OF HARDWARE CACHES INTO INTERNET

More information

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009.

Assignment # 2. Farrukh Jabeen Algorithms 510 Assignment #2 Due Date: June 15, 2009. Farrukh Jabeen Algorthms 51 Assgnment #2 Due Date: June 15, 29. Assgnment # 2 Chapter 3 Dscrete Fourer Transforms Implement the FFT for the DFT. Descrbed n sectons 3.1 and 3.2. Delverables: 1. Concse descrpton

More information

Optimal Workload-based Weighted Wavelet Synopses

Optimal Workload-based Weighted Wavelet Synopses Optmal Workload-based Weghted Wavelet Synopses Yoss Matas School of Computer Scence Tel Avv Unversty Tel Avv 69978, Israel matas@tau.ac.l Danel Urel School of Computer Scence Tel Avv Unversty Tel Avv 69978,

More information

Intro. Iterators. 1. Access

Intro. Iterators. 1. Access Intro Ths mornng I d lke to talk a lttle bt about s and s. We wll start out wth smlartes and dfferences, then we wll see how to draw them n envronment dagrams, and we wll fnsh wth some examples. Happy

More information

Sequential search. Building Java Programs Chapter 13. Sequential search. Sequential search

Sequential search. Building Java Programs Chapter 13. Sequential search. Sequential search Sequental search Buldng Java Programs Chapter 13 Searchng and Sortng sequental search: Locates a target value n an array/lst by examnng each element from start to fnsh. How many elements wll t need to

More information

Channel 0. Channel 1 Channel 2. Channel 3 Channel 4. Channel 5 Channel 6 Channel 7

Channel 0. Channel 1 Channel 2. Channel 3 Channel 4. Channel 5 Channel 6 Channel 7 Optmzed Regonal Cachng for On-Demand Data Delvery Derek L. Eager Mchael C. Ferrs Mary K. Vernon Unversty of Saskatchewan Unversty of Wsconsn Madson Saskatoon, SK Canada S7N 5A9 Madson, WI 5376 eager@cs.usask.ca

More information

Avoiding congestion through dynamic load control

Avoiding congestion through dynamic load control Avodng congeston through dynamc load control Vasl Hnatyshn, Adarshpal S. Seth Department of Computer and Informaton Scences, Unversty of Delaware, Newark, DE 976 ABSTRACT The current best effort approach

More information

Lecture 5: Multilayer Perceptrons

Lecture 5: Multilayer Perceptrons Lecture 5: Multlayer Perceptrons Roger Grosse 1 Introducton So far, we ve only talked about lnear models: lnear regresson and lnear bnary classfers. We noted that there are functons that can t be represented

More information

Parallel matrix-vector multiplication

Parallel matrix-vector multiplication Appendx A Parallel matrx-vector multplcaton The reduced transton matrx of the three-dmensonal cage model for gel electrophoress, descrbed n secton 3.2, becomes excessvely large for polymer lengths more

More information

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty

More information

Some material adapted from Mohamed Younis, UMBC CMSC 611 Spr 2003 course slides Some material adapted from Hennessy & Patterson / 2003 Elsevier

Some material adapted from Mohamed Younis, UMBC CMSC 611 Spr 2003 course slides Some material adapted from Hennessy & Patterson / 2003 Elsevier Some materal adapted from Mohamed Youns, UMBC CMSC 611 Spr 2003 course sldes Some materal adapted from Hennessy & Patterson / 2003 Elsever Scence Performance = 1 Executon tme Speedup = Performance (B)

More information

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics Introducton G10 NAG Fortran Lbrary Chapter Introducton G10 Smoothng n Statstcs Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Smoothng Methods... 2 2.2 Smoothng Splnes and Regresson

More information

Help for Time-Resolved Analysis TRI2 version 2.4 P Barber,

Help for Time-Resolved Analysis TRI2 version 2.4 P Barber, Help for Tme-Resolved Analyss TRI2 verson 2.4 P Barber, 22.01.10 Introducton Tme-resolved Analyss (TRA) becomes avalable under the processng menu once you have loaded and selected an mage that contans

More information

Analysis of Collaborative Distributed Admission Control in x Networks

Analysis of Collaborative Distributed Admission Control in x Networks 1 Analyss of Collaboratve Dstrbuted Admsson Control n 82.11x Networks Thnh Nguyen, Member, IEEE, Ken Nguyen, Member, IEEE, Lnha He, Member, IEEE, Abstract Wth the recent surge of wreless home networks,

More information

Improving Low Density Parity Check Codes Over the Erasure Channel. The Nelder Mead Downhill Simplex Method. Scott Stransky

Improving Low Density Parity Check Codes Over the Erasure Channel. The Nelder Mead Downhill Simplex Method. Scott Stransky Improvng Low Densty Party Check Codes Over the Erasure Channel The Nelder Mead Downhll Smplex Method Scott Stransky Programmng n conjuncton wth: Bors Cukalovc 18.413 Fnal Project Sprng 2004 Page 1 Abstract

More information

Feature Reduction and Selection

Feature Reduction and Selection Feature Reducton and Selecton Dr. Shuang LIANG School of Software Engneerng TongJ Unversty Fall, 2012 Today s Topcs Introducton Problems of Dmensonalty Feature Reducton Statstc methods Prncpal Components

More information

Query and Update Load Shedding With MobiQual

Query and Update Load Shedding With MobiQual Dodda Sudeep et al,int.j.computer Technology & Applcatons,Vol 3 (1), 470-474 Query and Update Load Sheddng Wth MobQual Dodda Sudeep 1, Bala Krshna 2 1 Pursung M.Tech(CS), Nalanda Insttute of Engneerng

More information

Design and Analysis of Algorithms

Design and Analysis of Algorithms Desgn and Analyss of Algorthms Heaps and Heapsort Reference: CLRS Chapter 6 Topcs: Heaps Heapsort Prorty queue Huo Hongwe Recap and overvew The story so far... Inserton sort runnng tme of Θ(n 2 ); sorts

More information

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision SLAM Summer School 2006 Practcal 2: SLAM usng Monocular Vson Javer Cvera, Unversty of Zaragoza Andrew J. Davson, Imperal College London J.M.M Montel, Unversty of Zaragoza. josemar@unzar.es, jcvera@unzar.es,

More information

y and the total sum of

y and the total sum of Lnear regresson Testng for non-lnearty In analytcal chemstry, lnear regresson s commonly used n the constructon of calbraton functons requred for analytcal technques such as gas chromatography, atomc absorpton

More information

Assembler. Building a Modern Computer From First Principles.

Assembler. Building a Modern Computer From First Principles. Assembler Buldng a Modern Computer From Frst Prncples www.nand2tetrs.org Elements of Computng Systems, Nsan & Schocken, MIT Press, www.nand2tetrs.org, Chapter 6: Assembler slde Where we are at: Human Thought

More information

Biostatistics 615/815

Biostatistics 615/815 The E-M Algorthm Bostatstcs 615/815 Lecture 17 Last Lecture: The Smplex Method General method for optmzaton Makes few assumptons about functon Crawls towards mnmum Some recommendatons Multple startng ponts

More information

Greedy Technique - Definition

Greedy Technique - Definition Greedy Technque Greedy Technque - Defnton The greedy method s a general algorthm desgn paradgm, bult on the follong elements: confguratons: dfferent choces, collectons, or values to fnd objectve functon:

More information

VRT012 User s guide V0.1. Address: Žirmūnų g. 27, Vilnius LT-09105, Phone: (370-5) , Fax: (370-5) ,

VRT012 User s guide V0.1. Address: Žirmūnų g. 27, Vilnius LT-09105, Phone: (370-5) , Fax: (370-5) , VRT012 User s gude V0.1 Thank you for purchasng our product. We hope ths user-frendly devce wll be helpful n realsng your deas and brngng comfort to your lfe. Please take few mnutes to read ths manual

More information

Related-Mode Attacks on CTR Encryption Mode

Related-Mode Attacks on CTR Encryption Mode Internatonal Journal of Network Securty, Vol.4, No.3, PP.282 287, May 2007 282 Related-Mode Attacks on CTR Encrypton Mode Dayn Wang, Dongda Ln, and Wenlng Wu (Correspondng author: Dayn Wang) Key Laboratory

More information

Virtual Machine Migration based on Trust Measurement of Computer Node

Virtual Machine Migration based on Trust Measurement of Computer Node Appled Mechancs and Materals Onlne: 2014-04-04 ISSN: 1662-7482, Vols. 536-537, pp 678-682 do:10.4028/www.scentfc.net/amm.536-537.678 2014 Trans Tech Publcatons, Swtzerland Vrtual Machne Mgraton based on

More information

CE 221 Data Structures and Algorithms

CE 221 Data Structures and Algorithms CE 1 ata Structures and Algorthms Chapter 4: Trees BST Text: Read Wess, 4.3 Izmr Unversty of Economcs 1 The Search Tree AT Bnary Search Trees An mportant applcaton of bnary trees s n searchng. Let us assume

More information

Efficient Broadcast Disks Program Construction in Asymmetric Communication Environments

Efficient Broadcast Disks Program Construction in Asymmetric Communication Environments Effcent Broadcast Dsks Program Constructon n Asymmetrc Communcaton Envronments Eleftheros Takas, Stefanos Ougaroglou, Petros copoltds Department of Informatcs, Arstotle Unversty of Thessalonk Box 888,

More information

Comparison of Heuristics for Scheduling Independent Tasks on Heterogeneous Distributed Environments

Comparison of Heuristics for Scheduling Independent Tasks on Heterogeneous Distributed Environments Comparson of Heurstcs for Schedulng Independent Tasks on Heterogeneous Dstrbuted Envronments Hesam Izakan¹, Ath Abraham², Senor Member, IEEE, Václav Snášel³ ¹ Islamc Azad Unversty, Ramsar Branch, Ramsar,

More information

arxiv: v3 [cs.ds] 7 Feb 2017

arxiv: v3 [cs.ds] 7 Feb 2017 : A Two-stage Sketch for Data Streams Tong Yang 1, Lngtong Lu 2, Ybo Yan 1, Muhammad Shahzad 3, Yulong Shen 2 Xaomng L 1, Bn Cu 1, Gaogang Xe 4 1 Pekng Unversty, Chna. 2 Xdan Unversty, Chna. 3 North Carolna

More information

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning

Outline. Type of Machine Learning. Examples of Application. Unsupervised Learning Outlne Artfcal Intellgence and ts applcatons Lecture 8 Unsupervsed Learnng Professor Danel Yeung danyeung@eee.org Dr. Patrck Chan patrckchan@eee.org South Chna Unversty of Technology, Chna Introducton

More information

Topology Design using LS-TaSC Version 2 and LS-DYNA

Topology Design using LS-TaSC Version 2 and LS-DYNA Topology Desgn usng LS-TaSC Verson 2 and LS-DYNA Wllem Roux Lvermore Software Technology Corporaton, Lvermore, CA, USA Abstract Ths paper gves an overvew of LS-TaSC verson 2, a topology optmzaton tool

More information

An Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation

An Iterative Solution Approach to Process Plant Layout using Mixed Integer Optimisation 17 th European Symposum on Computer Aded Process Engneerng ESCAPE17 V. Plesu and P.S. Agach (Edtors) 2007 Elsever B.V. All rghts reserved. 1 An Iteratve Soluton Approach to Process Plant Layout usng Mxed

More information

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes SPH3UW Unt 7.3 Sphercal Concave Mrrors Page 1 of 1 Notes Physcs Tool box Concave Mrror If the reflectng surface takes place on the nner surface of the sphercal shape so that the centre of the mrror bulges

More information

ELEC 377 Operating Systems. Week 6 Class 3

ELEC 377 Operating Systems. Week 6 Class 3 ELEC 377 Operatng Systems Week 6 Class 3 Last Class Memory Management Memory Pagng Pagng Structure ELEC 377 Operatng Systems Today Pagng Szes Vrtual Memory Concept Demand Pagng ELEC 377 Operatng Systems

More information

3. CR parameters and Multi-Objective Fitness Function

3. CR parameters and Multi-Objective Fitness Function 3 CR parameters and Mult-objectve Ftness Functon 41 3. CR parameters and Mult-Objectve Ftness Functon 3.1. Introducton Cogntve rados dynamcally confgure the wreless communcaton system, whch takes beneft

More information