Adaptive Load Shedding for Windowed Stream Joins

Buğra Gedik (College of Computing, GaTech) bgedik@cc.gatech.edu
Kun-Lung Wu, Philip Yu (T.J. Watson Research, IBM) {klwu,psyu}@us.ibm.com
Ling Liu (College of Computing, GaTech) lingliu@cc.gatech.edu

ABSTRACT

We present an adaptive load shedding approach for windowed stream joins. In contrast to the conventional approach of dropping tuples from the input streams, we explore the concept of selective processing for load shedding, focusing on costly stream joins such as those over set-valued or weighted set-valued attributes. The main idea of our adaptive load shedding approach is two-fold. First, we allow stream tuples to be stored in the windows and shed excessive CPU load by performing the stream join operations, not on the entire set of tuples within the windows, but on a dynamically changing subset of tuples that are highly beneficial. Second, we support such dynamic selective processing through three forms of runtime adaptations: By adaptation to input stream rates, we perform partial processing based load shedding and dynamically determine the fraction of the windows to be processed by comparing the tuple consumption rate of the join operation to the incoming stream rates. By adaptation to time correlation between the streams, we dynamically determine the number of basic windows to be used and prioritize the tuples for selective processing, encouraging CPU-limited execution of stream joins in high priority basic windows. By adaptation to join directions, we dynamically determine the most beneficial direction to perform stream joins in order to process more useful tuples under heavy load conditions and boost the utility or number of output tuples produced. Our load shedding framework not only enables us to integrate utility-based load shedding with time correlation-based load shedding, but more importantly, it also allows load shedding to be adaptive to various dynamic stream properties. Inverted indexes are used to further speed up the execution of stream joins based on set-valued attributes.
Experiments are conducted to evaluate the effectiveness of our adaptive load shedding approach in terms of output rate and utility.

1. INTRODUCTION

With the ever increasing rate of digital information available from on-line sources and networked sensing devices [7], the management of bursty and unpredictable data streams has become a challenging problem. It requires solutions that will enable applications to effectively access and extract information from such data streams. A promising solution for this problem is to use declarative query processing engines specialized for handling data streams, such as data stream management systems (DSMS), exemplified by Aurora [5], STREAM [], and TelegraphCQ [7]. Joins are key operations in any type of query processing engine. Below we list some real-life application examples of stream joins. We return to these examples when we discuss assumptions about the characteristics of the joined streams in later sections.

Finding similar news items from two different sources: Assuming that news items from two different sources (such as CNN, Reuters) are represented by weighted keywords (the join attribute) in their respective streams, we can perform a windowed inner product join on them to find similar news items.

Finding correlated attacks from two different alert streams: Assuming that alerts from two different sources are represented by tuples in the form (source, target, {attack descriptors}, time) in their respective streams, we can perform a windowed overlap join on attack descriptors to find correlated attacks.
Finding correlation between phone calls and stock trading: Assuming that phone call streams are represented as {..., (P_a, P_b, t), ...}, where (P_a, P_b, t) means P_a calls P_b at time t, and stock trading streams are represented as {..., (P_b, S_x, t'), ...}, where (P_b, S_x, t') means P_b trades S_x at time t'; we can perform a windowed equi-join on the person attribute to find hints, such as: P_a hints S_x to P_b in the phone call.

As a result, joins on unbounded data streams have recently enjoyed strong interest in data stream management research [, 4, ]. This is mainly due to the fact that most of the traditional join algorithms are blocking operations. They need to perform a scan on one of the inputs to produce all result tuples that match with a given tuple from the other input. However, data streams are unbounded, thus blocking is not an option. Several proposals have been put forth to address the problem of blocking joins, and they vary depending on the concrete applications at hand. One natural way of handling joins on infinite streams is to use sliding windows. In a windowed stream join, a tuple from one stream is joined with only the tuples currently available in the window of the other stream. A sliding window can be defined as a time-based or count-based window. A time-based window contains the tuples that arrived within the last so-many seconds, whereas a count-based window contains the last so-many tuples. Windows can be either user defined, in which case we have fixed windows, or system-defined and thus flexible, in which case the system uses the available memory to maximize the output size of the join. Another way of handling the problem of blocking joins is to use punctuated streams [4],

in which punctuations that give hints about the rest of the stream are used to prevent blocking. The two-way stream joins with user defined time-based windows constitute one of the most common join types in the data stream management research to date [, , 4]. In order to keep up with the incoming rates of streams, load shedding is usually needed in stream processing systems. Several factors may contribute to the demand for load shedding, including (a) bursty and unpredictable rates of the incoming streams; (b) large window sizes; and (c) costly join conditions. Data streams can be unpredictable in nature [5] and incoming stream rates tend to soar during peak times. A high stream rate requires more resources for performing a windowed join, due to both the increased number of tuples received per unit time and the increased number of tuples within a fixed-sized time window. Similarly, large window sizes imply that more tuples are needed for processing a windowed join. Costly join conditions typically require more CPU time, such as join conditions defined on set-valued and weighted set-valued attributes. A set-valued attribute naturally occurs when an attribute takes more than one value from a domain. For instance, an attribute flagColor can take the value {white, red, green}. Commonly used join conditions on set-valued attributes include subset (⊆), equality (=), superset (⊇), and overlap (θ_k). An overlap join, θ_k, finds the pairs of tuples with set-valued attributes that share at least k items. For instance, an attribute requiredSkills can possibly take the values {Java, Oracle, C++, XML} and {C, Java, BSc, Oracle}, where the overlap of these two set values has the items {Java, Oracle}. A weighted set value provides a stronger model to represent different objects. Any sparse point from a large vector space can be compactly represented as a set of weighted items. For instance, a document can be represented as a vector where dimensions represent words and coordinates represent weights of words (e.g., assigned by tf-idf weighting [6]).
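To make the overlap condition concrete, the θ_k predicate over two set-valued attributes can be evaluated with a simple merge of their sorted item lists. This sketch and its names are our own illustration, not code from the paper:

```python
def overlap_at_least_k(a, b, k):
    """theta_k predicate: do sorted item lists a and b share >= k items?
    a and b are assumed to be sorted lists of comparable items."""
    i = j = shared = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            shared += 1
            if shared >= k:          # early exit once k shared items found
                return True
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return False

# The requiredSkills example from the text: overlap is {Java, Oracle}
s1 = sorted({"Java", "Oracle", "C++", "XML"})
s2 = sorted({"C", "Java", "BSc", "Oracle"})
```

With these two sets, the predicate holds for k = 2 but fails for k = 3, matching the two-item overlap noted above.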
This vector can be converted into a weighted set, where each item-weight pair in the set corresponds to a non-zero weighted word. Similarly, a multimedia object can be represented as a feature vector and then converted into a weighted set. A typical join condition for weighted set-valued attributes is the inner product of their vector representations. An inner product join with threshold value d tries to find pairs of attributes whose inner product is larger than or equal to d.

In this paper we propose an adaptive load shedding framework for costly windowed stream joins. The main idea is to reduce the amount of CPU load by judiciously performing joins on a selective subset of high-valued tuples from the windows, and making the selection decision through dynamic adaptations to incoming stream rates, time-based correlation between streams, and join directions. Given that the output of a windowed stream join with load shedding is only a subset of the output of an off-line join, the goal of our load shedding, in the context of windowed stream joins, is to shed load in such a way that the number of output tuples produced, or the utility gained by the produced tuples, is maximized. Maximizing the utility of the output tuples produced is especially important when certain tuples are more valuable than others.

Summary of Contributions: In the rest of the paper we present an adaptive load shedding framework for windowed stream joins, aiming at maximizing both the output rate and the output utility of stream joins. We focus on costly stream joins such as those over set-valued or weighted set-valued attributes. Our approach has several unique characteristics. First, instead of dropping tuples from the input streams as proposed in many existing approaches, our adaptive load shedding framework follows a selective processing methodology by keeping tuples within the windows, but processing only a subset of them, when they are more useful, based on time-based correlation between the streams. Second, our approach achieves load shedding by performing adaptation in three dimensions.
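The inner product condition over weighted sets can be sketched as follows. This is our own minimal illustration (representing a weighted set as a dict from item to weight), not the paper's implementation:

```python
def inner_product(u, v):
    """Inner product of two weighted sets, each a dict item -> weight
    (the sparse-vector representation described above)."""
    if len(u) > len(v):
        u, v = v, u  # iterate over the smaller set for efficiency
    return sum(w * v[item] for item, w in u.items() if item in v)

def inner_product_match(u, v, d):
    """Join condition: true iff the inner product of u and v is >= d."""
    return inner_product(u, v) >= d

# Hypothetical tf-idf-style weighted sets for two documents
doc1 = {"stream": 0.5, "join": 0.8}
doc2 = {"join": 0.5, "index": 0.2}
```

Here the inner product is 0.8 * 0.5 = 0.4, so the pair joins under threshold d = 0.3 but not under d = 0.5.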
Through rate adaptation, our stream join approach adapts to the incoming stream rates to shed load by adjusting the amount of selective processing. Through time correlation adaptation, the selective processing of stream joins adapts to the time-based correlation between the streams through the use of basic windows, prioritized based on the match probability density function learned from the analysis of the streams. The learning is done by performing full processing instead of selective processing for a sampled part of the stream. Through join direction adaptation, our approach dynamically determines the most beneficial join direction by moving from symmetric join to partial symmetric and partial asymmetric join as dictated by the load on the system. Third but not least, our selective processing with three levels of adaptations enables a coherent integration with utility-based load shedding, in order to maximize the utility of the output tuples without resorting to dropping tuples in the input streams. We apply inverted indexes, commonly used in set-based join processing [8, ], to our windowed stream joins to speed up the selective processing of joins. We include a set of experiments conducted to evaluate the effectiveness of our adaptive load shedding approach. Our experimental results show that the three levels of adaptations can effectively shed the load in the presence of bursty and unpredictable rates of the incoming streams, large window sizes, and costly join conditions, such as those on set-valued attributes.

2. ALTERNATIVE APPROACHES AND RELATED WORK

We can broadly divide the related work on load shedding in windowed stream joins into two categories, based on the metric being optimized. The work in the first category aims at maximizing the utility of the output produced. Different tuples may have different importance values based on the application. For instance, in the news join example, certain types of news, e.g., security news, may be of higher value, and similarly in the stock trading example, phone calls from insiders may be of higher interest when compared to calls from regulars.
In this case, an output from the join operator that contains highly-valued tuples is preferable to a higher rate output generated from lesser-valued tuples. The work presented in [3] uses user-specified utility specifications to drop tuples with low utility values from the input streams. We refer to this type of load shedding as utility-based load shedding, also referred to as semantic load shedding in the literature. The work in the second category aims at maximizing the number of output tuples produced [9, 4, ]. This can be achieved through rate reduction on the source streams, i.e., dropping tuples from the input streams, as suggested in [6, 4]. The work presented in [4] investigates algorithms for evaluating moving window joins over pairs of unbounded streams. Although the main focus of [4] is not on load shedding, scenarios where system resources are insufficient to keep up with the input streams are also considered.

There are several other works related to load shedding in DSMSs in general, including memory allocation among query operators [3] or inter-operator queues [9], load shedding for aggregation queries [4], and overload-sensitive management of archived streams [8]. In summary, most of the existing techniques used for shedding load are tuple dropping for CPU-limited scenarios and memory allocation among windows for memory-limited scenarios. However, dropping tuples from the input streams without paying attention to the selectivity of such tuples may result in a suboptimal solution. Based on this observation, heuristics that take into account the selectivity of the tuples are proposed in [9]. A different approach, called age-based load shedding, is proposed recently in [] for performing memory-limited stream joins. This work is based on the observation that there exists a time-based correlation between the streams.

[Figure 1: Examples of match probability density functions. Two cases are shown, each plotting a pdf against time in the window, with a tuple drop time marked.]

Concretely, the probability of having a match between a tuple just received from one stream and a tuple residing in the window of the opposite stream may change based on the difference between the timestamps of the tuples (assuming timestamps are assigned based on the arrival times of the tuples at the query engine). Under this observation, memory is conserved by keeping a tuple in the window from its reception until the average rate of output tuples generated using this tuple reaches its maximum value. For instance, in Figure 1, case I, the tuples can be kept in the window until they reach the vertical line marked as the tuple drop time. This effectively cuts down the memory needed to store the tuples within the window, and yet produces an output close to the actual output without window reduction. Obviously, knowing that the distribution of the incoming streams has its peak at the beginning of the window, the age-based window reduction can be effective for shedding memory load. A natural question to ask is: Can the age-based window reduction approach of [] be used to shed CPU load?
This is a valid question, because reducing the window size also decreases the number of comparisons that have to be made in order to evaluate the join. However, as illustrated in Figure 1, case II, this technique cannot directly extend to the CPU-limited case where memory is not the constraint. When the distribution does not have its peak close to the beginning of the window, the window reduction approach has to keep tuples until they are close to the end of the window. As a result, tuples that are close to the beginning of the window, and thus are not contributing much to the output, will be processed until the peak is reached close to the end of the window. This observation points out two important facts. First, time-based correlation between the windowed streams can play an important role in load shedding. Second, the window reduction technique that is effective for utilizing time-based correlation to shed memory load is not suitable for CPU load shedding, especially when the distribution of the incoming streams is unknown or unpredictable. With the above analysis in mind, we propose an adaptive load shedding framework that is capable of performing selective processing of tuples in the stream windows by dynamic adaptation to input stream rates, time-based correlations between the streams, and the profitability of different join directions. To our knowledge, our load shedding approach is the only one that can handle arbitrary time correlations and at the same time support maximization of output utility.

3. OVERVIEW

Unlike the conventional load shedding approach of dropping tuples from the input streams, our adaptive load shedding framework encourages stream tuples to be kept in the windows and sheds the CPU load by performing the stream joins on a dynamically changing subset of tuples that are highly beneficial, instead of on the entire set of tuples stored within the windows. This allows us to exploit the characteristics of stream applications that exhibit time-based correlation between the streams.
Concretely, we assume that there exists a non-flat distribution of the probability of match between a newly received tuple and the other tuples in the opposite window, depending on the difference between the timestamps of the tuples. There are several reasons behind this assumption. First, variable delays can exist between the streams as a result of differences between the communication overhead of receiving tuples from different sources []. Second and more importantly, there may exist variable delays between related events from different sources. For instance, in the news join example, different news agencies are expected to have different reaction times due to differences in their news collection and publishing processes. In the stock trading example, there will be a time delay between the phone call containing the hint and the action of buying the hinted stock. In the correlated attacks example, different parts of the network may have been attacked at different times. Note that the effects of time correlation on data stream joins are to some extent analogous to the effects of the time of data creation in data warehouses, which are exploited by join algorithms such as Drag-Join [3]. Although our load shedding framework is based on the assumption that the memory resource is sufficient, we want to point out two important observations. First, with increasing input stream rates and larger stream window sizes, it is quite common that the processing resources (CPU) become limited before memory does. Second, it is interesting to note that, even under limited memory, our adaptive load shedding approach can be used to effectively shed the excessive CPU load after window reduction is performed for handling the memory constraints.

3.1 Technical Highlights

Our load shedding approach is best understood through its two core mechanisms, each of which answers a fundamental question on adaptive load shedding without tuple dropping. The first mechanism is called partial processing, and it answers the question of how much we can process given a window of stream tuples.
The factors to be considered in answering this question include the performance of the stream join operation under the current system load and the current incoming stream rates. In particular, partial processing dynamically adjusts the amount of load shedding to be performed through rate adaptation. The second mechanism is called selective processing, and it answers the question of what we should process, given the constraint on the amount of processing defined at the partial processing phase. The factors that influence the answer to this question include the characteristics of stream

window segments, the profitability of join directions, and the utility of different stream tuples. Selective processing extends partial processing to intelligently select the tuples to be used during join processing under heavy system load, with the goal of maximizing the output rate or the output utility of the stream join.

Table 1: Notations used throughout the paper

  t         tuple
  T(t)      timestamp of the tuple t
  S_i       input stream i
  W_i       window over S_i
  w_i       window size of W_i in seconds
  λ_i       rate of S_i in tuples per second
  B_{i,j}   basic window j in W_i
  b         basic window size in seconds
  n_i       number of basic windows in W_i
  r         fraction parameter
  δ_r       fraction boost factor
  r_i       fraction parameter for W_i
  r_{i,z}   fraction parameter for W_i for a tuple of type z
  f_i(t)    match probability density function for W_i
  p_{i,j}   probability of match for B_{i,j}
  o_{i,j}   expected output from comparing a tuple t with a tuple in B_{i,j}
  s_{i,j}   k, where o_{i,k} is the jth item in the sorted list {o_{i,l} | l ∈ [1..n_i]}
  u_{i,z}   expected utility from comparing a tuple t of type z with a tuple in W_i
  Z         tuple type domain
  Z(t)      type of a tuple
  V(z)      utility of a tuple of type z
  ω_{i,z}   frequency of a tuple of type z in S_i
  T_r       rate adaptation period
  T_c       time correlation adaptation period
  γ         sampling probability

Before describing the details of partial processing and selective processing, we first briefly review the basic concepts involved in processing windowed stream joins and establish the notations that will be used throughout the paper.

[Figure 2: Stream Join Example]

3.2 Basic Concepts and Notations

A two-way windowed stream join operation takes two input streams, denoted as S_1 and S_2, performs the stream join, and generates the output. For notational convenience, we denote the opposite stream of stream i (i = 1, 2) as stream ī. The sliding window defined over stream S_i is denoted as W_i and has size w_i in terms of seconds. We denote a tuple as t and its arrival timestamp as T(t). Other notations will be introduced in the rest of the paper as needed. Table 1 summarizes the notations used throughout the paper.
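Using these notations (W_i, w_i, T(t)), the per-tuple mechanics of a two-way windowed join, namely insertion into W_i, expiration of tuples with T - T(t_o) > w_i, and probing of the opposite window, can be sketched as follows. This is our own minimal illustration, with time-based windows kept as deques and a generic predicate standing in for the join condition:

```python
from collections import deque

def process_tuple(t, now, W_self, W_opp, w_self, predicate):
    """Process one tuple t fetched from S_i at time `now`:
    insert into W_i, expire stale tuples from W_i, probe the opposite
    window. Windows are deques of (timestamp, tuple), newest at the
    left; w_self is the window size of W_i in seconds."""
    W_self.appendleft((now, t))                     # insert into W_i
    while W_self and now - W_self[-1][0] > w_self:  # T - T(t_o) > w_i
        W_self.pop()                                # expire from the end
    # probe the opposite window and emit matching pairs
    return [(t, t_a) for (_, t_a) in W_opp if predicate(t, t_a)]
```

A tuple arriving on S_1 is processed with (W_1, W_2) and one arriving on S_2 with (W_2, W_1); the deque gives the constant-time insertion and expiration that the paper obtains with a doubly linked list.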
A windowed stream join is performed by fetching tuples from the input streams and processing them against tuples in the opposite window. Figure 2 illustrates the process of windowed stream joins. For a newly fetched tuple t from stream S_i, the join is performed in the following three steps. First, tuple t is inserted into the beginning of window W_i. Second, tuples at the end of window W_i are checked in order and removed if they have expired. A tuple t_o expires from window W_i iff T - T(t_o) > w_i, where T represents the current time. The expiration check stops when an unexpired tuple is encountered. The tuples in window W_i are sorted in the order of their arrival timestamps by default, and the window is managed as a doubly linked list for efficiently performing insertion and expiration operations. In the third and last step, tuple t is processed against tuples in the window W_ī, and matching tuples are generated as output. Figure 3 summarizes the join processing steps.

Figure 3: Join Processing

  JoinProcessing()
    for i = 1 to 2
      if no tuple in S_i, continue
      t ← fetch tuple from S_i
      Insert t in front of W_i
      repeat
        t_o ← last tuple in W_i
        if T - T(t_o) > w_i, Remove t_o from W_i
      until T - T(t_o) ≤ w_i
      Sort items in t
      foreach t_a ∈ W_ī, Merge-compare(t, t_a)

To handle joins defined on set or weighted set valued attributes, the following additional details are attached to the processing steps, assuming a tuple is a set of items (possibly with assigned weights). First, the items in tuple t are sorted as t is fetched from S_i. The tuples in W_ī are expected to be sorted, since they have gone through the same step when they were fetched from S_ī. Then, for each tuple t_a in W_ī, t and t_a are compared by performing a simple merge of their sorted items. Equality, subset, superset, overlap and inner product joins can all be processed in a similar manner. For indexed joins, an inverted index is used to efficiently perform the join without going through all the tuples in W_ī. We discuss the details of indexed joins in Section 4.2.

4. PARTIAL PROCESSING - HOW MUCH CAN WE PROCESS?
The first step in our approach to shedding CPU load without dropping tuples is to determine how much we can process given the windows of stream tuples that participate in the join. We call this step partial processing based load shedding. For instance, consider a scenario in which the limitation in processing power requires dropping half of the tuples, i.e., decreasing the input rate of the streams by half. A partial processing approach is to allow every tuple to enter the windows, but to decrease the cost of join processing by comparing a newly-fetched tuple with only a fraction of the window defined on the opposite stream. Partial processing, by itself, does not significantly increase the number of output tuples produced by the join operator when compared to tuple dropping or window reduction approaches. However, as we will describe later in the paper, it forms a basis for performing selective processing, which exploits the time-based correlation between the streams and makes it possible to accommodate utility-based load shedding, in order to maximize the output rate or the utility of the output tuples produced. Two important factors are considered in determining the amount of partial processing: (1) the current incoming stream rates, and (2) the performance of the stream join operation under current system load. Partial processing employs rate adaptation to adjust the amount of processing performed dynamically. The performance of the stream join under the current system load is a critical factor, and it is influenced by the concrete join algorithm and optimizations used for performing join operations. In the rest of this section, we first describe rate adaptation, then discuss the details of utilizing inverted indexes for efficient join processing. Finally, we describe how to employ rate adaptation in conjunction with indexed join processing.

4.1 Rate Adaptation

The partial processing-based load shedding is performed by adapting to the rates of the input streams. This is done by observing the tuple consumption rate of the join operation and comparing it to the input rates of the streams to determine the fraction of the windows to be processed. This adaptation is performed periodically, every T_r seconds. T_r is called the adaptation period. We denote the fraction parameter as r, which defines the ratio of the windows to be processed. In other words, the setting of r answers the question of how much load we should shed. Algorithm 1 gives a sketch of the rate adaptation process.

Algorithm 1: Rate Adaptation

  RateAdapt()
  (1) Initially: r ← 1
  (2) every T_r seconds:
  (3)   α_1 ← # of tuples fetched from S_1 since last adapt.
  (4)   α_2 ← # of tuples fetched from S_2 since last adapt.
  (5)   λ_1 ← average rate of S_1 since last adaptation
  (6)   λ_2 ← average rate of S_2 since last adaptation
  (7)   β ← (α_1 + α_2) / ((λ_1 + λ_2) · T_r)
  (8)   if β < 1 then r ← β · r
  (9)   else r ← min(1, δ_r · r)

Initially, the fraction parameter r is set to 1. Every T_r seconds, the average rates of the input streams S_1 and S_2 are determined as λ_1 and λ_2. Similarly, the numbers of tuples fetched from streams S_1 and S_2 since the last adaptation step are determined as α_1 and α_2. Tuples from the input streams may not be fetched at the rate they arrive, due to an inappropriate initial value of the parameter r or due to a change in the stream rates since the last adaptation step. As a result, β = (α_1 + α_2) / ((λ_1 + λ_2) · T_r) determines the percentage of the input tuples fetched by the join algorithm.
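One adaptation step of Algorithm 1 amounts to the following small function. The naming and the boost factor value are our own illustrative choices:

```python
def adapt_fraction(r, fetched, arrived, delta_r=1.2):
    """One step of Algorithm 1: r is the current fraction parameter,
    fetched = alpha_1 + alpha_2 (tuples the join consumed this period),
    arrived = (lambda_1 + lambda_2) * T_r (tuples that arrived).
    delta_r is the fraction boost factor; 1.2 is an arbitrary example."""
    beta = fetched / arrived
    if beta < 1:
        return beta * r               # falling behind: shrink the fraction
    return min(1.0, delta_r * r)      # keeping up: optimistically grow it
```

For example, consuming only half of the arrivals halves r, while keeping up lets r grow by the boost factor, capped at 1.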
Based on the value of beta, the fraction parameter r is readjusted at the end of each adaptation step. If beta is smaller than 1, r is multiplied by beta, under the assumption that comparing a tuple with the tuples in the opposite window is the dominating cost in join processing. Otherwise, the join is able to process all the incoming tuples with the current value of r. In this case, r is set to min(1, delta_r * r), where delta_r is called the fraction boost factor. This is aimed at increasing the fraction of the windows processed, optimistically assuming that additional processing power is available. If it is not, the parameter r will be decreased during the next adaptation step. Higher values of the fraction boost factor result in more aggressive increases of the parameter r. The adaptation period T_r should be small enough to adapt to the bursty nature of the streams, but large enough not to cause overhead that undermines the join processing.

4.2 Indexed Join and Partial Processing

Stream indexing [, 5] can be used to cope with the high processing cost of the join operation, reducing the amount of load shedding performed. However, two important points must be resolved before indexing can be employed together with partial processing, and thus with the other algorithms we introduce in the following sections. The first issue is that, in a streaming scenario, the index has to be maintained dynamically (through insertions and removals) as tuples enter and leave the window. This means that the assumption made in Section 4.1, that finding matching tuples within a window (the index search cost) is the dominant cost in join processing, no longer holds. Second, the index does not naturally allow processing only a certain portion of the window. We resolve these issues in the context of inverted indexes, which are predominantly used for joins based on set or weighted set valued attributes. Here, we first give a brief overview of inverted indexes and then describe the modifications required to use them in conjunction with our load shedding algorithms.

4.2.1
Inverted Indexes

An inverted index consists of a collection of sorted identifier lists. In order to insert a set into the index, for each item in the set, the unique identifier of the set is inserted into the identifier list associated with that particular item. Similar to insertion, removal of a set from the index requires finding the identifier lists associated with the items in the set; the removal is performed by removing the identifier of the set from these identifier lists.

In our context, the inverted index is maintained as an in-memory data structure. The collection of identifier lists is managed in a hashtable, which is used to efficiently find the identifier list associated with an item. The identifier lists are internally organized as balanced binary trees, sorted on unique set identifiers, to facilitate both fast insertion and removal. The set identifiers are in fact pointers to the tuples they represent.

Query processing on an inverted index follows a multi-way merging process, which is usually accelerated through the use of a heap. The same type of processing is used for all the different types of queries we have mentioned so far. Specifically, given a query set, the identifier lists corresponding to items in the query set are retrieved using the hashtable. These sorted identifier lists are then merged. This is done by inserting the frontiers of the lists into a min-heap and iteratively removing the topmost set identifier from the heap and replacing it with the next set identifier (the new frontier) in its list. During this process, the identifier of an indexed set sharing k items with the query set will be picked from the heap k consecutive times, making it possible to process relatively complex overlap and inner product queries efficiently [8].

4.2.2 Time Ordered Identifier Lists

Although the usage of inverted indexes speeds up the processing of joins based on set-valued attributes, it also introduces significant insertion and deletion costs. This problem can be alleviated by exploiting the timestamps of the tuples being indexed and the fact that these tuples are received in timestamp order from the input streams.
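The heap-based merging described above can be sketched as follows. This is our own illustrative Python, not the paper's implementation; as a preview of the timestamp-based maintenance discussed next, the identifier lists below are kept in timestamp order with expired entries dropped during inserts and searches, and we assume set identifiers are assigned in arrival order, so timestamp order coincides with identifier order.

```python
# Illustrative sketch (ours): hashtable of identifier lists, heap-based
# multi-way merge for overlap queries, timestamp-ordered lists with expiry
# piggybacked onto inserts and searches. Assumes identifiers grow with time.
import heapq
from collections import defaultdict, deque

class InvertedIndex:
    def __init__(self, window):
        self.window = window             # window size w, in time units
        self.lists = defaultdict(deque)  # item -> deque of (timestamp, set id)

    def _expire(self, lst, now):         # piggybacked removal of expired sets
        while lst and lst[0][0] <= now - self.window:
            lst.popleft()

    def insert(self, ts, set_id, items):
        for item in items:
            lst = self.lists[item]
            self._expire(lst, ts)
            lst.append((ts, set_id))     # arrivals are timestamp ordered: O(1)

    def overlap(self, now, query_items, threshold):
        """Identifiers of live sets sharing >= threshold items with the query."""
        heads = []
        for item in query_items:
            lst = self.lists.get(item)
            if lst:
                self._expire(lst, now)
            if lst:
                heads.append((lst[0][1], item, 0))  # (frontier id, list, position)
        heapq.heapify(heads)
        out, last, count = [], None, 0
        while heads:
            sid, item, pos = heapq.heappop(heads)
            count = count + 1 if sid == last else 1  # sid popped k times in a row
            last = sid
            if count == threshold:
                out.append(sid)
            lst = self.lists[item]
            if pos + 1 < len(lst):                   # advance this list's frontier
                heapq.heappush(heads, (lst[pos + 1][1], item, pos + 1))
        return out

idx = InvertedIndex(window=10)
idx.insert(1, 1, {"a", "b"})
idx.insert(2, 2, {"b", "c", "d"})
idx.insert(5, 3, {"a", "c"})
print(idx.overlap(6, {"a", "c", "d"}, 2))  # sets sharing >= 2 query items: [2, 3]
```

The k-consecutive-pops pattern is what lets one merge pass answer overlap (and, with stored weights, inner product) queries without materializing per-set counters.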
In particular, instead of maintaining identifier lists as balanced trees sorted on identifiers, we can maintain them as linked lists sorted on the timestamps of the tuples (sets). (For weighted sets, the weights should also be stored within the identifier lists, in order to answer inner product queries.) This does not affect the merging phase of the indexed search, since a timestamp uniquely identifies a tuple in a stream unless different tuples

with equal timestamps are allowed. To handle the latter case, the identifier lists can be sorted on (timestamp, identifier) pairs. This requires very little reordering, as receiving different tuples with equal timestamps is expected to happen very infrequently, if at all. Using timestamp ordered identifier lists has the following three advantages:

1. It allows inserting a set identifier into an identifier list in constant time, as opposed to logarithmic time with identifier sorted lists.

2. It facilitates piggybacking removal operations onto insertion and search operations, by checking for expired tuples at the end of the identifier lists at insertion and search time. Thus, the removal operation is performed in amortized constant time, as opposed to logarithmic time with identifier sorted lists.

3. Timestamp sorted identifier lists make it possible to end the merging process used for search operations at a specified time within the window, thus enabling time based partial processing.

This concludes our discussion of indexed join details.

5. SELECTIVE PROCESSING - WHAT SHOULD WE PROCESS?

Selective processing extends partial processing to intelligently select the tuples to be used during join processing under heavy system load. Given the constraint on the amount of processing defined in the partial processing phase, selective processing aims at maximizing the output rate or the output utility of the stream join. Three important factors are used to determine what we should select for join processing: (i) the characteristics of the stream window segments, (ii) the profitability of the join directions, and (iii) the utility of different stream tuples. We first describe time correlation adaptation and join direction adaptation, which form the core of our selective processing approach. Then we discuss utility-based load shedding.
The main ideas behind time correlation adaptation and join direction adaptation are to prioritize segments of the windows, in order to process the parts that will yield higher output (time correlation adaptation), and to start load shedding from one of the windows if one direction of the join is producing more output than the other (join direction adaptation).

5.1 Time Correlation Adaptation

For the purpose of time correlation adaptation, we divide the windows of the join into basic windows. Concretely, window W_i is divided into n_i basic windows of size b seconds each, where n_i = 1 + ceil(w_i / b). B_{i,j} denotes the j-th basic window in W_i, j in [1..n_i]. Tuples do not move from one basic window to another. As a result, tuples leave the join operator one basic window at a time, and the basic windows slide discretely, b seconds at a time. Newly fetched tuples are inserted into the first basic window. When the first basic window is full, meaning that the newly fetched tuple has a timestamp at least b seconds larger than that of the oldest tuple in the first basic window, the last basic window is emptied and all basic windows are shifted, the last basic window becoming the first. Newly fetched tuples can then flow into the new first basic window, which is empty. The basic windows are managed in a circular buffer, so that the shift of windows is a constant time operation.

Algorithm 2: Time Correlation Adaptation
TimeCorrelationAdapt()
(1) every T_c seconds
(2)   for i = 1 to 2
(3)     sort in desc. order {o^_{i,j} | j in [1..n_i]} into array O
(4)     for j = 1 to n_i
(5)       o_{i,j} <- o^_{i,j} / (gamma * r * b * lambda_1 * lambda_2 * T_c)
(6)       s_j^i <- k, where O[j] = o^_{i,k}
(7)     for j = 1 to n_i
(8)       o^_{i,j} <- 0

Algorithm 3: Tuple Processing and Time Correlation
ProcessTuple()
(1) when processing tuple t against window W_i
(2) if rand < r * gamma
(3)   process t against all tuples in B_{i,j}, for all j in [1..n_i]
(4)   foreach match in B_{i,j}, j in [1..n_i]
(5)     o^_{i,j} <- o^_{i,j} + 1
(6) else
(7)   a <- r * |W_i|
(8)   for j = 1 to n_i
(9)     a <- a - |B_{i,s_j^i}|
(10)    if a > 0
(11)      process t against all tuples in B_{i,s_j^i}
(12)    else
(13)      r_e <- 1 + a / |B_{i,s_j^i}|
(14)      process t against an r_e fraction of the tuples in B_{i,s_j^i}
(15)      break
The basic windows themselves can be organized either as linked lists (if no indexing is used) or as inverted indexes (if indexing is used). Time correlation adaptation is performed periodically, at every T_c seconds; T_c is called the time correlation adaptation period. During the time between two consecutive adaptation steps, the join operation performs two types of processing: for a newly fetched tuple, it performs either selective processing or full processing. Selective processing is carried out by looking for matches with tuples in the high priority basic windows of the opposite window, where the number of basic windows used depends on the amount of load shedding to be performed. Full processing is done by comparing the newly fetched tuple against all tuples in the opposite window. The aim of full processing is to collect statistics about the usefulness of the basic windows for the join operation. The details of the adaptation step and full processing are given in Algorithm 2 and in lines 1-5 of Algorithm 3.

Full processing is only done for a sampled subset of the stream, based on a parameter called the sampling probability, denoted as gamma. A newly fetched tuple goes through selective processing with probability 1 - r * gamma; in other words, it goes through full processing with probability r * gamma. The fraction parameter r is used to scale the sampling probability, so that full processing does not consume all processing resources when the load on the system is high. The goal of full processing is to calculate, for each basic window B_{i,j}, the expected number of output tuples produced from comparing a newly fetched tuple t with a tuple in B_{i,j}, denoted as o_{i,j}. These values are used during the adaptation step to prioritize basic windows. In particular, the o_{i,j} values are used to calculate the s_j^i values. We have s_j^i = k, where o_{i,k} is the j-th item in the descending sorted list {o_{i,l} | l in [1..n_i]}. This means that B_{i,s_1^i} is the highest priority basic window in W_i, B_{i,s_2^i} is the next, and so on.

Lines 7-14 in Algorithm 3 give a sketch of selective processing. During selective processing, the s_j^i values are used to
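As an illustration, the selective-processing branch of Algorithm 3 (lines 7-14) can be sketched in Python as follows. This is our code; the match predicate and the data layout are assumptions made for the example.

```python
# Our sketch of the selective-processing branch of Algorithm 3: a new tuple is
# matched against basic windows in decreasing priority order until the
# r-fraction budget of the opposite window is spent.
def selective_process(t, basic_windows, priority, r, match):
    """basic_windows: the lists B_{i,1}..B_{i,n_i} of the opposite window;
    priority: basic-window indices, highest priority (s_1^i) first;
    r: fraction of the opposite window that may be used."""
    budget = r * sum(len(b) for b in basic_windows)   # a <- r * |W_i|
    out = []
    for j in priority:
        b = basic_windows[j]
        budget -= len(b)
        if budget > 0:                 # the whole basic window fits the budget
            out.extend(u for u in b if match(t, u))
        else:                          # process only a fraction, then stop
            r_e = 1 + budget / len(b)
            out.extend(u for u in b[:int(r_e * len(b))] if match(t, u))
            break
    return out

# budget (0.75 * 8 = 6 tuples) covers window 1 fully and half of window 0
wins = [[1, 2, 3, 4], [10, 20, 30, 40]]
print(selective_process(0, wins, priority=[1, 0], r=0.75,
                        match=lambda t, u: u % 2 == 0))  # [10, 20, 30, 40, 2]
```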

guide the load shedding. Concretely, in order to process a newly fetched tuple t against window W_i, first the number of tuples from window W_i that are going to be considered for processing is determined, by calculating r * |W_i|, where |W_i| denotes the number of tuples in the window. The fraction parameter r is determined by rate adaptation, as described in Section 4.1. Then, tuple t is processed against the basic windows, starting from the highest priority one, i.e. B_{i,s_1^i}, and going in decreasing order of priority. A basic window B_{i,s_j^i} is searched for matches completely if adding |B_{i,s_j^i}| to the number of tuples used so far from window W_i to process tuple t does not exceed r * |W_i|. Otherwise, an appropriate fraction of the basic window is used, and the processing is completed for tuple t.

5.1.1 Impact of Basic Window Size

The setting of the basic window size parameter b involves trade-offs. Smaller values are better at capturing the peak of the match probability distribution, but they also introduce processing overhead. For instance, recalling Section 4.2.2, in an indexed join operation the identifier lists have to be looked up for each basic window. Although the lists themselves are shorter, and the total merging cost does not increase with smaller basic windows, the cost of looking up the identifier lists in the hashtables increases with an increasing number of basic windows, n_i.

Here we analyze how well the match probability distribution, which depends on the time correlation between the streams, is utilized for a given value of the basic window size parameter b, under a given load condition. We use r to denote the fraction of tuples in the join windows that can be used for processing tuples; thus, r models the current load of the system. We assume that r can go over 1, in which case abundant processing power is available. We use f_i(t) to denote the match probability distribution function for window W_i. Note that, due to the discrete movement of basic windows, a basic window covers a time varying area under the match probability distribution function.
This area, denoted as p_{i,j} for basic window B_{i,j}, can be calculated by observing that B_{i,j} covers the area over the interval [max(0, x*b + (j-2)*b), min(w_i, x*b + (j-1)*b)] on the time axis ([0, w_i]) when an x in [0, 1] fraction of the first basic window is full. Then, we have:

p_{i,j} = Integral_{x=0}^{1} Integral_{t=max(0, x*b+(j-2)*b)}^{min(w_i, x*b+(j-1)*b)} f_i(t) dt dx

For the following discussion, we overload the notation s_j^i, such that s_j^i = k, where p_{i,k} is the j-th item in the descending sorted list {p_{i,l} | l in [1..n_i]}. The number of basic windows all of whose tuples are considered for processing is denoted as c_e. The fraction of tuples considered for processing in the last basic window used is denoted as c_p; c_p is zero if the last used basic window is completely processed. We have:

c_e = min(n_i, floor(r * w_i / b))
c_p = (r * w_i - c_e * b) / b, if c_e < n_i; 0 otherwise

Then the area under f_i that represents the portion of window W_i processed, denoted as p_u, can be calculated as:

p_u = c_p * p_{i,s_{c_e+1}^i} + Sum_{j=1}^{c_e} p_{i,s_j^i}

Let us define g(f, a) as the maximum area under the function f with a total extent of a on the time axis. Then we can calculate the optimality of p_u, denoted as phi, as follows:

phi = p_u / g(f_i, w_i * min(1, r))

When phi = 1, the join processing is optimal with respect to output rate (ignoring the overhead of small basic windows). Otherwise, the expected output rate is phi times the optimal value, under the current load condition (r) and basic window size setting (b). Figure 4 plots phi (on the z-axis) as a function of b/w (on the x-axis) and r (on the y-axis) for two different match probability distributions, the bottom one being more skewed. We make the following three observations from the figure. First, decreasing availability of computational resources negatively influences the optimality of the join for a fixed basic window size. Second, increasing skewness in the match probability distribution decreases the optimality of the join for a fixed basic window size.
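The quantities p_{i,j}, c_e, c_p, and phi can be approximated numerically. The sketch below is ours; for simplicity it assumes a non-increasing density f, so that g(f, a) reduces to the integral of f over [0, a], and it uses a midpoint rule in place of exact integration.

```python
import math

# Numerical sketch (ours) of the optimality measure phi from Section 5.1.1,
# assuming a NON-INCREASING density f on [0, w] so g(f, a) = integral over [0, a].
def phi(f, w, b, r):
    n = 1 + math.ceil(w / b)                 # number of basic windows, n_i

    def integral(lo, hi, m=200):             # midpoint rule over [lo, hi] within [0, w]
        lo, hi = max(0.0, lo), min(w, hi)
        if hi <= lo:
            return 0.0
        h = (hi - lo) / m
        return sum(f(lo + (k + 0.5) * h) for k in range(m)) * h

    # p[j]: area covered by the j-th basic window, averaged over the fill
    # fraction x of the first basic window (the double integral for p_{i,j})
    xs = 40
    p = [sum(integral((j - 2 + (x + 0.5) / xs) * b,
                      (j - 1 + (x + 0.5) / xs) * b) for x in range(xs)) / xs
         for j in range(1, n + 1)]
    p.sort(reverse=True)                     # highest priority basic windows first
    c_e = min(n, int((r * w) // b))
    c_p = (r * w - c_e * b) / b if c_e < n else 0.0
    p_u = sum(p[:c_e]) + (c_p * p[c_e] if c_e < n else 0.0)
    return p_u / integral(0.0, w * min(1.0, r))

# decaying density: most matches happen near the front of the window
f = lambda t: math.exp(-t)
print(phi(f, w=10, b=2, r=0.5) > phi(f, w=10, b=5, r=0.5))  # True: smaller b wins
```

Under this decaying density and a 50% budget, the smaller basic window tracks the peak of f more closely, in line with the observations above.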
Third, smaller basic window sizes provide better join optimality when the available computational resources are low or the match probability distribution is skewed.

[Figure 4: Optimality of the join for different loads and basic window sizes, under two different match probability distribution functions]

As a result, small basic window sizes are favorable for skewed match probability distributions and heavy load conditions. We report an experimental study on the effect of the overhead stemming from managing a large number of basic windows on the output rate of the join operation in Section 6.

5.2 Join Direction Adaptation

Due to the time based correlation between the streams, a newly fetched tuple from stream S2 may match a tuple from stream S1 that has already made its way into the middle portions of window W1. This means that, most of the time, a newly fetched tuple from stream S1 has to stay within window W1 for some time before it can be matched with a tuple from stream S2. This implies that one direction of the join processing may be of lesser value, in terms of the number of output tuples produced, than the other direction. For instance, in the running example, processing a newly fetched tuple t from stream S1 against window W2 will produce a smaller number of output tuples when compared to the other way

around, as the tuples that match t have not yet arrived at window W2. In this case, the symmetry of the join operation can be broken during load shedding, in order to achieve a higher output rate. This can be done by decreasing the fraction of tuples processed from window W2 first, and from W1 later (if needed). We call this join direction adaptation.

Join direction adaptation is performed immediately after rate adaptation. Specifically, two different fraction parameters are defined, denoted as r_i for window W_i, i in {1, 2}. During join processing, an r_i fraction of the tuples in window W_i is considered, making it possible to adjust the join direction by changing r_1 and r_2. This requires replacing r with r_i in line 7 of Algorithm 3 and line 5 of Algorithm 2. The constraint in setting the r_i values is that the number of tuple comparisons performed per time unit should stay the same as in the case of the single r value computed by Algorithm 1. The number of tuple comparisons performed per time unit is given by Sum_{i=1}^{2} (r_i * lambda_{3-i} * (lambda_i * w_i)), since the number of tuples in window W_i is lambda_i * w_i. Thus, we should have lambda_1 * lambda_2 * r * (w_1 + w_2) = Sum_{i=1}^{2} (r_i * lambda_{3-i} * (lambda_i * w_i)), i.e.:

r * (w_1 + w_2) = r_1 * w_1 + r_2 * w_2

The more valuable direction of the join can be determined by comparing the expected number of output tuples produced from comparing a newly fetched tuple with a tuple in W_i, denoted as o_i, for i = 1 and 2. This can be computed as o_i = (1/n_i) * Sum_{j=1}^{n_i} o_{i,j}. Assuming o_1 > o_2, without loss of generality, we can set r_1 = min(1, r * (w_1 + w_2)/w_1). This maximizes r_1 while respecting the above constraint. The generic procedure for setting r_1 and r_2 is given in Algorithm 4.

Join direction adaptation, as described in this section, assumes that any portion of one of the windows is more valuable than all portions of the other window. This may not be the case for applications where both match probability distribution functions, f_1(t) and f_2(t), are non-flat. For instance, in a traffic application scenario, a two way traffic flow between two points implies that both directions of the join are valuable.
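Algorithm 4's split of the comparison budget between the two directions can be sketched as follows (our code; the statistics o_1, o_2 would come from full processing):

```python
# Sketch of Algorithm 4 (our code): split the overall fraction r into per-window
# fractions r1, r2, favoring the more productive direction while preserving
# r1*w1 + r2*w2 = r*(w1 + w2).
def join_direction_adapt(r, w1, w2, o1, o2):
    """o1, o2: expected output per comparison against W1, W2."""
    if o1 >= o2:
        r1 = min(1.0, r * (w1 + w2) / w1)
    else:
        r1 = max(0.0, (r * (w1 + w2) - w2) / w1)
    r2 = (r * (w1 + w2) - r1 * w1) / w2
    return r1, r2

# W1 is far more productive: the whole budget goes to direction 1
print(join_direction_adapt(r=0.5, w1=10, w2=10, o1=0.9, o2=0.1))  # (1.0, 0.0)
```

Note that both branches land on the same invariant: whatever budget is not spent on the favored window spills over to the other one.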
We introduce a more advanced join direction adaptation algorithm that can handle such cases in the next subsection, as part of utility-based load shedding.

5.3 Utility-based Load Shedding

So far, we have targeted our load shedding algorithms toward maximizing the number of tuples produced by the join operation, a commonly used metric in the literature [9, ]. Utility-based load shedding, also called semantic load shedding [3], is another metric employed for guiding load shedding. It has the benefit of being able to distinguish high utility output from output merely containing a large number of tuples. In the context of join operations, utility-based load shedding promotes output that results from matching tuples of higher importance/utility. In this section, we describe how utility-based load shedding is integrated into the mechanism described so far.

Algorithm 4: Join Direction Adaptation
JoinDirectionAdapt()
(1) Initially: r_1 <- 1, r_2 <- 1
(2) upon completion of RateAdapt() call
(3)   o_1 <- (1/n_1) * Sum_{j=1}^{n_1} o_{1,j}
(4)   o_2 <- (1/n_2) * Sum_{j=1}^{n_2} o_{2,j}
(5)   if o_1 >= o_2 then r_1 <- min(1, r * (w_1 + w_2)/w_1)
(6)   else r_1 <- max(0, (r * (w_1 + w_2) - w_2)/w_1)
(7)   r_2 <- (r * (w_1 + w_2) - r_1 * w_1)/w_2

We assume that each tuple has an associated importance level, defined by the type of the tuple and specified by the utility value attached to that type. We denote the tuple type domain as Z, the type of a tuple t as Z(t), and the utility of a tuple t, where Z(t) = z in Z, as V(z). Type domains and their associated utility values can be set based on application needs. In the rest of the paper, an output tuple of the join operation obtained by matching tuples t_a and t_b is assumed to contribute a utility value of max(V(Z(t_a)), V(Z(t_b))) to the output. Our approach can also accommodate other functions, like the average, 0.5 * (V(Z(t_a)) + V(Z(t_b))). We denote the frequency of appearance of tuples of type z in stream S_i as omega_{i,z}, where Sum_{z in Z} omega_{i,z} = 1. The main idea behind utility-based load shedding is to use a different fraction parameter for each different type of tuple fetched from each stream, denoted as r_{i,z}, where z in Z and i in {1, 2}.
The motivation behind this is to do less load shedding for tuples that provide higher output utility. The extra work done for such tuples is compensated by doing more load shedding for tuples that provide lower output utility. The expected output utility obtained from comparing a tuple t of type z with a tuple in window W_i is denoted as u_{i,z}, and is used to determine the r_{i,z} values.

In order to formalize this problem, we extend some of the notation from Section 5.1.1. The number of basic windows from W_i all of whose tuples are considered for processing against a tuple of type z is denoted as c_e(i, z). The fraction of tuples considered for processing in the last basic window used from W_i is denoted as c_p(i, z); c_p(i, z) is zero if the last used basic window is completely processed. Thus, we have:

c_e(i, z) = floor(n_i * r_{i,z})
c_p(i, z) = n_i * r_{i,z} - c_e(i, z)

Then the area under f_i that represents the portion of window W_i processed for a tuple of type z, denoted as p_u(i, z), can be calculated as follows:

p_u(i, z) = c_p(i, z) * p_{i,s_{c_e(i,z)+1}^i} + Sum_{j=1}^{c_e(i,z)} p_{i,s_j^i}

With these definitions, the maximization of the output utility can be defined formally as:

max Sum_{i=1}^{2} ( lambda_{3-i} * (lambda_i * w_i) * Sum_{z in Z} ( omega_{3-i,z} * u_{i,z} * p_u(i, z) ) )

subject to the processing constraint:

r * (w_1 + w_2) = Sum_{i=1}^{2} ( w_i * Sum_{z in Z} ( omega_{3-i,z} * r_{i,z} ) )

The r value used here is computed by Algorithm 1, as part of rate adaptation. Although the formulation looks complex, this is in fact a fractional knapsack problem and has a greedy optimal solution. The problem can be reformulated as follows. Consider I_{i,j,z} as an item that represents processing tuples of type z against basic window B_{i,j}. Item I_{i,j,z} has a volume of lambda_1 * lambda_2 * omega_{3-i,z} * b units (the number of comparisons made per time unit to process incoming tuples of type z against the tuples in B_{i,j}, assuming that some buffering is performed outside the join operator) and a value of

Algorithm 5: Join Direction Adaptation, Utility-based Shedding
VJoinDirectionAdapt()
(1) upon completion of RateAdapt() call
(2)   heap: H
(3)   for i = 1 to 2
(4)     foreach z in Z
(5)       r_{i,z} <- 0
(6)       v_{i,s_1^i,z} <- u_{i,z} * o_{i,s_1^i} / Sum_{k=1}^{n_i} o_{i,k}
(7)   initialize H with {v_{i,s_1^i,z} | i in [1..2], z in Z}
(8)   a <- lambda_1 * lambda_2 * r * (w_1 + w_2)
(9)   while H is not empty
(10)    let i, j, z be s.t. v_{i,s_j^i,z} = topmost item in H
(11)    pop the first item from H
(12)    a <- a - omega_{3-i,z} * lambda_1 * lambda_2 * b
(13)    if a > 0
(14)      r_{i,z} <- r_{i,z} + 1/n_i
(15)    else
(16)      r_e <- 1 + a / (lambda_1 * lambda_2 * omega_{3-i,z} * b)
(17)      r_{i,z} <- r_{i,z} + r_e/n_i
(18)      return
(19)    if j < n_i
(20)      v_{i,s_{j+1}^i,z} <- u_{i,z} * o_{i,s_{j+1}^i} / Sum_{k=1}^{n_i} o_{i,k}
(21)      insert v_{i,s_{j+1}^i,z} into H

lambda_1 * lambda_2 * omega_{3-i,z} * b * u_{i,z} * p_{i,s_j^i} units (the utility gained per time unit from comparing incoming tuples of type z with the tuples in B_{i,j}). The aim is to pick items, where fractional items are acceptable, such that the total value is maximized and the total volume of the picked items is at most lambda_1 * lambda_2 * r * (w_1 + w_2). We use r_{i,j,z} in [0, 1] to denote how much of item I_{i,j,z} is picked. Note that the number of unknown variables here (the r_{i,j,z} values) is (n_1 + n_2) * |Z|, and the solution of the original problem can be calculated from these variables as r_{i,z} = (1/n_i) * Sum_{j=1}^{n_i} r_{i,j,z}.

The values of the fraction variables are determined during join direction adaptation. A simple way to do this is to sort the items based on their value over volume ratios, v_{i,j,z} = u_{i,z} * p_{i,s_j^i} (note that o_{i,s_j^i} / Sum_{k=1}^{n_i} o_{i,k} can be used as an estimate of p_{i,s_j^i}), and to pick as much as possible of the item that is most valuable per unit volume. However, since the number of items is large, the sort step is costly, especially for large numbers of basic windows and large type domains. A more efficient solution, with worst case complexity O(|Z| + (n_1 + n_2) log |Z|), is described in Algorithm 5, which replaces Algorithm 4. Algorithm 5 makes use of the s_j^i values, which define an order between the value over volume ratios of items for a fixed type z and window W_i. The algorithm keeps the items representing different streams and types with the highest value over volume ratios (2|Z| of them) in a heap.
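A sketch of Algorithm 5's greedy heap procedure follows. This is our code, not the paper's; for brevity, the argument `omega[i][z]` denotes the type frequencies of the stream probed against window W_i, i.e. it plays the role of omega_{3-i,z}, and all parameter names are ours.

```python
import heapq

# Sketch (ours) of Algorithm 5: items I_{i,j,z} are consumed in decreasing
# value/volume order; fractions r_{i,z} accumulate 1/n_i per fully picked
# basic window until the comparison budget runs out.
def v_join_direction_adapt(r, w, lam, b, n, omega, u, o):
    """w, lam, n: dicts keyed by window i in {1, 2};
    u[i][z]: expected utility per match against W_i for type z;
    o[i]: per-basic-window output estimates, highest priority first."""
    r_frac = {(i, z): 0.0 for i in (1, 2) for z in u[i]}
    heap = []
    for i in (1, 2):
        tot = sum(o[i])
        for z in u[i]:       # best (first) basic window of every (i, z) pair
            heapq.heappush(heap, (-u[i][z] * o[i][0] / tot, i, 0, z))
    a = lam[1] * lam[2] * r * (w[1] + w[2])     # comparison budget per time unit
    while heap:
        neg_v, i, j, z = heapq.heappop(heap)    # item with best value/volume
        a -= omega[i][z] * lam[1] * lam[2] * b  # volume of item I_{i,j,z}
        if a > 0:
            r_frac[(i, z)] += 1.0 / n[i]        # basic window fully picked
        else:                                   # budget exhausted: pick a fraction
            r_e = 1 + a / (lam[1] * lam[2] * omega[i][z] * b)
            r_frac[(i, z)] += r_e / n[i]
            break
        if j + 1 < n[i]:                        # push next basic window for (i, z)
            tot = sum(o[i])
            heapq.heappush(heap, (-u[i][z] * o[i][j + 1] / tot, i, j + 1, z))
    return r_frac

# Two types: 'hi' is 10x more valuable, so it receives the whole budget.
r_frac = v_join_direction_adapt(
    r=0.5, w={1: 10, 2: 10}, lam={1: 1.0, 2: 1.0}, b=5, n={1: 2, 2: 2},
    omega={1: {'hi': 0.5, 'lo': 0.5}, 2: {'hi': 0.5, 'lo': 0.5}},
    u={1: {'hi': 10, 'lo': 1}, 2: {'hi': 10, 'lo': 1}},
    o={1: [3, 1], 2: [2, 2]})
print(r_frac[(1, 'hi')], r_frac[(2, 'hi')], r_frac[(1, 'lo')])  # 1.0 1.0 0.0
```

The greedy order is exactly the fractional-knapsack order: because the s_j^i values already sort the basic windows of each (window, type) pair, only the current frontier of each pair needs to live in the heap.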
It iteratively picks an item from the heap and replaces it with the item having the next highest value over volume ratio with the same stream and type subscript indices. This process continues until the capacity constraint is reached. During this process, the r_{i,z} values are calculated progressively: if the item picked represents window W_i and type z, then r_{i,z} is incremented by 1/n_i, unless the item is picked fractionally, in which case the increment on r_{i,z} is adjusted accordingly.

6. EXPERIMENTS

We report four sets of experimental results to demonstrate the effectiveness of the algorithms introduced in this paper. The first set illustrates the need for shedding CPU load for both indexed (using inverted indexes) and non-indexed joins. The second set demonstrates the performance provided by the partial processing based load shedding step: keeping tuples within windows and shedding excessive load by partially processing the join through rate adaptation. The third set shows the performance gain, in terms of output rate, of selective processing, which incorporates time correlation adaptation and join direction adaptation; the effect of the basic window size on performance is also investigated experimentally. The fourth set of experiments presents results on the utility-based load shedding mechanisms introduced and their ability to maximize output utility under different workloads.

6.1 Experimental Setup

The join operation is implemented as a Java package, named ssjoin.*, and is customizable with respect to supported features, such as rate adaptation, time correlation adaptation, join direction adaptation, and utility-based load shedding, as well as the various parameters associated with these features. The streams used in the experiments reported in this section consist of timestamp ordered tuples, where each tuple includes a single attribute that can be either a set or a weighted set. The sets are composed of a variable number of items, where each item is an integer in the range [1..L]; the value of L is fixed in the experiments. The number of items contained in a set follows a normal distribution with mean mu and standard deviation sigma.
In the experiments, mu is taken as 5, with sigma fixed. The popularity of items, in terms of how frequently an item occurs in a set, follows a Zipf distribution with parameter kappa. The time based correlation between the streams is modeled using two parameters: the time shift parameter, denoted as tau, and the cycle period parameter, denoted as sigma_c (cycle period). The cycle period is used to change the popularity ranks of the items as a function of time. Initially, at time 0, the most popular item is 1, the next is 2, and so on. Later, at time T, the most popular item is a = 1 + floor(L * (T mod sigma_c)/sigma_c), the next is a + 1, and so on. The time shift is used to introduce a delay between matching items from different streams. Applying a time shift of tau to one of the streams means that, for that stream, the most popular item at time T is a = 1 + floor(L * ((T - tau) mod sigma_c)/sigma_c).

Figure 5 shows the resulting match probability distributions f_i when a time delay of tau is applied to one of the streams and the cycle period equals w, where w_1 = w_2 = w. The two histograms represent two different scenarios, in which kappa is taken as 0.6 and 0.8, respectively. These settings of the tau and cycle period parameters are also used in the rest of the experiments, unless otherwise stated. We change the value of the parameter kappa to model varying amounts of skew in the match probability distributions.

[Figure 5: Match probability distributions, kappa = 0.6 and kappa = 0.8]

The experiments are performed using time varying stream rates and various window sizes. The default settings of some of the system parameters are as follows: T_r = 5 seconds and T_c = 5 seconds, with delta_r and gamma set to fixed defaults. We report results from overlap join operations; other types of joins show similar results. The experiments are performed on an IBM PC with 512MB of main memory and a 2.4GHz Intel Pentium 4 processor, using Sun JDK 1.4.

6.2 Processing Power Limitation