Warehousing and Analyzing Massive RFID Data Sets


Hector Gonzalez    Jiawei Han    Xiaolei Li    Diego Klabjan
University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
{hagonzal, hanj, xli10, klabjan}@uiuc.edu

(This work was supported in part by the U.S. National Science Foundation (NSF IIS grants). Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding agencies.)

Abstract

Radio Frequency Identification (RFID) applications are set to play an essential role in object tracking and supply chain management systems. In the near future, it is expected that every major retailer will use RFID systems to track the movement of products from suppliers to warehouses, store backrooms, and eventually to points of sale. The volume of information generated by such systems can be enormous, since each individual item (a pallet, a case, or an SKU) will leave a trail of data as it moves through different locations. As a departure from the traditional data cube, we propose a new warehousing model that preserves object transitions while providing significant compression and path-dependent aggregates, based on the following observations: (1) items usually move together in large groups through the early stages in the system (e.g., distribution centers) and only in later stages (e.g., stores) do they move in smaller groups, and (2) although RFID data is registered at the primitive level, data analysis usually takes place at a higher abstraction level. Techniques for summarizing and indexing data, and methods for processing a variety of queries based on this framework, are developed in this study. Our experiments demonstrate the utility and feasibility of our design, data structures, and algorithms.

1 Introduction

Radio Frequency Identification (RFID) is a technology that allows a sensor (RFID reader) to read, from a distance and without line of sight, a unique identifier that is provided (via a radio signal) by an inexpensive tag attached to an item. RFID offers a possible alternative to bar code identification systems, and it facilitates applications like item tracking and inventory management in the supply chain. The technology holds the promise to streamline supply chain management, facilitate routing and distribution of products, and reduce costs by improving efficiency. Large retailers like Walmart, Target, and Albertsons have already begun implementing RFID systems in their warehouses and distribution centers, and are requiring their suppliers to tag products at the pallet and case levels. Individual tag prices are expected to fall from around 25 cents per unit to 5 cents per unit; at that price level, we can expect tags to be placed at the individual item level for many products. The main challenge then becomes: how can companies handle and interpret the enormous volume of data that an RFID application will generate? Venture Development Corporation [11], a research firm, predicts that when tags are used at the item level, Walmart will generate around 7 terabytes of data every day. Database vendors like Oracle, IBM, Teradata, and some startups are starting to provide solutions to integrate RFID information into enterprise data warehouses.

Example. Suppose a retailer with 3,000 stores sells 10,000 items a day per store. Assume that we record each item movement with a tuple of the form (EPC, location, time), where EPC is an Electronic Product Code which uniquely identifies each item. (We use the terms EPC and RFID tag interchangeably throughout the paper.)
If each item leaves only 10 traces before leaving the store by going through different locations, this application will generate at least 300 million tuples per day. A manager may ask queries on the duration of paths, like (Q1): "List the average shelf life of dairy products in 2003 by manufacturer", or on the structure of the paths, like (Q2): "What is the average time that it took coffee makers to move from the warehouse to the shelf and finally to the checkout counter in January of 2004?". New data structures and algorithms need to be developed that can provide fast responses to such queries even in the presence of terabyte-sized data. Such an enormous amount of low-level data and such flexible high-level queries pose great challenges to traditional relational and data warehouse technologies, since the processing may involve retrieval and reasoning over a large number of inter-related tuples through different stages of object movements.

No matter how the objects are sorted and clustered, it is difficult to support various kinds of high-level queries in a uniform and efficient way. A nontrivial number of queries may even require a full scan of the entire RFID database.

Model Proposal and Justification. In this paper we propose a new RFID data warehouse model to compress and aggregate RFID data in an organized way such that a wide range of queries can be answered efficiently. Our design is based on the following key observations.

First, we need to eliminate the redundancy present in RFID data. Each reader provides tuples of the form (EPC, location, time) at fixed time intervals. When an item stays at the same location for a period of time, multiple tuples will be generated. We can group these tuples into a single one of the form (EPC, location, time_in, time_out). For example, if a supermarket has readers on each shelf that scan the items every minute, and items stay on the shelf on average for 1 day, we get a 1,440-to-1 reduction in size without loss of information.

Second, items tend to move and stay together through different locations. For example, a pallet with 500 cases of CDs may arrive at the warehouse; from there, cases of 50 CDs may move to the shelf; and from there, packs of 5 CDs may move to the checkout counter. We can register a single stay tuple of the form (EPC_list, location, time_in, time_out) for the CDs that arrive in the same pallet and stay together in the warehouse, and thus generate an 80% space saving. At this point we can also perform data cleaning to eliminate possible inconsistencies due to failed scans by the RFID readers. We can, for example, use the assumption that many RFID objects stay or move together, especially at the early stages of distribution, or use the historically most likely path for a given item, to infer or interpolate the missing locations. An alternative compression mechanism is to store a single transition record for the 50 CDs that move together from the warehouse to the shelf, that is, to group transitions rather than stays. The problem with transition compression is that it makes it difficult to answer queries about items at a given location or going through a series of locations. For example, given the query "What is the average time that CDs stay at the shelf?", we can directly get the information from the stay records with location = shelf; but if we have transition records, we need to find all the transition records with origin = shelf and the ones with destination = shelf, join them on EPC, and compute departure_time - arrival_time.

Third, we can gain further compression by reducing the size of the EPC lists in the stay records, grouping items that move to the same locations. For example, say we have a stay record for the 50 CDs that stayed together at the warehouse, and the CDs then moved in two groups to the shelf and truck locations. We can replace the list of 50 EPCs in the warehouse stay record with just two generalized identifiers (gids), which in turn point to the concrete EPCs. In this example we store a total of 50 EPCs plus two gids, instead of 100 EPCs (50 in the warehouse, 25 in the shelf, 25 in the truck). In addition to the compression benefits, we can gain query processing speedup by assigning path-dependent names to the gids. In the CD example we could name the gid for the warehouse 1, the gid for the shelf 1.1, and the gid for the truck 1.2. Given the query "What is the average time to go from the warehouse to the shelf for CDs?", instead of intersecting the EPC lists of the stay records at each location, we can directly look at the gid names and determine whether the EPCs are linked.
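With this naming scheme, two stay records belong to the same path exactly when one record's gid is a dot-delimited prefix of the other's. A minimal sketch in Python (the function name and gid encoding are our own illustration, not part of the paper's implementation):

def is_linked(gid_a, gid_b):
    # Two stay records lie on the same path iff one gid is a
    # dot-delimited prefix of the other, e.g., "1" and "1.1".
    a, b = gid_a.split("."), gid_b.split(".")
    if len(a) > len(b):
        a, b = b, a
    return b[:len(a)] == a

# The warehouse record (gid "1") links to the shelf record ("1.1"),
# while the shelf ("1.1") and truck ("1.2") records are on different paths.
assert is_linked("1", "1.1")
assert not is_linked("1.1", "1.2")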
Fourth, most queries are likely to be asked at a high level of abstraction, and will only be interested in the low-level individual items if they are associated with some interesting pattern discovered at the high level. For example, query (Q1) asks about dairy products by manufacturer; it is possible that, after seeing the results, the user may ask subsequent queries and drill down to individual items. We can gain significant compression by creating the stay records not at the raw level but at a minimal level of abstraction shared by most applications, while keeping pointers to the RFID tags. This allows us to operate on a much smaller data set, fetching the original data only when absolutely necessary.

The rest of the paper is organized as follows. Section 2 presents the structure of the input RFID data. Section 3 presents data compression and generalization methods important for the design of the RFID warehouse. Section 4 introduces algorithms for constructing the RFID warehouse. Section 5 develops methods for efficient processing of a variety of RFID queries. Section 6 reports the experimental and performance results. We discuss related issues in Section 7 and conclude our study in Section 8.

2 RFID Data

Data generated from an RFID application can be seen as a stream of RFID tuples of the form (EPC, location, time), where EPC is the unique identifier read by an RFID reader, location is the place where the RFID reader scanned the item, and time is the time when the reading took place. Tuples are usually stored according to a time sequence. A single EPC may have multiple readings at the same location; each reading is generated by the RFID reader scanning for tags at fixed time intervals or on a continuous basis. Table 1 is an example of a raw RFID database, where a symbol starting with r represents an RFID tag, l a location, and t a time. The total number of records in this example is 188.

(r1,l1,t1) (r2,l1,t1) (r3,l1,t1) (r4,l1,t1) (r5,l2,t1) (r6,l2,t1) (r7,l2,t1) ...
(r1,l1,t9) (r2,l1,t9) (r3,l1,t9) (r4,l1,t9) ...
(r1,l1,t10) (r2,l1,t10) (r3,l1,t10) (r4,l1,t10) (r7,l4,t10) ...
(r7,l4,t19) ...
(r1,l3,t21) (r2,l3,t21) (r3,l3,t21) (r5,l3,t21) ...
(r6,l6,t35) ...
(r2,l5,t40) (r3,l5,t40) (r6,l6,t40) ... (r2,l5,t60) (r3,l5,t60)

Table 1. Raw RFID Records

In order to reduce the large amount of redundancy in the raw data, data cleaning should be performed. The output after data cleaning is a set of clean stay records of the form (EPC, location, time_in, time_out), where time_in is the time when the object enters the location, and time_out is the time when the object leaves the location. Data cleaning of stay records can be accomplished by sorting the raw data on EPC and time, and generating time_in and time_out for each location by merging consecutive records for the same object staying at the same location. Table 2 presents the RFID database of Table 1 after cleaning; it has been reduced from 188 records to just 17 records.

EPC | Stay (EPC, location, time_in, time_out)
r1  | (r1,l1,t1,t10) (r1,l3,t20,t30)
r2  | (r2,l1,t1,t10) (r2,l3,t20,t30) (r2,l5,t40,t60)
r3  | (r3,l1,t1,t10) (r3,l3,t20,t30) (r3,l5,t40,t60)
r4  | (r4,l1,t1,t10)
r5  | (r5,l2,t1,t8) (r5,l3,t20,t30) (r5,l5,t40,t60)
r6  | (r6,l2,t1,t8) (r6,l3,t20,t30) (r6,l6,t35,t50)
r7  | (r7,l2,t1,t8) (r7,l4,t10,t20)

Table 2. A Cleansed RFID Database
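The cleaning step is straightforward to implement. The following Python sketch (our own illustration; the paper does not prescribe an implementation) sorts the raw readings by EPC and time, and merges consecutive readings of the same tag at the same location into stay records:

def clean(raw):
    # raw: iterable of (epc, location, time) readings.
    # Returns stay records of the form (epc, location, time_in, time_out).
    stays = []
    for epc, loc, t in sorted(raw, key=lambda r: (r[0], r[2])):  # by EPC, time
        if stays and stays[-1][0] == epc and stays[-1][1] == loc:
            # Same tag still at the same location: extend the current stay.
            stays[-1] = (epc, loc, stays[-1][2], t)
        else:
            stays.append((epc, loc, t, t))
    return stays

# Two consecutive readings of r1 at shelf l1 collapse into one stay record.
print(clean([("r1", "l1", 1), ("r1", "l1", 2), ("r1", "l3", 20)]))
# [('r1', 'l1', 1, 2), ('r1', 'l3', 20, 20)]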
3 Architecture of the RFID Warehouse

Before we describe our proposed architecture for warehousing RFID data, it is important to explain why a traditional data cube model would fail on such data. Suppose we view the cleansed RFID data as a fact table with dimensions (EPC, location, time_in, time_out : measure). The data cube computes all possible group-bys on this fact table by aggregating records that share the same values (or any *) at all possible combinations of dimensions. If we use count as the measure, we can get, for example, the number of items that stayed at a given location for a given month. The problem with this form of aggregation is that it does not consider links between the records. For example, if we want the number of items of type "dairy product" that traveled from the distribution center in Chicago to stores in Urbana, we cannot get this information: we have the count of dairy products for each location, but we do not know how many of those items went from the first location to the second. We need a more powerful model capable of aggregating data while preserving its path-like structure.

We propose an RFID warehouse architecture that contains a fact table, stay, composed of cleansed RFID records; an information table, info, that stores path-independent information for each item, i.e., SKU information that is constant regardless of the location of the item, such as manufacturer, lot number, color, etc.; and a map table that links together the records in the fact table that form a path. We call the stay, info, and map tables aggregated at a given abstraction level an RFID-Cuboid. The main difference between the RFID warehouse and a traditional warehouse is the presence of the map table, which links records from the fact table (stay) in order to preserve the original structure of the data. The computation of RFID-Cuboids is more complex than that of regular cuboids, as we need to aggregate the data while preserving the structure of the paths at different abstraction levels. From the data storage and query processing point of view, the RFID warehouse can be viewed as a multi-level database: the raw RFID repository resides at the lowest level; on top of it are the cleansed RFID database, the minimum-abstraction-level RFID-Cuboids, and a sparse subset of the full cuboid lattice composed of frequently queried (popular) RFID-Cuboids.

3.1 Key Ideas of RFID Data Compression

Even with the removal of data redundancy from the RFID raw data, the cleansed RFID database is usually still enormous. Here we explore several general ideas for constructing a highly compact RFID data warehouse.

Taking advantage of bulky object movements. Since a large number of items travel and stay together through several stages, it is important to represent such a collective movement by a single record, no matter how many items were originally collected. For example, if 1,000 boxes of milk stayed in location loc_A between time t1 (time_in) and t2 (time_out), it is advantageous to register a single record in the database rather than 1,000 individual RFID records. The record has the form (gid, prod, loc_A, t1, t2, 1000), where 1,000 is the count, prod is the product id, and gid is a generalized identifier that does not point to the 1,000 original EPCs but instead points to the set of new gids to which the current set of objects moves. For example, if this set of objects were split into 10 partitions, each moving to one distinct location, gid would point to 10 distinct new gids, each representing a record. The process iterates until the end of the object movement, where the concrete EPCs are registered. By doing so, no information is lost, but the number of records needed to store the information is substantially reduced.

Taking advantage of data generalization. Since many users are only interested in data at a relatively high abstraction level, data compression can be achieved by grouping, merging, and compressing data records. For example, if the minimal granularity of time is the hour, then objects moving within the same hour can be seen as moving together and be merged into one movement. Similarly, if the granularity of location is the shelf, objects moving to different layers of a shelf can be seen as moving to the same shelf and be merged into one. Similar generalization can be performed for products (e.g., merging different-sized milk packages) and other data as well.

Taking advantage of the merge and/or collapse of path segments. In many analysis tasks, certain path segments can be ignored or merged for simplicity of analysis. For example, some non-essential object movements (e.g., from one shelf to another in a store) can be completely ignored in certain data analyses. Some path segments can be merged without affecting the analysis results; for store managers, merging all the movements before the object reaches the store could be desirable. Such merging and collapsing of path segments may substantially reduce the total size of the data and speed up the analysis process.

3.2 RFID-Cuboid

With these data compression principles in mind, we propose the RFID-Cuboid, a data structure for storing aggregated data in the RFID warehouse. Our design ensures that the data are disk-resident, summarizing the contents of a cleansed RFID database in a compact yet complete manner, while allowing efficient execution of both OLAP and tag-specific queries. The RFID-Cuboid consists of three tables: (1) Info, which stores product information for each RFID tag, (2) Stay, which stores information on items that stay together at a location, and (3) Map, which stores the path information necessary to link multiple stay records.

Information Table. The information table stores path-independent dimensions such as product name, manufacturer, product price, product category, etc. Each dimension can have an associated concept hierarchy. All traditional OLAP operations can be performed on these dimensions in conjunction with various RFID-specific analyses. For example, one could drill down on the product category dimension from clothing to shirts and retrieve shipment information only on shirts. Each entry in Info is a record of the form (EPC list), (d_1, ..., d_m) : (m_1, ..., m_i), where the EPC list contains a set of items that share the same values for dimensions d_1, ..., d_m, and m_1, ..., m_i are measures of the given items, e.g., price.
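As a concrete illustration of the info table and a drill-down on it, consider this Python sketch; the record layout mirrors the entry form above, while the field names and values are our own hypothetical examples:

# Each info record: (epc_list, dimensions, measures), mirroring
# the form (EPC list), (d_1,...,d_m) : (m_1,...,m_i) above.
info = [
    (["r1", "r2"], {"category": "clothing", "subcategory": "shirts"}, {"price": 9.99}),
    (["r3"],       {"category": "clothing", "subcategory": "pants"},  {"price": 19.99}),
]

def drill_down(records, dim, value):
    # Keep only the records matching the finer-grained dimension value.
    return [r for r in records if r[1].get(dim) == value]

# Drill down from all clothing to shirts only.
shirts = drill_down(info, "subcategory", "shirts")
print([epc for rec in shirts for epc in rec[0]])   # ['r1', 'r2']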
Stay Table. As mentioned in the introduction, items tend to move and stay together through different locations. Compressing multiple items that stay together at a single location is vital in order to reduce the enormous size of the cleansed RFID database. In real applications items tend to move in large groups: at a distribution center there may be tens of pallets staying together, which are later broken into individual pallets at the warehouse level. Even if products finally move at the individual item level from a shelf to the checkout counter, our stay compression saves space for all the previous steps taken by the item.

Each entry in Stay is a record of the form (gids, location, time_in, time_out) : (m_1, ..., m_k), where gids is a set of generalized record identifiers, each pointing to a list of RFID tags or lower-level gids; location is the location where the items stayed together; time_in is the time when the items entered the location; and time_out is the time when they left. If the items have not yet left the location, time_out is NULL. m_1, ..., m_k are the measures recorded for the stay, e.g., count, average time at the location, and maximal time at the location. Table 3 presents the stay table for the cleansed data of Table 2 (the gid and count columns follow the path-dependent naming scheme; the measure values are omitted). We have now gone from 188 records in the raw data, to 17 in the cleansed data, and to 7 in the compressed data.

gid              | location | time_in | time_out | count | measure
0.0              | l1       | t1      | t10      | 4     | ...
0.1              | l2       | t1      | t8       | 3     | ...
0.0.0            | l3       | t20     | t30      | 3     | ...
0.1.0            | l3       | t20     | t30      | 2     | ...
0.1.1            | l4       | t10     | t20      | 1     | ...
0.0.0.0, 0.1.0.0 | l5       | t40     | t60      | 3     | ...
0.1.0.1          | l6       | t35     | t50      | 1     | ...

Table 3. Stay Table

Map Table. The map table is an efficient structure that allows query processing to link together stages that belong to the same path in order to perform structure-aware analysis, which cannot be answered by a traditional data warehouse. There are two main reasons for using a map table instead of recording the complete EPC lists at each stage: (1) data compression, and (2) query processing efficiency. First, we do not want to record each RFID tag on the EPC list of every stay record it participated in. For example, if we assume that items move through 4 stages in groups of 10,000, 1,000, 100, and 10, then instead of using 40,000 units of storage for the EPCs in the stay records, we use only 1,111 units (1,000 for the last stage, and 100, 10, and 1 for the stages before it). This figure does not include the size of the map itself, which uses 12,221 units of storage, still much smaller than the full EPC lists. The second and more important reason for the map table is efficiency in query processing. Suppose each map entry is given a path-dependent label. To compute, for example, the average duration for milk to move from the distribution center (D), to the store backroom (B), and finally to the shelf (S), we need to locate the stay records for milk at each stage. To get the three sets of records D, B, and S, one would have to intersect the EPC lists of the records in D with those in B and S to get the paths. By using the map, the gid lists can be orders of magnitude shorter than the EPC lists and thus reduce I/O costs.

The map table contains mappings from higher-level gids to lower-level gids or EPCs. Each entry in Map is a record of the form (gid, (gid_1, ..., gid_n)), meaning that gid is composed of all the EPCs pointed to by gid_1, ..., gid_n. The lowest-level gids point directly to individual items. In order to facilitate query processing, we assign path-dependent labels to the high-level gids; the label contains one identifier per location traveled by the items in the gid.
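To make the map concrete, here is a small Python sketch (our illustration, with hypothetical gid labels) that expands a high-level gid to its EPCs and checks the storage arithmetic above:

def expand(map_table, gid):
    # Recursively expand a gid to the EPCs it covers. Identifiers that
    # have no map entry are concrete EPCs (the lowest level).
    children = map_table.get(gid)
    if children is None:
        return [gid]
    epcs = []
    for child in children:
        epcs.extend(expand(map_table, child))
    return epcs

# Hypothetical 2-stage example: pallet 1 splits into cases 1.1 and 1.2.
map_table = {"1": ["1.1", "1.2"], "1.1": ["e1", "e2"], "1.2": ["e3", "e4"]}
print(expand(map_table, "1"))     # ['e1', 'e2', 'e3', 'e4']

# Storage check for the 4-stage example above: the stay records hold
# 1 + 10 + 100 + 1000 = 1111 gid references instead of 4 * 10000 EPCs,
# and the map holds (1+10+100+1000) keys plus (10+100+1000+10000)
# children = 12221 units.
assert 1 + 10 + 100 + 1000 == 1111
assert (1 + 10 + 100 + 1000) + (10 + 100 + 1000 + 10000) == 12221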
3.3 Hierarchy of RFID-Cuboids

Each dimension in the stay and info tables has an associated concept hierarchy. A concept hierarchy is a partial order of mappings from lower levels of abstraction to higher ones. The lowest level corresponds to the values in the raw RFID data stream itself, together with the associated per-item information, which could be at the Stock Keeping Unit (SKU) level. The highest level is *, which represents any value of the dimension. In order to provide fast responses to queries specified at various levels of abstraction, it is important to pre-compute some RFID-Cuboids at different levels of the concept hierarchies for the dimensions of the info and stay tables. It is obviously too expensive to compute all possible generalizations, and partial materialization is the preferred choice. This problem is analogous to determining which set of cuboids in a data cube to materialize in order to answer OLAP queries efficiently given limits on storage space and precomputation time; the issue has been studied extensively in data cube research [1, 6], and the principles are generally applicable to the selective materialization of RFID-Cuboids. In our design, we suggest computing a set of RFID-Cuboids at the minimal interesting level, i.e., the lowest level at which users will want to query the database, and a small set of higher-level structures that are frequently requested and that can be used to quickly compute non-materialized RFID-Cuboids. An RFID-Cuboid residing at the minimal interesting level is computed directly from the cleansed RFID database and is the lowest cuboid that can be queried, unless one has to dig directly into the cleansed data in some very special cases.

4 Construction of an RFID Warehouse

In this section, we study the construction of the RFID warehouse from the cleansed RFID database. In order to construct the RFID-Cuboid, we need a compact data structure that allows us to do the following efficiently: (1) assign path-dependent labels to the gids, (2) minimize the number of output stay records while computing aggregates that preserve the path-like nature of the data, and (3) identify path segments that can be collapsed. We argue that a tree-like structure representing the different paths in the database is an ideal solution: each node in the tree represents a path stage, and all common path prefixes in the database share the same branch of the tree. Using the tree, we can traverse the nodes in breadth-first order while assigning path-dependent labels; we can quickly determine the minimum number of output stay records by aggregating the measures of all the items that share the same branch; and we can collapse path segments by simply merging parent/child nodes that correspond to the same location. Additionally, the tree can be constructed in a single scan of the cleansed RFID database, and it can be discarded after we have materialized the output info, stay, and map tables.

Algorithm 1 summarizes the method for constructing an RFID-Cuboid from the cleansed RFID database. It takes as input the clean stay records S, the info records I describing each item, and the level of abstraction L for each dimension. The output is an RFID-Cuboid represented by the stay, info, and map tables aggregated at the desired abstraction level. The algorithm first aggregates the information table to the level of abstraction specified by L. It then calls BuildPathTree (Algorithm 2) to construct a prefix tree for the paths in the cleansed database. Using the tree, we collapse consecutive nodes with the same location, generate the path-dependent gids, create a list of distinct measures at each node, and finally generate the output stay records for each node.

4.1 Construction of Higher-Level RFID-Cuboids from Lower-Level Ones

Once we have constructed the minimum-abstraction-level RFID-Cuboid, it is possible to gain efficiency by constructing higher-level RFID-Cuboids from an existing RFID-Cuboid instead of directly from the cleansed RFID data.

This can be accomplished by running Algorithm 1 with the stay and info tables of the lower-level RFID-Cuboid as input, and using the map table of the input RFID-Cuboid to expand each gid to the EPCs it points to. The benefit of this approach is that the input stay and info tables can be significantly smaller than the cleansed tables, thus greatly improving space and time efficiency.

Algorithm 1 BuildCuboid
Input: Stay records S, information records I, aggregation level L.
Output: RFID-Cuboid.
Method:
1: I' = aggregate I to L;
2: path_tree = BuildPathTree(S, L);
3: merge consecutive nodes with the same location in path_tree;
4: traverse path_tree in breadth-first order and assign a gid to each node (gid = parent.gid + "." + unique_id), where the parent/child gid relation in path_tree defines the output Map;
5: compute aggregate measures for each node, one per group of items with the same information record and in the same leaf;
6: create S' by traversing path_tree in breadth-first order and generating stay records for each node (multiple nodes can contribute to the same stay record);
7: output Map, S', and I'.

Observation. Given cleansed stay and info input tables, the RFID-Cuboid structure has the following properties:
1. (Construction cost) The RFID-Cuboid can be constructed with a single sequential scan of the cleansed stay table.
2. (Completeness) The RFID-Cuboid contains sufficient information to reconstruct the original RFID database aggregated at abstraction level L.
3. (Compactness) The number of records in the output stay and info tables is no larger than the number of records in the input stay and info tables, respectively.

Rationale. The first property follows from the RFID-Cuboid construction algorithm. The completeness property can be proved by using the map table to expand the gids of each stay record to recover the original data. The compactness property follows from noticing that Algorithm 1 emits at most one output record per distinct measure per node, and the number of distinct measures in a node is limited by the number of input stay records; the size of the output info table is, by definition of the group-by operation, bounded by the size of the input info table.

Algorithm 2 BuildPathTree
Input: Stay records S, aggregation level L.
Output: path_tree.
Method:
1: root = new node;
2: for each record s in S do
3:   s = aggregate s to level L;
4:   parent = lookup node for s.rfid;
5:   if parent == NULL then
6:     parent = root;
7:   end if
8:   node = lookup s.rfid in parent's children;
9:   if node == NULL then
10:    node = new node;
11:    node.loc = s.loc;
12:    node.t1 = s.t1;
13:    node.t2 = s.t2;
14:    add node to parent's children;
15:  end if
16:  node.measure_list += (s.gid, s.measure);
17: end for
18: return root
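The following Python sketch is our own simplified rendering of the core of Algorithms 1 and 2, not the authors' code: it builds the path prefix tree from cleansed stays grouped per item, assigns path-dependent gids breadth-first, and emits stay and map entries. For brevity it omits measures, location collapsing, and the merging of nodes into shared stay records:

from collections import deque

def build_cuboid(paths):
    # paths: {epc: [(loc, t_in, t_out), ...]} from the cleansed database.
    # Children are keyed by the full stay tuple, so items with identical
    # stay prefixes share a branch (they moved together).
    root = {"children": {}, "epcs": []}
    for epc, stays in paths.items():
        node = root
        for stage in stays:
            node = node["children"].setdefault(stage, {"children": {}, "epcs": []})
        node["epcs"].append(epc)          # concrete EPCs at the last stage
    # Breadth-first traversal: assign path-dependent gids, emit stay and map.
    stay, gid_map = [], {}
    queue = deque([(root, "0")])
    while queue:
        node, gid = queue.popleft()
        child_gids = []
        for i, ((loc, t_in, t_out), child) in enumerate(node["children"].items()):
            cgid = gid + "." + str(i)     # path-dependent label
            stay.append((cgid, loc, t_in, t_out))
            child_gids.append(cgid)
            queue.append((child, cgid))
        gid_map[gid] = child_gids + node["epcs"]   # child gids, then EPCs
    return stay, gid_map

# r1 and r2 move together l1 -> l3; r3 stops at l1.
paths = {
    "r1": [("l1", 1, 10), ("l3", 20, 30)],
    "r2": [("l1", 1, 10), ("l3", 20, 30)],
    "r3": [("l1", 1, 10)],
}
stay, gid_map = build_cuboid(paths)
print(stay)     # [('0.0', 'l1', 1, 10), ('0.0.0', 'l3', 20, 30)]
print(gid_map)  # {'0': ['0.0'], '0.0': ['0.0.0', 'r3'], '0.0.0': ['r1', 'r2']}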
5 Query Processing

In this section we discuss the implementation of the basic OLAP operations, i.e., drill-down, roll-up, slice, and dice, applied to the RFID data warehouse, and we introduce a new operation, path selection, relevant to the paths traveled by items. Given the very large size and high dimensionality of the RFID warehouse, we can only materialize a small fraction of the total number of RFID-Cuboids. We compute the RFID-Cuboid that resides at the minimum abstraction layer interesting to users, and those RFID-Cuboids that are frequently requested. Initially, the warehouse designer may decide to materialize a subset of RFID-Cuboids that are interesting to the users; as the query log grows, we can perform frequency counting to determine frequently requested RFID-Cuboids that should be pre-computed. When a roll-up or drill-down operation requires an RFID-Cuboid that has not yet been materialized, it is computed on the fly from an existing RFID-Cuboid that is close to the required one but at a lower abstraction level. The slice and dice operations can be implemented quite efficiently using relational query execution and optimization techniques. An example of the dice operation is: "Give me the average time that milk stays at the shelf." This query can be answered by the relational expression σ_{stay.location = 'shelf', info.product = 'milk'}(stay ⋈_gid info).
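As a sketch of how such a dice operation might be evaluated over the stay and info tables, here is a simplified in-memory illustration in Python (the field layout is hypothetical, and the gid/EPC indirection through the map is ignored):

# stay: (gid, location, time_in, time_out); info: (gid, product).
stay = [("0.0", "shelf", 10, 40), ("0.1", "shelf", 15, 20), ("0.2", "truck", 5, 9)]
info = [("0.0", "milk"), ("0.1", "milk"), ("0.2", "bread")]

product = {gid: p for gid, p in info}        # join stay with info on gid
durations = [t_out - t_in
             for gid, loc, t_in, t_out in stay
             if loc == "shelf" and product.get(gid) == "milk"]
print(sum(durations) / len(durations))       # 17.5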

Path Selection. Path queries, which ask for information related to the structure of object traversal paths, are unique to the RFID warehouse, since the concept of object movement is not modeled in traditional data warehouses. It is essential to allow users to ask for an aggregate measure computed over a predefined sequence of locations (a path). One such example is: "What is the average time for milk to go from farms to stores in Illinois?" Queries on the paths traveled by items are fundamental to many RFID applications and are the building block on top of which more complex data mining operators can be implemented. We illustrate this point with a real example. The United States government is currently in the process of requiring containers arriving in the country by ship to carry an RFID tag. This information can be used to determine whether the path traveled by a given container has deviated from its historic path. Such an application may need to first execute a path selection query across different time periods, and then use outlier detection and clustering to analyze the relevant paths.

More formally, a path selection query is of the form q = ⟨σ_c info, (σ_{c1} stage_1, ..., σ_{ck} stage_k)⟩, where σ_c info denotes a selection on the info table based on condition c, and σ_{ci} stage_i denotes a selection on the stay table based on condition c_i. The result of a path selection query is the set of paths whose stages match the stage conditions in the correct order (possibly with gaps), and whose items match condition c. The query expression for the example path query above is c: product = milk, c_1: location = farm, and c_2: location = store. We can compute aggregate measures on the results of a path selection query; for the example query, the aggregate measure would be the average time.

Algorithm 3 illustrates the process of selecting the gids matching a given query. We first select the gids of the stay records that match the conditions for the initial and final stages of the query expression. For example, g_start may look like {1.2, 8.3.1, 3.4} and g_end may look like {1.2.4.5, 4.3, 3.4.3}. We then compute the pairs of gids from g_start that are a prefix of a gid in g_end; continuing with the example, we get the pairs (1.2, 1.2.4.5) and (3.4, 3.4.3). For each pair we then retrieve all the stay records; the pair (1.2, 1.2.4.5) requires us to retrieve the stay records that include gids 1.2, 1.2.4, and 1.2.4.5. Finally, we verify that each of these records matches the selection conditions for each stage_i and for info, and we add the qualifying paths to the answer set. If we have statistics on query selectivity, it may be possible to find a better query execution plan than the one presented in Algorithm 3: given a sequence of stages (stage_1, ..., stage_k), we could retrieve the records for the most selective stages, in addition to the stay records for stage_1 and stage_k, in order to further prune the search space.

Algorithm 3 PathSelection
Input: q = ⟨σ_c info, (σ_{c1} stage_1, ..., σ_{ck} stage_k)⟩, RFID warehouse.
Output: the paths that match the query conditions q.
Method:
1: g_start = select gids of stay records matching the condition at stage_1;
2: g_end = select gids of stay records matching the condition at stage_k and the condition on info;
3: for every pair of gids (s, e) in g_start, g_end such that s is a prefix of e do
4:   path = retrieve stay records for all gids from s to e;
5:   if the stay records in path match the conditions on info and on the remaining stages then
6:     answer = answer + path;
7:   end if
8: end for
9: return answer
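A minimal Python sketch of this strategy (our illustration: stage conditions are reduced to simple location predicates, the map indirection is omitted, and each gid is assumed to appear in one stay record):

def path_select(stay, start_loc, end_loc):
    # stay: list of (gid, location, time_in, time_out) records.
    by_gid = {rec[0]: rec for rec in stay}
    g_start = [g for g, rec in by_gid.items() if rec[1] == start_loc]
    g_end = [g for g, rec in by_gid.items() if rec[1] == end_loc]
    answer = []
    for s in g_start:
        for e in g_end:
            if e.startswith(s + "."):        # s is a prefix of e: same path
                # Retrieve every stage from s down to e (s, s.x, s.x.y, ...).
                parts = e.split(".")
                depth = len(s.split("."))
                gids = [".".join(parts[:i]) for i in range(depth, len(parts) + 1)]
                answer.append([by_gid[g] for g in gids if g in by_gid])
    return answer

stay = [("1.2", "farm", 1, 5), ("1.2.4", "truck", 6, 8), ("1.2.4.5", "store", 9, 12)]
print(path_select(stay, "farm", "store"))
# [[('1.2', 'farm', 1, 5), ('1.2.4', 'truck', 6, 8), ('1.2.4.5', 'store', 9, 12)]]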

6 Performance Study

In this section we perform a thorough analysis of our model and algorithms. All experiments were implemented in C++ and conducted on an Intel Xeon 2.5 GHz (512 KB L2 cache) system with 3 GB of RAM, running Red Hat Linux.

6.1 Data Synthesis

The RFID databases in our experiments were generated using a tree model of object movements. Each node in the tree represents a set of items at a location, and an edge represents a movement of objects between locations. We assume that items at locations near the root of the tree move in larger groups, while items near the leaves move in smaller groups. The sizes of the groups at each level of the tree define the bulkiness B = (s_1, s_2, ..., s_k), where s_i is the number of objects that stay and move together at level i of the tree. By making s_i ≤ s_j for i > j, we create the effect of items moving in larger groups near the factory and distribution centers, and in smaller groups at the store level. We generated the databases for the experiments by randomly constructing a set of trees with a given level of bulkiness, and generating the cleansed RFID records corresponding to the item movements indicated by the edges of the trees. As a notational convenience, we use the following symbols to denote dataset parameters:

B = (s_1, ..., s_k)   path bulkiness
k                     average path length
P                     number of products
N                     number of cleansed RFID records

6.2 RFID-Cuboid Compression

The RFID-Cuboids form the basis for subsequent query processing and analysis. As mentioned previously, the advantage of these data structures is that they aggregate and collapse many records of the cleansed RFID database. Here we examine the effects of this compression on different data sets. We compare two compression strategies; both use the stay and info tables, but one uses the map table as described in the paper (map), whereas the other uses an EPC list to record the items in each stay record (nomap).

Figure 1 compares the size of the cleansed RFID database (clean) with the map and nomap RFID-Cuboids. The datasets contain 1,000 distinct products traveling in groups of 500, 150, 40, 8, and 1 through 5 path stages, with 500 thousand to 10 million cleansed RFID records. The RFID-Cuboid is computed at the same level of abstraction as the cleansed RFID data, so the compression is lossless. As can be seen from Figure 1, the RFID-Cuboid that uses the map has a compression power of around 80%, while the one that uses EPC lists has a compression power of around 65%.

Figure 1. Compression vs. Cleansed Data Size. P = 1000, B = (500, 150, 40, 8, 1), k = 5.

Figure 2 also compares the size of the cleansed RFID database (clean) with the map and nomap RFID-Cuboids; in this case we vary the degree of bulkiness of the paths, i.e., the number of tags that stay and move together through the system. We define 5 levels of bulkiness: a = (500, 230, 125, 63, 31), b = (500, 250, 83, 27, 9), c = (500, 150, 40, 8, 1), d = (200, 40, 8, 1, 1), and e = (100, 10, 1, 1, 1); the bulkiness decreases from dataset a to dataset e. As the figure shows, for more bulky data the RFID-Cuboid that uses the map clearly outperforms the nomap cuboid; as we move towards less bulky data, the benefit of the map decreases, since many map entries point to just one gid. For paths where a significant portion of the stages are traveled by a single item, the benefit of the map disappears and we are better off using EPC lists. A possible solution to this problem is to compress all map entries that have a single child into one.

Figure 2. Compression vs. Data Bulkiness. P = 1000, N = 1,000,000, k = 5.

Figure 3 shows the compression obtained by climbing the concept hierarchies of the dimensions in the stay and info tables. Level-0 cuboids are at the same level of the hierarchy as the cleansed RFID data; the three higher-level cuboids offer one, two, and three levels of aggregation, respectively, on all dimensions (location, time, product, manufacturer, color). As expected, the size of the cuboids decreases at higher levels. In general the cuboid using the map is smaller, but at the topmost level of abstraction its size is the same as that of the nomap cuboid: at level 3 the stay table contains just 96 records, and most of the space is actually used to record the RFID tags themselves, so it makes little difference whether we use a map or not.

Figure 3. Compression vs. Abstraction Level. P = 1000, B = (500, 150, 40, 8, 1), k = 5, N = 1,000,000.

Figure 4 shows the time needed to build the RFID-Cuboids at the same four levels of abstraction used in Figure 3.
In all cases the cuboid was constructed starting from the cleansed database. We can see that cuboid construction time does not increase significantly with the level of abstraction. This is expected, as the only portion of the algorithm that incurs extra cost at higher levels of abstraction is the aggregation of the info table, which in this case contains only 1,000 entries.

This is common, as we expect the cleansed RFID stay table to be orders of magnitude larger than the info table. The computation of RFID-Cuboids can also be done from lower-level cuboids instead of from the cleansed database: for cuboids 1 to 3 of Figure 4, we obtain savings of 50% to 80% in computation time if we build cuboid i from cuboid i-1.

Figure 4. Construction Time. P = 1000, B = (500, 150, 40, 8, 1), k = 5.

6.3 Query Processing

A major contribution of the RFID data warehouse model is the ability to efficiently answer many types of queries at various levels of aggregation. In this section we demonstrate this efficiency in several settings. We compare query execution under three scenarios: the first directly uses the cleansed RFID database (clean); the second uses the stay table but no map, recording EPC lists in each stay record instead (nomap); and the third is the RFID-Cuboid described in the paper, using the stay and map tables (map). We assume that in each scenario we have a B+-tree index on each of the dimensions. In the case of the map cuboid, the index points to a list of gids matching the index entry; in the case of the nomap cuboid and the cleansed database, the index points to tuples of the form (RFID tag, record id). This is necessary because each RFID tag can be present in multiple records. The query answering strategy for the map cuboid is the one presented in Algorithm 3; the strategy for the other two cases is to retrieve the (RFID tag, record id) pairs matching each component of the query, intersect them, and finally retrieve the relevant records. For the experiments we assume a page size of 4,096 bytes, and that RFID tags, record ids, and gids use 4 bytes each. We also assume that all the indices fit in memory except for their last level. For each experiment we generated 100 random path queries. Each query specifies a product, a varying number of locations (3 on average), and a time range for entering the last stage (time_out); semantically this is equivalent to asking "What is the average time for product X to go through locations L_1, ..., L_k, entering location L_k between times t_1 and t_2?".

Figure 5 shows the effect of different cleansed database sizes on query processing. The map cuboid outperforms the cleansed database by several orders of magnitude and, most importantly, its query answering time is independent of the database size. The nomap cuboid is significantly faster than the cleansed data, but it suffers from having to retrieve very long EPC lists for each stage. The map cuboid benefits from using very short gid lists, and from the path-dependent gid naming scheme, which makes it possible to determine whether two stay records form a path without retrieving all the intermediate stages.

Figure 5. Query Processing I. P = 1000, B = (500, 150, 40, 8, 1), k = 5.

Figure 6 shows the effect of path bulkiness on query processing. For this experiment we keep the number of stay records constant at 1 million; the bulkiness levels are the same as those used for the experiment of Figure 2. As in the compression experiment, for more bulky paths the map cuboid is an order of magnitude faster than the cleansed RFID database.
As paths become less bulky, the benefit of compressing multiple stay records decreases, to the point where the map cuboid is no better than the cleansed database. The difference between the map and nomap cuboids is almost an order of magnitude for bulky paths, but, as in the previous case, for less bulky paths the advantage of the map decreases.

Figure 6. Query Processing II. P = 1000, k = 5.

7 Related Work

RFID technology has been researched from several perspectives: (1) the physics of building tags and readers [3, 7]; (2) the techniques required to guarantee privacy and safety [9]; and (3) the software architecture required to collect, filter, organize, and answer online queries on tags [2, 4, 8]. The software architecture line of research is the closest to our work, but it differs in that it is mainly concerned with online transaction processing (OLTP) rather than OLAP-based data warehousing. [8] presents a comprehensive framework for the online management of RFID data, called the EPC Global Network. [2] presents an overview of RFID data management from a high-level perspective and introduces the idea of an online warehouse, but without going into detail at the level of data structures or algorithms.

An RFID data warehouse shares many common principles with the traditional data cube [5]. Both aggregate data at different levels of abstraction in multi-dimensional space, and since each dimension has an associated concept hierarchy, both can be (at least partially) modeled by a star schema. The problem of deciding which RFID-Cuboids to construct in order to answer efficiently a variety of queries specified at different abstraction levels is analogous to the problem of partial data cube materialization studied in [6, 10]. However, the RFID-Cuboid differs from a traditional data cube in that it also models object transitions in multi-dimensional space.

8 Conclusions

We have proposed a novel model for warehousing RFID data that allows high-level analysis to be performed efficiently and flexibly in multi-dimensional space. The model is composed of a hierarchy of highly compact summaries (RFID-Cuboids) of the RFID data, aggregated at the different abstraction levels where data analysis takes place. Each RFID-Cuboid records item movements in the stay, info, and map tables, which take advantage of the fact that individual items tend to move and stay together (especially at higher abstraction levels) to collapse multiple movements into a single record without loss of information. Our performance study shows that RFID-Cuboids at interesting abstraction levels can be orders of magnitude smaller than the original RFID database and can be constructed efficiently. Moreover, we show the power of our data structures and algorithms in efficiently answering a wide range of RFID queries, especially those related to object transitions.

Our study has focused on efficient data warehousing and OLAP-style analysis of RFID data. Efficient methods for a multitude of other data mining problems on RFID data (e.g., trend analysis, outlier detection, path clustering) remain open and should be a promising line of future research. Note also that our proposal of the RFID model, and the subsequent methods for warehouse construction and query analysis, is based on the assumption that RFID-tagged objects tend to move together in bulky mode, especially at the early stages. This fits a good number of RFID applications, such as supply chain management; however, there are also applications where RFID data may not have such characteristics, and we believe further research is needed to construct efficient models for them.

References

[1] S. Chaudhuri and U. Dayal. An overview of data warehousing and OLAP technology. SIGMOD Record, 26:65-74, 1997.
[2] S. Chawathe, V. Krishnamurthy, S. Ramachandran, and S. Sarma. Managing RFID data. In Proc. Int. Conf. on Very Large Data Bases (VLDB'04).
[3] K. Finkenzeller. RFID Handbook: Fundamentals and Applications in Contactless Smart Cards and Identification. John Wiley and Sons.
[4] C. Floerkemeier, D. Anarkat, T. Osinski, and M. Harrison. PML core specification 1.0. White paper, MIT Auto-ID Center, org/standards technology/secure/v1.0/PML Core Specification v1.0.pdf.
[5] J. Gray, S. Chaudhuri, A. Bosworth, A. Layman, D. Reichart, M. Venkatrao, F. Pellow, and H. Pirahesh. Data cube: A relational aggregation operator generalizing group-by, cross-tab and sub-totals. Data Mining and Knowledge Discovery, 1:29-54, 1997.
[6] V. Harinarayan, A. Rajaraman, and J. D. Ullman. Implementing data cubes efficiently. In Proc. 1996 ACM-SIGMOD Int. Conf. on Management of Data (SIGMOD'96).
[7] S. Sarma. Integrating RFID. ACM Queue, 2(7):50-57, October 2004.
[8] S. Sarma, D. L. Brock, and K. Ashton. The networked physical world. White paper, MIT Auto-ID Center, publishedresearch/mit-autoid-wh-001.pdf.
[9] S. E. Sarma, S. A. Weis, and D. W. Engels. RFID systems, security & privacy implications. White paper, MIT Auto-ID Center, publishedresearch/mit-autoid-wh-014.pdf.
[10] A. Shukla, P. Deshpande, and J. F. Naughton. Materialized view selection for multidimensional datasets. In Proc. 1998 Int. Conf. on Very Large Data Bases (VLDB'98).
[11] Venture Development Corporation (VDC). vdc-corp.com/.


More information

Data Warehousing and OLAP Technologies for Decision-Making Process

Data Warehousing and OLAP Technologies for Decision-Making Process Data Warehousing and OLAP Technologies for Decision-Making Process Hiren H Darji Asst. Prof in Anand Institute of Information Science,Anand Abstract Data warehousing and on-line analytical processing (OLAP)

More information

OLAP Introduction and Overview

OLAP Introduction and Overview 1 CHAPTER 1 OLAP Introduction and Overview What Is OLAP? 1 Data Storage and Access 1 Benefits of OLAP 2 What Is a Cube? 2 Understanding the Cube Structure 3 What Is SAS OLAP Server? 3 About Cube Metadata

More information

Evolution of Database Systems

Evolution of Database Systems Evolution of Database Systems Krzysztof Dembczyński Intelligent Decision Support Systems Laboratory (IDSS) Poznań University of Technology, Poland Intelligent Decision Support Systems Master studies, second

More information

Data Warehousing and Decision Support

Data Warehousing and Decision Support Data Warehousing and Decision Support [R&G] Chapter 23, Part A CS 4320 1 Introduction Increasingly, organizations are analyzing current and historical data to identify useful patterns and support business

More information

Leveraging Set Relations in Exact Set Similarity Join

Leveraging Set Relations in Exact Set Similarity Join Leveraging Set Relations in Exact Set Similarity Join Xubo Wang, Lu Qin, Xuemin Lin, Ying Zhang, and Lijun Chang University of New South Wales, Australia University of Technology Sydney, Australia {xwang,lxue,ljchang}@cse.unsw.edu.au,

More information

Computing Data Cubes Using Massively Parallel Processors

Computing Data Cubes Using Massively Parallel Processors Computing Data Cubes Using Massively Parallel Processors Hongjun Lu Xiaohui Huang Zhixian Li {luhj,huangxia,lizhixia}@iscs.nus.edu.sg Department of Information Systems and Computer Science National University

More information

1. Attempt any two of the following: 10 a. State and justify the characteristics of a Data Warehouse with suitable examples.

1. Attempt any two of the following: 10 a. State and justify the characteristics of a Data Warehouse with suitable examples. Instructions to the Examiners: 1. May the Examiners not look for exact words from the text book in the Answers. 2. May any valid example be accepted - example may or may not be from the text book 1. Attempt

More information

Trajectory Data Warehouses: Proposal of Design and Application to Exploit Data

Trajectory Data Warehouses: Proposal of Design and Application to Exploit Data Trajectory Data Warehouses: Proposal of Design and Application to Exploit Data Fernando J. Braz 1 1 Department of Computer Science Ca Foscari University - Venice - Italy fbraz@dsi.unive.it Abstract. In

More information

IT DATA WAREHOUSING AND DATA MINING UNIT-2 BUSINESS ANALYSIS

IT DATA WAREHOUSING AND DATA MINING UNIT-2 BUSINESS ANALYSIS PART A 1. What are production reporting tools? Give examples. (May/June 2013) Production reporting tools will let companies generate regular operational reports or support high-volume batch jobs. Such

More information

Decision Support, Data Warehousing, and OLAP

Decision Support, Data Warehousing, and OLAP Decision Support, Data Warehousing, and OLAP : Contents Terminology : OLAP vs. OLTP Data Warehousing Architecture Technologies References 1 Decision Support and OLAP Information technology to help knowledge

More information

Segregating Data Within Databases for Performance Prepared by Bill Hulsizer

Segregating Data Within Databases for Performance Prepared by Bill Hulsizer Segregating Data Within Databases for Performance Prepared by Bill Hulsizer When designing databases, segregating data within tables is usually important and sometimes very important. The higher the volume

More information

Cube-Lifecycle Management and Applications

Cube-Lifecycle Management and Applications Cube-Lifecycle Management and Applications Konstantinos Morfonios National and Kapodistrian University of Athens, Department of Informatics and Telecommunications, University Campus, 15784 Athens, Greece

More information

Full file at

Full file at Chapter 2 Data Warehousing True-False Questions 1. A real-time, enterprise-level data warehouse combined with a strategy for its use in decision support can leverage data to provide massive financial benefits

More information

Data warehouse architecture consists of the following interconnected layers:

Data warehouse architecture consists of the following interconnected layers: Architecture, in the Data warehousing world, is the concept and design of the data base and technologies that are used to load the data. A good architecture will enable scalability, high performance and

More information

Data Warehousing and Decision Support

Data Warehousing and Decision Support Data Warehousing and Decision Support Chapter 23, Part A Database Management Systems, 2 nd Edition. R. Ramakrishnan and J. Gehrke 1 Introduction Increasingly, organizations are analyzing current and historical

More information

This tutorial will help computer science graduates to understand the basic-to-advanced concepts related to data warehousing.

This tutorial will help computer science graduates to understand the basic-to-advanced concepts related to data warehousing. About the Tutorial A data warehouse is constructed by integrating data from multiple heterogeneous sources. It supports analytical reporting, structured and/or ad hoc queries and decision making. This

More information

Data Mining. Data warehousing. Hamid Beigy. Sharif University of Technology. Fall 1394

Data Mining. Data warehousing. Hamid Beigy. Sharif University of Technology. Fall 1394 Data Mining Data warehousing Hamid Beigy Sharif University of Technology Fall 1394 Hamid Beigy (Sharif University of Technology) Data Mining Fall 1394 1 / 22 Table of contents 1 Introduction 2 Data warehousing

More information

CSE 544 Principles of Database Management Systems. Fall 2016 Lecture 14 - Data Warehousing and Column Stores

CSE 544 Principles of Database Management Systems. Fall 2016 Lecture 14 - Data Warehousing and Column Stores CSE 544 Principles of Database Management Systems Fall 2016 Lecture 14 - Data Warehousing and Column Stores References Data Cube: A Relational Aggregation Operator Generalizing Group By, Cross-Tab, and

More information

CHAPTER-23 MINING COMPLEX TYPES OF DATA

CHAPTER-23 MINING COMPLEX TYPES OF DATA CHAPTER-23 MINING COMPLEX TYPES OF DATA 23.1 Introduction 23.2 Multidimensional Analysis and Descriptive Mining of Complex Data Objects 23.3 Generalization of Structured Data 23.4 Aggregation and Approximation

More information

The strategic advantage of OLAP and multidimensional analysis

The strategic advantage of OLAP and multidimensional analysis IBM Software Business Analytics Cognos Enterprise The strategic advantage of OLAP and multidimensional analysis 2 The strategic advantage of OLAP and multidimensional analysis Overview Online analytical

More information

Constructing Object Oriented Class for extracting and using data from data cube

Constructing Object Oriented Class for extracting and using data from data cube Constructing Object Oriented Class for extracting and using data from data cube Antoaneta Ivanova Abstract: The goal of this article is to depict Object Oriented Conceptual Model Data Cube using it as

More information

Data Collection, Preprocessing and Implementation

Data Collection, Preprocessing and Implementation Chapter 6 Data Collection, Preprocessing and Implementation 6.1 Introduction Data collection is the loosely controlled method of gathering the data. Such data are mostly out of range, impossible data combinations,

More information

Coarse Grained Parallel On-Line Analytical Processing (OLAP) for Data Mining

Coarse Grained Parallel On-Line Analytical Processing (OLAP) for Data Mining Coarse Grained Parallel On-Line Analytical Processing (OLAP) for Data Mining Frank Dehne 1,ToddEavis 2, and Andrew Rau-Chaplin 2 1 Carleton University, Ottawa, Canada, frank@dehne.net, WWW home page: http://www.dehne.net

More information

DATA WAREHOUING UNIT I

DATA WAREHOUING UNIT I BHARATHIDASAN ENGINEERING COLLEGE NATTRAMAPALLI DEPARTMENT OF COMPUTER SCIENCE SUB CODE & NAME: IT6702/DWDM DEPT: IT Staff Name : N.RAMESH DATA WAREHOUING UNIT I 1. Define data warehouse? NOV/DEC 2009

More information

Efficient Computation of Data Cubes. Network Database Lab

Efficient Computation of Data Cubes. Network Database Lab Efficient Computation of Data Cubes Network Database Lab Outlines Introduction Some CUBE Algorithms ArrayCube PartitionedCube and MemoryCube Bottom-Up Cube (BUC) Conclusions References Network Database

More information

Chapter 5, Data Cube Computation

Chapter 5, Data Cube Computation CSI 4352, Introduction to Data Mining Chapter 5, Data Cube Computation Young-Rae Cho Associate Professor Department of Computer Science Baylor University A Roadmap for Data Cube Computation Full Cube Full

More information

A Novel Approach of Data Warehouse OLTP and OLAP Technology for Supporting Management prospective

A Novel Approach of Data Warehouse OLTP and OLAP Technology for Supporting Management prospective A Novel Approach of Data Warehouse OLTP and OLAP Technology for Supporting Management prospective B.Manivannan Research Scholar, Dept. Computer Science, Dravidian University, Kuppam, Andhra Pradesh, India

More information

SA-IFIM: Incrementally Mining Frequent Itemsets in Update Distorted Databases

SA-IFIM: Incrementally Mining Frequent Itemsets in Update Distorted Databases SA-IFIM: Incrementally Mining Frequent Itemsets in Update Distorted Databases Jinlong Wang, Congfu Xu, Hongwei Dan, and Yunhe Pan Institute of Artificial Intelligence, Zhejiang University Hangzhou, 310027,

More information

CS614 - Data Warehousing - Midterm Papers Solved MCQ(S) (1 TO 22 Lectures)

CS614 - Data Warehousing - Midterm Papers Solved MCQ(S) (1 TO 22 Lectures) CS614- Data Warehousing Solved MCQ(S) From Midterm Papers (1 TO 22 Lectures) BY Arslan Arshad Nov 21,2016 BS110401050 BS110401050@vu.edu.pk Arslan.arshad01@gmail.com AKMP01 CS614 - Data Warehousing - Midterm

More information

Chapter 11: Indexing and Hashing

Chapter 11: Indexing and Hashing Chapter 11: Indexing and Hashing Basic Concepts Ordered Indices B + -Tree Index Files B-Tree Index Files Static Hashing Dynamic Hashing Comparison of Ordered Indexing and Hashing Index Definition in SQL

More information

PSON: A Parallelized SON Algorithm with MapReduce for Mining Frequent Sets

PSON: A Parallelized SON Algorithm with MapReduce for Mining Frequent Sets 2011 Fourth International Symposium on Parallel Architectures, Algorithms and Programming PSON: A Parallelized SON Algorithm with MapReduce for Mining Frequent Sets Tao Xiao Chunfeng Yuan Yihua Huang Department

More information

Chapter 6 VIDEO CASES

Chapter 6 VIDEO CASES Chapter 6 Foundations of Business Intelligence: Databases and Information Management VIDEO CASES Case 1a: City of Dubuque Uses Cloud Computing and Sensors to Build a Smarter, Sustainable City Case 1b:

More information

Processing of Very Large Data

Processing of Very Large Data Processing of Very Large Data Krzysztof Dembczyński Intelligent Decision Support Systems Laboratory (IDSS) Poznań University of Technology, Poland Software Development Technologies Master studies, first

More information

Chapter 12: Query Processing. Chapter 12: Query Processing

Chapter 12: Query Processing. Chapter 12: Query Processing Chapter 12: Query Processing Database System Concepts, 6 th Ed. See www.db-book.com for conditions on re-use Chapter 12: Query Processing Overview Measures of Query Cost Selection Operation Sorting Join

More information

Teradata Aggregate Designer

Teradata Aggregate Designer Data Warehousing Teradata Aggregate Designer By: Sam Tawfik Product Marketing Manager Teradata Corporation Table of Contents Executive Summary 2 Introduction 3 Problem Statement 3 Implications of MOLAP

More information

Rocky Mountain Technology Ventures

Rocky Mountain Technology Ventures Rocky Mountain Technology Ventures Comparing and Contrasting Online Analytical Processing (OLAP) and Online Transactional Processing (OLTP) Architectures 3/19/2006 Introduction One of the most important

More information

Lectures for the course: Data Warehousing and Data Mining (IT 60107)

Lectures for the course: Data Warehousing and Data Mining (IT 60107) Lectures for the course: Data Warehousing and Data Mining (IT 60107) Week 1 Lecture 1 21/07/2011 Introduction to the course Pre-requisite Expectations Evaluation Guideline Term Paper and Term Project Guideline

More information

Data Warehouse and Data Mining

Data Warehouse and Data Mining Data Warehouse and Data Mining Lecture No. 04-06 Data Warehouse Architecture Naeem Ahmed Email: naeemmahoto@gmail.com Department of Software Engineering Mehran Univeristy of Engineering and Technology

More information

Data Warehouse Design Using Row and Column Data Distribution

Data Warehouse Design Using Row and Column Data Distribution Int'l Conf. Information and Knowledge Engineering IKE'15 55 Data Warehouse Design Using Row and Column Data Distribution Behrooz Seyed-Abbassi and Vivekanand Madesi School of Computing, University of North

More information

C-Cubing: Efficient Computation of Closed Cubes by Aggregation-Based Checking

C-Cubing: Efficient Computation of Closed Cubes by Aggregation-Based Checking C-Cubing: Efficient Computation of Closed Cubes by Aggregation-Based Checking Dong Xin Zheng Shao Jiawei Han Hongyan Liu University of Illinois at Urbana-Champaign, Urbana, IL 6, USA Tsinghua University,

More information

1. Inroduction to Data Mininig

1. Inroduction to Data Mininig 1. Inroduction to Data Mininig 1.1 Introduction Universe of Data Information Technology has grown in various directions in the recent years. One natural evolutionary path has been the development of the

More information

This proposed research is inspired by the work of Mr Jagdish Sadhave 2009, who used

This proposed research is inspired by the work of Mr Jagdish Sadhave 2009, who used Literature Review This proposed research is inspired by the work of Mr Jagdish Sadhave 2009, who used the technology of Data Mining and Knowledge Discovery in Databases to build Examination Data Warehouse

More information

A Compression Method for PML Document based on Internet of Things

A Compression Method for PML Document based on Internet of Things Oriental Journal of Computer Science & Technology Vol. 4(1), 159-164 (2011) A Compression Method for PML Document based on Internet of Things JAMAL MOHAMMAD AQIB Research Scholars of Singhania University,

More information

Data Mining. Data preprocessing. Hamid Beigy. Sharif University of Technology. Fall 1395

Data Mining. Data preprocessing. Hamid Beigy. Sharif University of Technology. Fall 1395 Data Mining Data preprocessing Hamid Beigy Sharif University of Technology Fall 1395 Hamid Beigy (Sharif University of Technology) Data Mining Fall 1395 1 / 15 Table of contents 1 Introduction 2 Data preprocessing

More information

WKU-MIS-B10 Data Management: Warehousing, Analyzing, Mining, and Visualization. Management Information Systems

WKU-MIS-B10 Data Management: Warehousing, Analyzing, Mining, and Visualization. Management Information Systems Management Information Systems Management Information Systems B10. Data Management: Warehousing, Analyzing, Mining, and Visualization Code: 166137-01+02 Course: Management Information Systems Period: Spring

More information

Data cube and OLAP. Selecting Views to Materialize. Aggregation view lattice. Selecting views to materialize. Limitations of static approach.

Data cube and OLAP. Selecting Views to Materialize. Aggregation view lattice. Selecting views to materialize. Limitations of static approach. Data cube and OLAP Selecting Views to Materialize CPS 296.1 Topics in Database Systems Example data cube schema: Sale(store, product, customer, quantity) Store, product, customer are dimension attributes

More information

Impact of Data Distribution, Level of Parallelism, and Communication Frequency on Parallel Data Cube Construction

Impact of Data Distribution, Level of Parallelism, and Communication Frequency on Parallel Data Cube Construction Impact of Data Distribution, Level of Parallelism, and Communication Frequency on Parallel Data Cube Construction Ge Yang Department of Computer and Information Sciences Ohio State University, Columbus

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Database Systems: Fall 2008 Quiz II

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Database Systems: Fall 2008 Quiz II Department of Electrical Engineering and Computer Science MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.830 Database Systems: Fall 2008 Quiz II There are 14 questions and 11 pages in this quiz booklet. To receive

More information

SEQUENTIAL PATTERN MINING FROM WEB LOG DATA

SEQUENTIAL PATTERN MINING FROM WEB LOG DATA SEQUENTIAL PATTERN MINING FROM WEB LOG DATA Rajashree Shettar 1 1 Associate Professor, Department of Computer Science, R. V College of Engineering, Karnataka, India, rajashreeshettar@rvce.edu.in Abstract

More information

On-Line Analytical Processing (OLAP) Traditional OLTP

On-Line Analytical Processing (OLAP) Traditional OLTP On-Line Analytical Processing (OLAP) CSE 6331 / CSE 6362 Data Mining Fall 1999 Diane J. Cook Traditional OLTP DBMS used for on-line transaction processing (OLTP) order entry: pull up order xx-yy-zz and

More information

Data Mining. Data preprocessing. Hamid Beigy. Sharif University of Technology. Fall 1394

Data Mining. Data preprocessing. Hamid Beigy. Sharif University of Technology. Fall 1394 Data Mining Data preprocessing Hamid Beigy Sharif University of Technology Fall 1394 Hamid Beigy (Sharif University of Technology) Data Mining Fall 1394 1 / 15 Table of contents 1 Introduction 2 Data preprocessing

More information

Web Page Classification using FP Growth Algorithm Akansha Garg,Computer Science Department Swami Vivekanad Subharti University,Meerut, India

Web Page Classification using FP Growth Algorithm Akansha Garg,Computer Science Department Swami Vivekanad Subharti University,Meerut, India Web Page Classification using FP Growth Algorithm Akansha Garg,Computer Science Department Swami Vivekanad Subharti University,Meerut, India Abstract - The primary goal of the web site is to provide the

More information

Advanced Database Systems

Advanced Database Systems Lecture IV Query Processing Kyumars Sheykh Esmaili Basic Steps in Query Processing 2 Query Optimization Many equivalent execution plans Choosing the best one Based on Heuristics, Cost Will be discussed

More information

Summary of Last Chapter. Course Content. Chapter 2 Objectives. Data Warehouse and OLAP Outline. Incentive for a Data Warehouse

Summary of Last Chapter. Course Content. Chapter 2 Objectives. Data Warehouse and OLAP Outline. Incentive for a Data Warehouse Principles of Knowledge Discovery in bases Fall 1999 Chapter 2: Warehousing and Dr. Osmar R. Zaïane University of Alberta Dr. Osmar R. Zaïane, 1999 Principles of Knowledge Discovery in bases University

More information

Chapter 12: Query Processing

Chapter 12: Query Processing Chapter 12: Query Processing Database System Concepts, 6 th Ed. See www.db-book.com for conditions on re-use Overview Chapter 12: Query Processing Measures of Query Cost Selection Operation Sorting Join

More information

Decision Support Systems aka Analytical Systems

Decision Support Systems aka Analytical Systems Decision Support Systems aka Analytical Systems Decision Support Systems Systems that are used to transform data into information, to manage the organization: OLAP vs OLTP OLTP vs OLAP Transactions Analysis

More information

Data Warehousing. Overview

Data Warehousing. Overview Data Warehousing Overview Basic Definitions Normalization Entity Relationship Diagrams (ERDs) Normal Forms Many to Many relationships Warehouse Considerations Dimension Tables Fact Tables Star Schema Snowflake

More information

Horizontal Aggregations for Mining Relational Databases

Horizontal Aggregations for Mining Relational Databases Horizontal Aggregations for Mining Relational Databases Dontu.Jagannadh, T.Gayathri, M.V.S.S Nagendranadh. Department of CSE Sasi Institute of Technology And Engineering,Tadepalligudem, Andhrapradesh,

More information

Data Warehousing. Data Warehousing and Mining. Lecture 8. by Hossen Asiful Mustafa

Data Warehousing. Data Warehousing and Mining. Lecture 8. by Hossen Asiful Mustafa Data Warehousing Data Warehousing and Mining Lecture 8 by Hossen Asiful Mustafa Databases Databases are developed on the IDEA that DATA is one of the critical materials of the Information Age Information,

More information

Chapter 3. Foundations of Business Intelligence: Databases and Information Management

Chapter 3. Foundations of Business Intelligence: Databases and Information Management Chapter 3 Foundations of Business Intelligence: Databases and Information Management THE DATA HIERARCHY TRADITIONAL FILE PROCESSING Organizing Data in a Traditional File Environment Problems with the traditional

More information

Adnan YAZICI Computer Engineering Department

Adnan YAZICI Computer Engineering Department Data Warehouse Adnan YAZICI Computer Engineering Department Middle East Technical University, A.Yazici, 2010 Definition A data warehouse is a subject-oriented integrated time-variant nonvolatile collection

More information

2 CONTENTS

2 CONTENTS Contents 4 Data Cube Computation and Data Generalization 3 4.1 Efficient Methods for Data Cube Computation............................. 3 4.1.1 A Road Map for Materialization of Different Kinds of Cubes.................

More information

Guide Users along Information Pathways and Surf through the Data

Guide Users along Information Pathways and Surf through the Data Guide Users along Information Pathways and Surf through the Data Stephen Overton, Overton Technologies, LLC, Raleigh, NC ABSTRACT Business information can be consumed many ways using the SAS Enterprise

More information