Data Warehouse and Data Mining


Data Warehouse and Data Mining Lecture No. 06 Data Modeling Naeem Ahmed Email: naeemmahoto@gmail.com Department of Software Engineering Mehran University of Engineering and Technology Jamshoro

Data Modeling Conceptual Modeling: DW Modeling, Multidimensional Entity Relationship (ME/R) Model, Multidimensional UML (mUML) Logical Modeling: Cubes, Dimensions, Hierarchies Physical Modeling: Star, Snowflake, Array storage

Goal of the Logical Model Confirm the subject areas Create real facts and dimensions from the subjects that have been identified Establish the needed granularity for dimensions Logical structure of the multidimensional model: Cubes: Sales, Purchase, Price, Inventory; Dimensions: Product, Time, Geography, Client

Logical Model

Dimensions Dimensions are the entities within the data model chosen for analysis purposes One dimension can be used to define more than one cube They are hierarchically organized

Dimensions Dimension hierarchies are organized in classification levels (e.g., Day, Month, ...) The dependencies between the classification levels are described by the classification schema through functional dependencies An attribute B is functionally dependent on an attribute A, denoted A → B, if for all a ∈ dom(A) there exists exactly one b ∈ dom(B) corresponding to it
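The functional-dependency condition can be checked mechanically over sample dimension data; a minimal sketch (the day/month rows are invented for illustration):

```python
def holds_fd(rows, a, b):
    """Return True if attribute a functionally determines attribute b,
    i.e. each a-value maps to exactly one b-value."""
    mapping = {}
    for row in rows:
        if row[a] in mapping and mapping[row[a]] != row[b]:
            return False  # one a-value maps to two different b-values
        mapping[row[a]] = row[b]
    return True

days = [
    {"day": "2008-03-01", "month": "2008-03"},
    {"day": "2008-03-02", "month": "2008-03"},
    {"day": "2008-04-01", "month": "2008-04"},
]
print(holds_fd(days, "day", "month"))  # Day -> Month holds: True
print(holds_fd(days, "month", "day"))  # Month -> Day fails: False
```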

Dimensions Classification schemas The classification schema of a dimension D is a semi-ordered set of classification levels ({D.K0, ..., D.Kk}, →) with a smallest element D.K0, i.e., there is no classification level with smaller granularity A fully-ordered set of classification levels is called a path If the classification schema of the time dimension is considered, one has the following paths: T.Day → T.Week and T.Day → T.Month → T.Quarter → T.Year Here T.Day is the smallest element

Dimensions Classification hierarchies Let D.K0 → ... → D.Kk be a path in the classification schema of dimension D A classification hierarchy concerning this path is a balanced tree which Has as nodes dom(D.K0) ∪ ... ∪ dom(D.Kk) ∪ {ALL} And whose edges respect the functional dependencies
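Such a classification hierarchy can be sketched as a child-to-parent map whose edges follow the functional dependencies; the concrete dates are illustrative:

```python
# Hierarchy for the path T.Day -> T.Month -> T.Year, rooted at ALL.
parent = {
    "2008-03-01": "2008-03", "2008-03-02": "2008-03",
    "2008-04-01": "2008-04",
    "2008-03": "2008", "2008-04": "2008",
    "2008": "ALL",
}

def path_to_root(node):
    """Follow the classification edges from a node up to the root ALL."""
    chain = [node]
    while chain[-1] != "ALL":
        chain.append(parent[chain[-1]])
    return chain

print(path_to_root("2008-03-02"))
```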

Dimensions Example: classification hierarchy from a path of the product dimension

Dimensions Example hierarchies: Store dimension: Total → Region → District → Stores; Product dimension: Total → Manufacturer → Brand → Products

Cubes Cubes consist of data cells with one or more measures If a cube schema S(G, M) consists of a granularity G = (D1.K1, ..., Dn.Kn) and a set M = (M1, ..., Mm) representing the measures, then a cube C is a set of cube cells, C ⊆ dom(G) × dom(M)

Cubes The coordinates of a cell are the classification nodes from dom(G) corresponding to the cell E.g., Sales((Article, Day, Store, Client), (Turnover))
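Under these definitions a cube is simply a mapping from coordinates to measure values; a toy sketch for the Sales schema above (all articles, dates, stores, clients, and numbers are made up):

```python
# Cells of Sales((Article, Day, Store, Client), (Turnover)):
# the tuple of classification nodes identifies the cell,
# the value is the measure stored in it.
sales = {
    ("Jacket", "2008-03-01", "Store1", "ClientA"): 1200.0,
    ("Jacket", "2008-03-02", "Store1", "ClientB"): 800.0,
    ("Pants",  "2008-03-01", "Store2", "ClientA"): 450.0,
}
print(sales[("Jacket", "2008-03-02", "Store1", "ClientB")])  # 800.0
```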

Cubes 4 dimensions (supplier, city, quarter, product)

Cubes One can now imagine n-dimensional cubes The n-D cube is called the base cuboid The topmost cuboid, the 0-D cuboid, which holds the highest level of summarization, is called the apex cuboid The full data cube is formed by the lattice of cuboids

Cubes But things can get complicated pretty fast

Basic Operations Basic operations of the multidimensional model on the logical level: Selection, Projection, Cube join, Aggregation (e.g., Sum)

Basic Operations Multidimensional selection The selection on a cube C((D1.K1, ..., Dg.Kg), (M1, ..., Mm)) through a predicate P is defined as σ_P(C) = {z ∈ C : P(z)}, if all variables in P are either: Classification levels K which functionally depend on a classification level in the granularity, i.e., Di.Ki → K Measures from (M1, ..., Mm) E.g., σ_{P.Prod_group='Video'}(Sales)
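A minimal sketch of this selection over a cube stored as coordinate-to-measure pairs; the `product_group` lookup stands in for the classification level P.Prod_group and is invented for the example:

```python
sales = {  # (article, day, store) -> turnover
    ("TV",     "2008-03-01", "Store1"): 1200.0,
    ("DVD",    "2008-03-01", "Store1"): 300.0,
    ("Jacket", "2008-03-01", "Store2"): 450.0,
}
product_group = {"TV": "Video", "DVD": "Video", "Jacket": "Clothing"}

def select(cube, predicate):
    """sigma_P(C): keep the cells whose coordinates/measures satisfy P."""
    return {coord: m for coord, m in cube.items() if predicate(coord, m)}

video_sales = select(sales, lambda c, m: product_group[c[0]] == "Video")
print(sorted(a for a, _, _ in video_sales))  # ['DVD', 'TV']
```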

Basic Operations Multidimensional projection The projection of a function of a measure F(M) of cube C is defined as π_{F(M)}(C) = {(g, F(m)) ∈ dom(G) × dom(F(M)) : (g, m) ∈ C} E.g., the price projection π_{turnover, sold_items}(Sales)
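The projection keeps the coordinates and replaces the measure tuple by a function of it; a sketch with an invented two-measure Sales cube:

```python
sales = {  # (article, day) -> (turnover, sold_items)
    ("TV",  "2008-03-01"): (1200.0, 3),
    ("DVD", "2008-03-01"): (300.0, 6),
}

def project(cube, f):
    """pi_F(M)(C): apply f to the measures, keeping the coordinates."""
    return {coord: f(m) for coord, m in cube.items()}

turnover_only = project(sales, lambda m: m[0])  # drop sold_items
print(turnover_only[("TV", "2008-03-01")])  # 1200.0
```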

Basic Operations Join operations between cubes are common E.g., if turnover were not provided, it could be calculated with the help of the unit price from the price cube Two cubes C1(G1, M1) and C2(G2, M2) can only be joined if they have the same granularity (G1 = G2 = G): C1 ⋈ C2 = C(G, M1 ∪ M2)
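The join of two cubes with identical granularity keeps the granularity and unites the measures; a sketch of the turnover example (all numbers invented):

```python
sold  = {("TV", "2008-03"): 3,     ("DVD", "2008-03"): 6}     # items sold
price = {("TV", "2008-03"): 400.0, ("DVD", "2008-03"): 50.0}  # unit price

def join(c1, c2):
    """Combine the measures of cells with identical coordinates."""
    return {g: (c1[g], c2[g]) for g in c1.keys() & c2.keys()}

joined = join(sold, price)
# Derive the missing measure: turnover = sold_items * unit_price.
turnover = {g: items * unit for g, (items, unit) in joined.items()}
print(turnover[("TV", "2008-03")])  # 1200.0
```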

Basic Operations When the granularities are different but there is still a need to join the cubes, aggregation has to be performed Aggregation: a whole formed or calculated by the combination of many separate units or items; a total E.g., for Sales ⋈ Inventory: aggregate Sales((Day, Article, Store, Client)) to Sales((Month, Article, Store, Client))

Basic Operations Aggregation: the most important operation for OLAP Aggregation functions build a single value from a set of values, e.g., in SQL: SUM, AVG, COUNT, MIN, MAX Example: SUM_{(P.Product_group, G.City, T.Month)}(Sales)
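Rolling a cube up to coarser granularity sums the measure over all cells that map to the same coarser coordinates; a sketch of the Day-to-Month aggregation (data invented, the month is derived from the ISO date prefix):

```python
from collections import defaultdict

sales = {  # (article, day) -> turnover
    ("TV", "2008-03-01"): 1200.0,
    ("TV", "2008-03-15"): 800.0,
    ("TV", "2008-04-02"): 500.0,
}

def day_to_month(day):
    """Classification mapping Day -> Month ('2008-03-01' -> '2008-03')."""
    return day[:7]

monthly = defaultdict(float)
for (article, day), turnover in sales.items():
    monthly[(article, day_to_month(day))] += turnover  # SUM aggregation

print(monthly[("TV", "2008-03")])  # 2000.0
```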

Change support Classification schema, cube schema and classification hierarchy are all designed in the building phase and considered fixed Practice has proven otherwise: DWs grow old, too Changes are strongly connected to the time factor This leads to the time validity of these concepts Reasons for schema modification: New requirements Modification of the data source

Classification Hierarchy E.g., Saturn sells a lot of electronics Let's consider mobile phones They built their DW on 01.03.2003 A classification hierarchy of their data until 01.07.2008 could look like this:

Classification Hierarchy After 01.07.2008, 3G becomes hip and affordable and many phone makers start migrating towards 3G-capable phones Let's say O2 makes its XDA 3G-capable

Classification Hierarchy After 01.04.2010 phone makers already develop 4G capable phones

Classification Hierarchy It is important to trace the evolution of the data It can explain which data was available at which moment in time Such a versioning system for the classification hierarchy can be realized by constructing a validity matrix When is something valid? Use timestamps to mark it!

Classification Hierarchy Annotated change data

Classification Hierarchy The tree can be stored as dimension metadata The storage form is a validity matrix Rows are parent nodes Columns are child nodes
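The validity matrix can be sketched as a map from (parent, child) edges to validity intervals, using the dates from the O2 XDA example; the `parent_of` helper is not from the slides, just an illustration of how the matrix is queried:

```python
# (parent, child) -> (valid_from, valid_to); None means "still valid".
validity = {
    ("GSM", "O2 XDA"): ("2003-03-01", "2008-07-01"),
    ("3G",  "O2 XDA"): ("2008-07-01", None),
}

def parent_of(child, date):
    """Return the parent node valid for the given child at the given date.
    ISO date strings compare correctly as plain strings."""
    for (parent, c), (start, end) in validity.items():
        if c == child and start <= date and (end is None or date < end):
            return parent
    return None

print(parent_of("O2 XDA", "2005-01-01"))  # GSM
print(parent_of("O2 XDA", "2009-01-01"))  # 3G
```

Deleting a branch (e.g., GSM once it is discontinued) then amounts to setting its end timestamp rather than removing the rows, so historical queries keep working.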

Classification Hierarchy Deleting a node in a classification hierarchy Should be performed only in exceptional cases It can lead to information loss How to solve it? Soon GSM phones will no longer be produced But one might still have some in warehouses, waiting to be delivered Or one might want to query data from the period when GSM was sold Just mark the end validity date of the GSM branch in the validity matrix

Classification Hierarchy Query classification Having the validity information, we can support queries like as is versus as is Regards all the data as if the only valid classification hierarchy were the present one In the case of the O2 XDA, it will be treated as if it had always been a 3G phone

Classification Hierarchy As is versus as was Orders the classification hierarchy by the validity matrix information The O2 XDA was a GSM phone until 01.07.2008 and a 3G phone afterwards

Classification Hierarchy As was versus as was Past-time hierarchies can be reproduced E.g., query data with an older classification hierarchy Like versus like Only data whose classification hierarchy remained unmodified is evaluated E.g., the Nokia 3600 and the BlackBerry

Schema Modification Improper modification of a schema (e.g., deleting a dimension) can lead to: Data loss Inconsistencies: data is incorrectly aggregated or adapted Proper schema modification is complex, but It brings flexibility for the end user The possibility to ask As Is vs. As Was queries and so on Alternatives: Schema evolution Schema versioning

Schema Modification Schema evolution Modifications can be performed without data loss It involves schema modification and the adaptation of the data to the new schema This data adaptation process is called instance adaptation

Schema Modification Schema evolution Advantage: Faster to execute queries in a DW with many schema modifications Disadvantages: It limits the end user's flexibility to query based on past schemas Only queries against the current schema are supported

Schema Modification Schema versioning Also no data loss All the data corresponding to all the schemas remains available After a schema modification the data is held in its belonging schema: Old data - old schema New data - new schema

Schema Modification Schema versioning Advantages: Allows higher flexibility, e.g., As Is vs. As Was queries Disadvantages: Adaptation of the data to the queried schema is done on the spot This results in longer query run times

Physical Model Goal: Define the actual storage architecture Decide on how the data is to be accessed and how it is arranged Defining the physical structures Setting up the database environment Performance tuning strategies: Indexing Partitioning Materialization

Physical Model The physical implementation of the multidimensional model can be: Relational: Snowflake schema Star schema Fact constellation Multidimensional: Matrixes

Physical Model Relational model, goals: As little loss of semantic knowledge as possible, e.g., of the classification hierarchies The translation of multidimensional queries must be efficient The RDBMS should be able to run the translated queries efficiently The maintenance of the resulting tables should be easy and fast, e.g., when loading new data

Relational Model Going from multidimensional to relational Representations are needed for cubes, dimensions, classification hierarchies and attributes Implementation of cubes without the classification hierarchies is easy A table can be seen as a cube A column of a table can be considered as a dimension mapping A tuple in the table represents a cell in the cube If one interprets only a part of the columns as dimensions, the rest can be used as measures The resulting table is called a fact table

Relational Model

Relational Model Snowflake schema Simple idea: use a table for each classification level This table includes the ID of the classification level and further attributes Two neighboring classification levels are connected by 1:n relationships, e.g., n Days to 1 Month The measures of a cube are maintained in a fact table Besides the measures, it also holds the foreign key IDs of the smallest classification levels

Relational Model Why snowflake? The facts/measures are in the center The dimensions spread out in each direction and branch out with their granularity

Snowflake Example

Snowflake Example Advantage: Best performance when queries involve aggregation Disadvantage: Complicated maintenance and metadata, explosion in the number of tables in the database

Snowflake Schema Advantages: With a snowflake schema the size of the dimension tables is reduced and queries run faster if a dimension is very sparse (most measures corresponding to the dimension have no data) and/or a dimension has a long list of attributes which may be queried Disadvantages: Fact tables are responsible for about 90% of the storage requirements; thus, normalizing the dimensions usually leads to insignificant savings Normalization of the dimension tables can reduce the performance of the DW because it leads to a large number of tables: when connecting dimensions with coarse granularity, these tables have to be joined with each other during queries E.g., a query which connects Product category with Year and Country is clearly not performant (10 tables need to be joined)

Relational Model Star schema Basic idea: use a denormalized schema for all the dimensions A star schema can be obtained from the snowflake schema through the denormalization of the tables belonging to a dimension Database normalization is the process of organizing the fields and tables of a relational database to minimize redundancy and dependency Normalization usually involves dividing large tables into smaller (and less redundant) tables and defining relationships between them De-normalization is the process of attempting to optimize the read performance of a database by adding redundant data or by grouping data

Star schema Example Benefits: easy to understand, easy to define hierarchies, reduces the number of physical joins, low maintenance, very simple metadata Drawbacks: summary data in the fact table yields poorer performance for summary levels; huge dimension tables are a problem

Star Schema Advantages Improves query performance for often-used data Fewer tables and a simple structure Efficient query processing with regard to dimensions Disadvantages In some cases, high overhead of redundant data

Star Schema Store Dimension: STORE KEY, Store Description, City, State, District ID, District Desc., Region_ID, Region Desc., Regional Mgr., Level Fact Table: STORE KEY, PRODUCT KEY, PERIOD KEY, Dollars, Units, Price Product Dimension: PRODUCT KEY, Product Desc., Brand, Color, Size, Manufacturer, Level Time Dimension: PERIOD KEY, Period Desc, Year, Quarter, Month, Day, Current Flag, Resolution, Sequence Example: SELECT A.STORE_KEY, A.PERIOD_KEY, A.dollars FROM Fact_Table A WHERE A.STORE_KEY IN (SELECT STORE_KEY FROM Store_Dimension B WHERE region = 'North' AND Level = 2) The biggest drawback: dimension tables must carry a level indicator for every record, and every query must use it In the example, without the level constraint, keys for all stores in the NORTH region, including aggregates for region and district, would be pulled from the fact table, resulting in errors Solution: FACT CONSTELLATION The level indicator is needed whenever aggregates are stored with detail facts
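The level-indicator problem can be reproduced on a tiny in-memory star schema with SQLite; table and column names follow the slide, while the data rows and the level semantics (2 = store detail, 1 = stored region aggregate) are assumptions for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE store_dimension (
    store_key INTEGER, store_description TEXT,
    region TEXT, level INTEGER);
CREATE TABLE fact_table (
    store_key INTEGER, product_key INTEGER,
    period_key INTEGER, dollars REAL);
INSERT INTO store_dimension VALUES
    (1, 'Store A', 'North', 2),      -- detail store
    (9, 'North region', 'North', 1); -- stored aggregate row
INSERT INTO fact_table VALUES
    (1, 10, 100, 500.0),   -- detail fact
    (9, 10, 100, 1500.0);  -- aggregate fact for the whole region
""")
rows = con.execute("""
    SELECT a.store_key, a.dollars FROM fact_table a
    WHERE a.store_key IN (SELECT store_key FROM store_dimension
                          WHERE region = 'North' AND level = 2)
""").fetchall()
print(rows)  # [(1, 500.0)]
```

Dropping `AND level = 2` from the sub-select would also pull the aggregate row (9, 1500.0) and double-count the region.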

Fact Constellation Schema The fact constellation schema describes a logical database structure of a Data Warehouse or Data Mart It can be designed as a collection of de-normalized fact, shared and conformed dimension tables The fact constellation schema is an extended and decomposed star schema In fact constellations, aggregate tables are created separately from the detail; therefore, it is impossible to pick up, for example, Store detail when querying the District fact table

Fact Constellation Schema Fact constellation is a good alternative to the star schema, but when dimensions have very high cardinality, the sub-selects in the dimension tables can be a source of delay An alternative is to normalize the dimension tables by attribute level, with each smaller dimension table pointing to an appropriate aggregated fact table: the snowflake schema Advantage: no need for the level indicator in the dimension tables, since no aggregated data is stored with lower-level detail Disadvantage: dimension tables are still very large in some cases, which can slow performance; the front-end must be able to detect the existence of aggregate facts, which requires more extensive metadata

Fact Constellation Example Store Dimension: STORE KEY, Store Description, City, State, District ID, District Desc., Region_ID, Region Desc., Regional Mgr. Fact Table: STORE KEY, PRODUCT KEY, PERIOD KEY, Dollars, Units, Price Product Dimension: PRODUCT KEY, Product Desc., Brand, Color, Size, Manufacturer Time Dimension: PERIOD KEY, Period Desc, Year, Quarter, Month, Day, Current Flag, Sequence District Fact Table: District_ID, PRODUCT_KEY, PERIOD_KEY, Dollars, Units, Price Region Fact Table: Region_ID, PRODUCT_KEY, PERIOD_KEY, Dollars, Units, Price

Snowflake vs. Star Snowflake: The structure of the classifications is expressed in the table schemas Both the fact table and the dimension tables are normalized Star: The entire classification is expressed in just one table per dimension The fact table is normalized, while in the dimension tables the normalization is broken This leads to redundancy of information in the dimension tables

Snowflake vs. Star Snowflake Star

Snowflake vs. Star, attribute by attribute:
Ease of maintenance/change: the star schema has redundant data and is hence less easy to maintain/change; the snowflake schema has no redundancy and is hence easier to maintain and change
Ease of use: star queries are less complex and easy to understand; snowflake queries are more complex and hence less easy to understand
Query performance: the star schema has fewer foreign keys and hence shorter query execution times; the snowflake schema has more foreign keys and hence longer query execution times
Type of data warehouse: star is good for data marts with simple relationships (1:1 or 1:many); snowflake is good for a data warehouse core, to simplify complex (many:many) relationships
Joins: star has fewer joins; snowflake a higher number of joins
Dimension tables: star contains only a single dimension table per dimension; snowflake may have more than one dimension table per dimension
When to use: when the dimension tables contain few rows, go for the star schema; when a dimension table is relatively big, snowflaking is better as it reduces space
Normalization/de-normalization: in the star schema both dimension and fact tables are de-normalized; in the snowflake schema the dimension tables are normalized, but the fact table is still de-normalized
Data model: star follows a top-down approach; snowflake a bottom-up approach

Snowflake to Star When should one go from snowflake to star? A heuristics-based decision: When typical queries relate to coarser granularity (like product category) When the volume of data in the dimension tables is relatively low compared to the fact table: in this case a star schema incurs negligible overhead through redundancy, but performance is improved When modifications of the classifications are rare compared to insertions of fact data: in this case these modifications are controlled through the data load process of the ETL, reducing the risk of data anomalies

Which one is the winner: snowflake or star? It depends on the necessity: fast query processing or efficient space usage However, most of the time a mixed form is used The starflake schema: some dimensions stay normalized corresponding to the snowflake schema, while others are denormalized according to the star schema The decision on how to deal with the dimensions is influenced by: Frequency of the modifications: if the dimensions change often, normalization leads to better results Number of dimension elements: the bigger the dimension tables, the more space normalization saves Number of classification levels in a dimension: more classification levels introduce more redundancy in the star schema Materialization of aggregates for the dimension levels: if the aggregates are materialized, a normalization of the dimension can bring better response times

More Schemas Galaxies In practice we usually have several measures described by different dimensions Thus, more fact tables

More Schemas Fact constellations Pre-calculated aggregates Factless fact tables: Fact tables that do not have non-key data Can be used for event tracking or to inventory the set of possible occurrences A factless fact table does not have any measures For example, consider a record of student attendance in classes: in this case, the fact table would consist of 3 dimensions: the student dimension, the time dimension, and the class dimension.

More Schemas Factless fact tables This factless fact table would look like the following:

Relational Model Relational model disadvantages The representation of the multidimensional data can be implemented relationally with a finite set of transformation steps, however: Multidimensional queries first have to be translated into the relational representation A direct interaction with the relational data model is not suited to the end user

Multidimensional Model The basic data structure for multidimensional data storage is the array The elementary data structures are cubes and dimensions: C = ((D1, ..., Dn), (M1, ..., Mm)) The storage is intuitive, as arrays of arrays, physically linearized

Multidimensional Model Linearization example: 2-D cube, |D1| = 5, |D2| = 4, 20 cube cells Query: jackets sold in March? The measure is stored in cube cell D1[4], D2[3] The 2-D cube is physically stored as a linear array, so D1[4], D2[3] becomes array cell (Index(D2) - 1) * |D1| + Index(D1): linearized index = 2 * 5 + 4 = 14

Linearization Generalization: given a cube C = ((D1, D2, ..., Dn), (M1:Type1, M2:Type2, ..., Mm:Typem)), the index of a cube cell z with coordinates (x1, x2, ..., xn) can be linearized as follows: Index(z) = x1 + (x2 - 1) * |D1| + (x3 - 1) * |D1| * |D2| + ... + (xn - 1) * |D1| * ... * |Dn-1| = 1 + Σ_{i=1..n} ((x_i - 1) * Π_{j=1..i-1} |D_j|)
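The generalized formula translates directly into code; a minimal sketch that reproduces the 2-D example (1-based coordinates, as on the slides):

```python
def linearize(coords, sizes):
    """Index(z) = 1 + sum_i (x_i - 1) * prod_{j<i} |D_j|,
    for 1-based coordinates coords and dimension sizes."""
    index, stride = 1, 1
    for x, size in zip(coords, sizes):
        index += (x - 1) * stride
        stride *= size  # each dimension multiplies the stride
    return index

# 2-D example from the slides: |D1| = 5, |D2| = 4, cell D1[4], D2[3].
print(linearize((4, 3), (5, 4)))  # 14
```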

Problems in Array-Storage Influence of the order of the dimensions in the cube definition In the cube, the cells of D2 are ordered one under the other, e.g., the sales of all pants form a column in the cube After linearization, this information is spread over several data blocks/pages If one considers that a data block can hold 5 cells, a query over all products sold in January can be answered with just 1 block read, but a query over all sold pants involves reading 4 blocks

Problems in Array-Storage Solution: use caching techniques But... caching and swapping are also performed by the operating system The MDBMS has to manage its caches such that the OS doesn't perform any damaging swaps Storage of dense cubes: If cubes are dense, array storage is more efficient; however, operations suffer due to the large cubes Solution: store dense cubes not linearly but on 2 levels The first level contains indexes and the second the data cells stored in blocks Optimization procedures like indexes (trees, bitmaps), physical partitioning, and compression (run-length encoding) can be used

Problems in Array-Storage Storage of sparse cubes: All the cells of a cube, including empty ones, have to be stored Sparseness leads to data being stored in many physical blocks or pages Query speed is affected by the large number of block accesses on secondary memory Solution: do not store empty blocks or pages but adapt the index structure A 2-level data structure: the upper level holds the occurring combinations of the sparse dimensions, the lower level holds the dense dimensions
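The 2-level idea can be sketched as follows: the upper level keeps only sparse-dimension combinations that actually hold data, and each entry points to a dense, linearized block. A sketch with invented dimensions and sizes; a real MDBMS works on disk blocks, not Python lists:

```python
# Upper level: (store, client) -> dense block over (article, day).
sparse_cube = {}

def store_cell(sparse_key, dense_key, value, dense_sizes=(5, 31)):
    """Allocate the dense block on first use, then write the cell at
    its linearized (0-based) position inside the block."""
    block = sparse_cube.setdefault(
        sparse_key, [0.0] * (dense_sizes[0] * dense_sizes[1]))
    a, d = dense_key  # 1-based article and day coordinates
    block[(d - 1) * dense_sizes[0] + (a - 1)] = value

store_cell(("Store1", "ClientA"), (4, 3), 99.0)
# Empty sparse combinations take no space at all:
print(len(sparse_cube))                        # 1
print(sparse_cube[("Store1", "ClientA")][13])  # 99.0
```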

Problems in Array-Storage 2-level cube storage