Data Preprocessing
CS 5331, Rattikorn Hewett, Texas Tech University

Outline
- Motivation
- Data cleaning
- Data integration and transformation
- Data reduction
- Discretization and hierarchy generation
- Summary

Motivation
Real-world data are dirty:
- Incomplete: missing important attributes or attribute values, or containing only aggregate values, e.g., Age = "" (left blank)
- Noisy: erroneous data or outliers, e.g., Age = 2000
- Inconsistent: discrepancies in codes or names, or duplicate data, e.g., Age = 20 but Birthday = 03/07/1960, or Sex = "F" in one record and Sex = "Female" in another

How did this happen?
- Incomplete data: the data were not available when collected, were not considered important enough to record, or were lost to errors (someone forgot to record them, they were deleted to eliminate inconsistencies, equipment malfunctioned)
- Noisy data: data collected or entered incorrectly because of faulty equipment or human/computer error, or transmitted incorrectly because of technology limitations (e.g., buffer size)
- Inconsistent data: different naming conventions in different data sources; functional dependency violations

Relevance to data mining
- Bad data lead to bad mining results and bad decisions: duplicate or missing data may produce incorrect or even misleading statistics.
- Consistent integration of quality data leads to good warehouses.
- Data preprocessing aims to improve the quality of the data (and thus of the mining results) and the efficiency and ease of the mining process.

Measures of data quality
A well-accepted multidimensional view:
- Accuracy, completeness, consistency
- Timeliness, believability, value added, interpretability
- Accessibility
Broad categories: intrinsic (inherent), contextual, representational, accessibility.

Data preprocessing techniques
- Data cleaning: fill in missing values, smooth out noise, identify or remove outliers, resolve inconsistencies
- Data integration: integrate data from multiple databases, data cubes, or files
- Data transformation: normalization (scaling to a specified range)
- Data reduction: obtain a representation reduced in volume without sacrificing the quality of the mining results, e.g., dimension reduction (remove irrelevant attributes) and discretization (reduce numerical data to discrete data)

Data cleaning
Data cleaning is the number one problem in data warehousing (DCI survey). Tasks:
- Fill in missing values
- Identify outliers and smooth out noise
- Correct inconsistent data
- Resolve redundancy caused by data integration

Missing data
- Ignore the tuple: usual in classification when the class label is missing; not effective when the percentage of missing values per attribute varies considerably.
- Fill in the missing value manually: tedious and often infeasible.
- Fill in the missing value automatically with
  - a global constant, e.g., "unknown" (which may in effect become a new class),
  - the attribute mean,
  - the attribute mean over all samples of the same class, or
  - the most probable value, e.g., from a regression- or inference-based method such as a Bayesian formula or a decision tree (Ch 7).
  Which of these techniques biases the data? (The automatic options are sketched in code below.)

Noisy data and smoothing techniques
Noise is random error or variance in a measured variable. A basic smoothing technique is binning: sort the data, partition it into (equi-depth) bins or buckets, and smooth locally by bin means, bin medians, or bin boundaries.

Simple discretization: binning
- Equal-depth (frequency) partitioning divides the range into N intervals, each containing approximately the same number of samples. It gives good data scaling, but managing categorical attributes can be tricky.
- Equal-width (distance) partitioning divides the range into N intervals of equal width; if A and B are the lowest and highest values of the attribute, the width is W = (B - A)/N. It is the most straightforward approach, but outliers may dominate the result and skewed data are not handled well.
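The automatic fill-in options above can be sketched with pandas. This is only an illustration: the toy frame and its column names (class, income) are invented here, not taken from the lecture.

```python
import pandas as pd

# Hypothetical toy data: income is missing for two tuples.
df = pd.DataFrame({
    "class":  ["low", "low", "high", "high", "high"],
    "income": [30.0, None, 90.0, None, 110.0],
})

# 1) Global constant: replace every missing value with a sentinel such as -1 or "unknown".
by_constant = df["income"].fillna(-1)

# 2) Attribute mean: replace missing values with the overall mean of the attribute.
by_mean = df["income"].fillna(df["income"].mean())

# 3) Class-conditional mean: the attribute mean over samples of the same class.
by_class_mean = df["income"].fillna(df.groupby("class")["income"].transform("mean"))

print(pd.DataFrame({"constant": by_constant, "mean": by_mean, "class_mean": by_class_mean}))
```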

Examples of binning
Sorted data (e.g., prices in ascending order): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34, so N (the data set size) = 12 and B is the number of bins.

Equi-depth partitioning with B = 3 (each bin holds N/3 = 4 values):
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34

Equi-width partitioning with width = (max - min)/B = (34 - 4)/3 = 10, i.e., each interval spans a value range of 10:
- Bin 1 (0-10): 4, 8, 9
- Bin 2 (11-20): 15
- Bin 3 (21-30): 21, 21, 24, 25, 26, 28, 29
- Bin 4 (31-40): 34
An outlier can misrepresent this kind of partitioning.

Smoothing by bin means (each value in an equi-depth bin is replaced by the bin mean):
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29

Smoothing by bin boundaries (the bin min and max are the boundaries; each value is replaced by the closest boundary):
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
(These bins and smoothed values are reproduced by the sketch below.)

Smoothing techniques (cont)
- Regression: smooth by fitting the data to a regression function, e.g., a line such as y = x + 1.
- Clustering: detect and remove outliers.
- Combined computer and human inspection: automatically detect suspicious values (e.g., deviation from a known or expected value above a threshold), then manually separate the actual surprises from the garbage.
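A minimal plain-Python sketch that reproduces the equi-depth bins and the two smoothing variants above (the helper names are mine):

```python
def equi_depth_bins(sorted_values, n_bins):
    """Split an already-sorted list into n_bins bins of (roughly) equal size."""
    size = len(sorted_values) // n_bins
    return [sorted_values[i * size:(i + 1) * size] for i in range(n_bins)]

def smooth_by_means(bins):
    """Replace every value in a bin by the (rounded) bin mean."""
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    """Replace every value by the closer of the bin's min and max."""
    return [[b[0] if v - b[0] <= b[-1] - v else b[-1] for v in b] for b in bins]

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
bins = equi_depth_bins(prices, 3)
print(bins)                       # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
print(smooth_by_means(bins))      # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(smooth_by_boundaries(bins)) # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```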

Data integration
Data integration combines data from multiple sources into a coherent store. Issues:
- Schema integration: how to match entities across data sources (the entity identification problem), e.g., recognizing that A.customer-id and B.customer-num refer to the same thing. Use metadata to help avoid integration errors.
- Redundancy: an attribute is redundant when it can be derived from another table; redundancy can also be caused by inconsistent attribute naming. It can be detected by correlation analysis (a small numeric check appears below):

      Corr(A, B) = sum((A - mean(A)) * (B - mean(B))) / ((n - 1) * sd(A) * sd(B))

  A strongly positive value means A and B are highly correlated, so one of them is redundant; a value near 0 means they are independent; a negative value means they are negatively correlated (are A and B redundant in that case?). Duplication should also be detected at the tuple level.
- Data value conflicts: for the same real-world entity, attribute values from different sources differ because of different representations, codings, or scales, e.g., weight in metric vs. British units, or cost including vs. excluding tax. Detecting and resolving these conflicts requires careful data integration.
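A quick numpy check of that correlation-based redundancy test. The two attributes (credit_limit, annual_fee) are hypothetical and deliberately constructed so that one is derivable from the other:

```python
import numpy as np

def corr(a, b):
    """Corr(A, B) = sum((A - mean(A))(B - mean(B))) / ((n - 1) * sd(A) * sd(B))."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = len(a)
    return ((a - a.mean()) * (b - b.mean())).sum() / ((n - 1) * a.std(ddof=1) * b.std(ddof=1))

credit_limit = [1000, 2000, 3000, 5000, 8000]
annual_fee   = [100,  200,  300,  500,  800]          # exactly 10% of the credit limit

print(corr(credit_limit, annual_fee))                 # 1.0 -> highly correlated, redundant
print(np.corrcoef(credit_limit, annual_fee)[0, 1])    # the same value via numpy's built-in
```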

Data transformation
Change the data into forms suitable for mining. This may involve:
- Smoothing: remove noise from the data (cleaning)
- Aggregation: summarization, data cube construction
- Generalization: concept hierarchy climbing
- Normalization: scale values into a small, specified range (min-max normalization, z-score normalization, normalization by decimal scaling)
- Attribute/feature construction: new attributes built from the given ones, e.g., area = width x length, or happy = rich & healthy
- Reduction

Data normalization
Transform the data to lie in a specific range. Useful for:
- Neural-network back-propagation: speeds up learning
- Distance-based mining methods (e.g., clustering): prevents attributes with initially large ranges from outweighing attributes with initially small ranges
Three techniques: min-max, z-score, and decimal scaling.

Min-max normalization
Maps the attribute's value range [min, max] onto a new range [min', max']:

    v' = (v - min) / (max - min) * (max' - min') + min'

It can detect out-of-bound data, but outliers may dominate the normalization.

Z-score (zero-mean) normalization
For a value v of attribute A with mean mean(A) and standard deviation sd(A):

    v' = (v - mean(A)) / sd(A)

Useful when the min and max of A are unknown, or when outliers would dominate min-max normalization.
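Both normalizations as short, self-contained functions (a sketch; the sample ages are arbitrary):

```python
def min_max(values, new_min=0.0, new_max=1.0):
    """v' = (v - min)/(max - min) * (new_max - new_min) + new_min."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * (new_max - new_min) + new_min for v in values]

def z_score(values):
    """v' = (v - mean) / sample standard deviation."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return [(v - mean) / sd for v in values]

ages = [18, 22, 25, 30, 45, 60]
print(min_max(ages))   # rescaled into [0, 1]
print(z_score(ages))   # mean ~0, standard deviation ~1
```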

Z-score normalization (example)
The original slide tabulates two attributes next to their z-scores. The first attribute has mean 0.68 and standard deviation 0.59, the second has mean 34.3 and standard deviation 55.9, and each value v is replaced by (v - mean)/sd: for instance 0.18 becomes (0.18 - 0.68)/0.59 = -0.84, 20 becomes (20 - 34.3)/55.9 = -0.26, and the outlier 250 becomes about 3.87.

Decimal scaling
    v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1.
Example: suppose the values of A range from -300 to 250, so the largest absolute value is 300 (a function version appears below):
- j = 1: 300/10 = 30 >= 1
- j = 2: 300/100 = 3 >= 1
- j = 3: 300/1000 = 0.3 < 1
So every value is normalized by dividing by 1,000 (e.g., 99 becomes 0.099).

Data reduction
Obtains a reduced representation of the data set that is much smaller in volume yet produces (almost) the same analytical results. Data reduction strategies:
- Data cube aggregation
- Dimension reduction: remove unimportant attributes
- Data compression
- Numerosity reduction: replace the data by smaller models
- Discretization and hierarchy generation
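Decimal scaling as a small function; the sample values mirror the magnitudes in the example above:

```python
def decimal_scaling(values):
    """Divide by 10^j for the smallest j that brings every |v| below 1."""
    max_abs = max(abs(v) for v in values)
    j = 0
    while max_abs / (10 ** j) >= 1:
        j += 1
    return [v / (10 ** j) for v in values], j

scaled, j = decimal_scaling([-300, 99, 250])
print(j, scaled)   # 3 [-0.3, 0.099, 0.25]
```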

Data cube aggregation
- Aggregation gives summarized data represented in a smaller volume than the initial data, e.g., total monthly sales (12 entries per year) vs. total annual sales (one entry); a small pandas sketch of this follows below.
- Each cell of a data cube holds an aggregate value, i.e., a point in a multi-dimensional space. The base cuboid should correspond to an entity of interest, a useful unit of analysis.
- Aggregating to cuboids at higher levels of the lattice further reduces the data size; use the smallest cuboid relevant to the OLAP queries.

Dimension reduction
Goal: detect and remove irrelevant or redundant attributes (dimensions) of the data. Example: mining a customer profile for marketing a product, when the product is CDs, age matters but phone number does not; for grocery items, can you name three relevant attributes? Motivation: irrelevant attributes give poor mining results, and the larger data volume slows down the mining process.

Feature selection
Feature selection (attribute subset selection) aims to find a minimum set of features such that the resulting probability distribution of the data classes is as close as possible to the original distribution obtained using all features. An additional benefit is that patterns found over fewer features are easier to understand. The issue is computational complexity: there are 2^d possible subsets of d features, so heuristic search (e.g., greedy search) is used.

Heuristic search
Evaluation functions that guide the search direction can be:
- Tests of statistical significance of attributes (what is the drawback of this method?)
- Information-theoretic measures, e.g., information gain (keep attributes with high information gain for the mining task), as in decision tree induction
Search strategies include step-wise forward selection, step-wise backward elimination, a combination of forward selection and backward elimination, and optimal branch and bound.
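The monthly-to-annual aggregation mentioned above, sketched with pandas; the year/month/sales columns and numbers are hypothetical:

```python
import pandas as pd

# Hypothetical monthly sales for two years: 24 rows.
monthly = pd.DataFrame({
    "year":  [2008] * 12 + [2009] * 12,
    "month": list(range(1, 13)) * 2,
    "sales": [100 + m for m in range(24)],
})

# Aggregating away the month dimension: 24 cells become 2.
annual = monthly.groupby("year", as_index=False)["sales"].sum()
print(annual)
```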

Search strategies
- Step-wise forward selection: start with an empty attribute set and iteratively add the best remaining attribute (sketched in code below).
- Step-wise backward elimination: start with the full attribute set and iteratively remove the worst attribute.
- Combined: at each iteration step, add the best attribute and remove the worst.
- Optimal branch and bound: use feature elimination and backtracking.

Feature selection (cont)
- Decision tree induction (e.g., ID3, C4.5; see later): input the data, output a decision tree that best represents it; attributes that do not appear in the tree are assumed to be irrelevant (see the example below).
- Wrapper approach [Kohavi and John 97]: greedy search over attribute sets for classification, where the evaluation function is the error obtained by running a mining algorithm (e.g., decision tree induction) on each candidate set.

Example: decision tree induction
Initial attribute set: {A1, A2, A3, A4, A5, A6}. The induced tree tests A4 at the root and then A1 and A6, with leaves labelled Class 1 and Class 2, so the reduced attribute set is {A1, A4, A6}.

Data compression
Encodings and transformations are applied to obtain a compressed representation of the original data. Two types:
- Lossless compression: the original data can be reconstructed exactly from the compressed data.
- Lossy compression: only an approximation of the original data can be reconstructed.
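A sketch of step-wise forward selection in the wrapper spirit. Here evaluate stands in for whatever evaluation function is chosen (information gain, a significance test, or the cross-validated accuracy of a classifier); the toy scoring function is invented purely so the example runs.

```python
def forward_selection(all_features, evaluate, max_features=None):
    """Greedy step-wise forward selection.

    evaluate(subset) must return a score where higher is better,
    e.g., cross-validated accuracy of a classifier trained on that subset.
    """
    selected, best_score = [], float("-inf")
    limit = max_features if max_features is not None else len(all_features)
    while len(selected) < limit:
        candidates = [f for f in all_features if f not in selected]
        if not candidates:
            break
        # Try adding each remaining feature and keep the best one.
        score, feature = max((evaluate(selected + [f]), f) for f in candidates)
        if score <= best_score:        # no improvement: stop
            break
        selected.append(feature)
        best_score = score
    return selected

# Toy evaluation: pretend only A1, A4 and A6 carry signal.
useful = {"A1": 0.30, "A4": 0.50, "A6": 0.15}
toy_score = lambda subset: sum(useful.get(f, -0.01) for f in subset)

print(forward_selection(["A1", "A2", "A3", "A4", "A5", "A6"], toy_score))
# -> ['A4', 'A1', 'A6'], matching the reduced attribute set in the tree example
```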

Data compression (cont)
- String compression: extensive theory and well-tuned algorithms exist; typically lossless, but only limited manipulation of the compressed data is possible.
- Audio/video compression: typically lossy, with progressive refinement; sometimes small fragments of the signal can be reconstructed without reconstructing the whole.
- Time sequences (not audio; think of collected measurements) are typically short and vary slowly with time.
Two important lossy compression techniques for data reduction: wavelets and principal components.

Wavelet transformation
The discrete wavelet transform (DWT) is a linear signal-processing technique that transforms a data vector into a vector of wavelet coefficients of the same length. Popular wavelet transforms are Haar-2 and Daubechies-4 (the number relates to properties of the coefficients). An approximation of the data can be retained by storing only a small fraction of the strongest coefficients; the approximated data have noise removed without losing the main features. The DWT is similar to the discrete Fourier transform (DFT), but for the same number of coefficients it is more accurate and it requires less space.

Method (sketched):
- The data vector length L must be an integer power of 2 (pad with 0s when necessary).
- Each transform applies two functions: smoothing and differencing.
- Apply them to pairs of data points, producing a low-frequency and a high-frequency data set, each of length L/2; repeat recursively until the desired length is reached.
- Select values from the data sets produced by these iterations as the wavelet coefficients of the transformed data (a small Haar example follows below).
- Applying the inverse DWT to a set of wavelet coefficients reconstructs an approximation of the original data.

DWT for image compression
DWT gives good results on sparse, skewed, or ordered attribute data, with better results than JPEG compression.
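The pairwise smoothing/differencing recursion can be illustrated with an unnormalized Haar transform; this is a simplified sketch, not necessarily the exact variant intended by the slides.

```python
def haar_dwt(data):
    """One-dimensional Haar decomposition of a vector whose length is a power of 2.

    Returns [overall average, coarse detail, ..., finest details]; the original
    vector can be rebuilt by inverting the averaging/differencing steps.
    """
    coeffs = []
    current = list(data)
    while len(current) > 1:
        averages = [(current[i] + current[i + 1]) / 2 for i in range(0, len(current), 2)]
        details  = [(current[i] - current[i + 1]) / 2 for i in range(0, len(current), 2)]
        coeffs = details + coeffs      # coarser details are prepended, finest stay at the end
        current = averages
    return current + coeffs

signal = [2, 2, 0, 2, 3, 5, 4, 4]
print(haar_dwt(signal))   # [2.75, -1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0]
# Zeroing all but the largest-magnitude coefficients gives the lossy, reduced
# representation described above.
```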

Principal component analysis
- The original data set of N k-dimensional vectors is reduced to N vectors expressed on c principal components (k-dimensional orthogonal vectors that can best be used to represent the data), i.e., N x k becomes N x c with c <= k.
- Each data vector becomes a linear combination of the c principal component vectors (which are not necessarily a subset of the initial attribute set).
- Works for numeric data only; the computation is inexpensive, so PCA is used when the number of dimensions is large.
- The principal components serve as a new set of axes for the data, ordered by the amount of data variance they capture: if Y1 and Y2 are the first two PCs of data lying in the plane spanned by X1 and X2, the variance of the data along Y1 is higher than along Y2. (A numpy sketch follows below.)

Numerosity reduction
- Parametric methods: assume the data fit some model, estimate the model parameters, store only the parameters, and discard the actual data (except possibly outliers). Examples: regression and log-linear models.
- Non-parametric methods: do not assume a model; store the data in reduced representations; more amenable to hierarchical structures than parametric methods. Major families: histograms, clustering, and sampling.

Parametric methods
- Linear regression approximates a straight-line model Y = a + b X, using the least-squares method to minimize the error based on the distances between the actual data and the model.
- Multiple regression approximates a linear model with multiple predictors, Y = a + b1 X1 + ... + bn Xn, and can approximate non-linear regression models via transformations (Ch 7).
- Log-linear models approximate a model (a probability distribution) of discrete data.
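A compact PCA sketch with numpy: center the data, take the covariance matrix, and project onto the eigenvectors with the largest eigenvalues. The random data set is only for illustration.

```python
import numpy as np

def pca(X, c):
    """Project the rows of X (N x k) onto the top c principal components."""
    X = np.asarray(X, float)
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)       # k x k covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    top = np.argsort(eigvals)[::-1][:c]          # indices of the c largest variances
    components = eigvecs[:, top]                 # k x c
    return X_centered @ components, components

rng = np.random.default_rng(0)
# 200 points in 5 dimensions whose variance lies mostly in 2 directions.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5)) + 0.05 * rng.normal(size=(200, 5))
reduced, comps = pca(X, c=2)
print(reduced.shape)   # (200, 2): N x k reduced to N x c
```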

Non-parametric method: histograms
Histograms use binning to approximate a data distribution. For a given attribute, the data are partitioned into buckets, and each bucket represents a value (or value range) and its frequency of occurrence. Partitioning methods:
- Equi-width: each bucket covers the same value range.
- Equi-depth: each bucket holds the same number of data occurrences.
- V-optimal: among histograms with a given number of buckets, the one with the least variance (see the text for the definition of variance).
- MaxDiff: use the differences between pairs of adjacent sorted values to place bucket boundaries at the largest gaps. Example from the slide: 1, 1, 2, 2, 2, 3, 5, 5, 7, 8, 8, 9 partitioned into the three buckets 1,1,2,2,2,3 | 5,5,7,8,8 | 9. (A simplified sketch follows below.)
Histograms are effective for approximating sparse and dense data as well as uniform and skewed data; for multi-dimensional data they are typically effective up to about five dimensions.

Non-parametric method: clustering
Partition the data objects into clusters so that objects within the same cluster are similar. Cluster quality can be measured by the maximum distance between two objects in the cluster, or by the centroid distance (the average distance of each cluster object from the cluster centroid). The actual data are then reduced to a representation by clusters. Some data cannot be clustered effectively (e.g., smeared data), and clusters can be organized hierarchically. Further techniques and definitions are in Ch 8.

Non-parametric method: sampling
Data reduction by finding a representative sample of the data. Sampling techniques:
- Simple random sampling (SRS), either without replacement (SRSWOR) or with replacement (SRSWR)
- Cluster sample: the data set is grouped into M clusters, and SRS selects m of the M clusters
- Stratified sample (an adaptive sampling method): apply SRS within each class (stratum) so that the sample has representative data from every class. When should stratified sampling be used?
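A simplified MaxDiff-style sketch that cuts at the largest gaps between adjacent sorted values. This simplification ignores the frequency weighting of full MaxDiff histograms, so on the data above it splits after 3 and after 5 (the two widest gaps) rather than exactly as in the slide's illustration.

```python
from collections import Counter

def maxdiff_buckets(values, n_buckets):
    """Split sorted values into n_buckets at the (n_buckets - 1) largest adjacent gaps."""
    vals = sorted(values)
    gaps = [(vals[i + 1] - vals[i], i) for i in range(len(vals) - 1)]
    # positions after which to cut, chosen by decreasing gap size
    cuts = sorted(i for _, i in sorted(gaps, reverse=True)[:n_buckets - 1])
    buckets, start = [], 0
    for i in cuts:
        buckets.append(vals[start:i + 1])
        start = i + 1
    buckets.append(vals[start:])
    return buckets

data = [1, 1, 2, 2, 2, 3, 5, 5, 7, 8, 8, 9]
for bucket in maxdiff_buckets(data, 3):
    print(bucket, Counter(bucket))   # each bucket with its value frequencies
```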

Sampling (cont)
(The original slides show a raw data set side by side with a cluster sample and with a stratified sample that keeps 40% of each class.)
- Sampling lets a mining algorithm run in complexity that is potentially sub-linear in the size of the original data: with data set size N and sample size n, the cost of obtaining the sample is proportional to the sample size n.
- Sampling complexity increases only linearly as the number of dimensions increases. (The basic sampling schemes are sketched in code below.)

Discretization
Three types of attribute values:
- Nominal: values from an unordered set
- Ordinal: values from an ordered set
- Continuous: real numbers
Discretization divides the range of values of a continuous attribute into intervals. Why? To prepare the data for analysis by mining algorithms that accept only categorical attributes, and to reduce the data size.
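SRSWOR, SRSWR, and a stratified sample using only the standard library; the records and the 40% fraction (echoing the slide's stratified example) are illustrative:

```python
import random
from collections import defaultdict

random.seed(1)
data = [("A", i) for i in range(30)] + [("B", i) for i in range(300, 310)]  # 30 + 10 records

n = 8
srswor = random.sample(data, n)                    # simple random sample without replacement
srswr  = [random.choice(data) for _ in range(n)]   # simple random sample with replacement

def stratified_sample(records, fraction):
    """Apply SRSWOR separately within each class (stratum)."""
    strata = defaultdict(list)
    for label, value in records:
        strata[label].append((label, value))
    sample = []
    for rows in strata.values():
        k = max(1, round(fraction * len(rows)))
        sample.extend(random.sample(rows, k))
    return sample

sample = stratified_sample(data, 0.4)
print(len(sample))   # 16: 40% of each class, so the rare class "B" is still represented
```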

Discretization and concept hierarchies
- Discretization reduces the number of values of a continuous attribute by dividing its range into intervals, e.g., age groups [1,10], [11,19], [20,40], [41,59], [60,100].
- Concept hierarchies reduce the data by collecting low-level concepts and replacing them with higher-level concepts, e.g., orange and grapefruit roll up to citrus, apple and banana to non-citrus, both to fruit, and fruit to produce.

Entropy
Shannon's information-theoretic measure approximates the information carried by messages m_1, ..., m_n:

    Ent({m_1, ..., m_n}) = - sum_i p(m_i) log2 p(m_i)

and for a random variable X,

    Ent(X) = - sum_x p(x) log2 p(x).

Example: tossing a balanced coin (H H T H T T H ...), X = {H, T} with P(H) = P(T) = 1/2:
    Ent(X) = -1/2 log2(1/2) - 1/2 log2(1/2) = -log2(1/2) = 1.
What if the coin is two-headed? Then Ent(X) = 0: the outcome of X is certain, so observing it carries no information.

Entropy-based discretization
For an attribute value set S in which each value is labeled with a class from C, with p_i the probability of class i in S:

    Ent(S) = - sum_{i in C} p_i log2 p_i

Example: with elements of the form (data value, class), C = {A, B} and S = {(1, A), (1, B), (3, A), (5, B), (5, B)},
    Ent(S) = -2/5 log2(2/5) - 3/5 log2(3/5) (about 0.97),
the information (about the classification) captured by the data values of S.

Goal: discretize the attribute value set S so as to maximize the information S provides for classifying the classes in C. If S is partitioned by a threshold T into two intervals S1 = (-inf, T) and S2 = [T, +inf), the expected class information entropy induced by T is

    I(S, T) = (|S1|/|S|) Ent(S1) + (|S2|/|S|) Ent(S2)

and the information gain is Gain(S, T) = Ent(S) - I(S, T). The idea is to find the T (among the candidate data points) that minimizes I(S, T), i.e., maximizes the information gain, and then to recurse on the resulting partitions until a stopping criterion is met (e.g., the gain no longer exceeds a threshold). This may reduce the data size and improve classification accuracy.
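One step of this entropy-based splitting, computed on the five labeled values from the example above (the function names are mine):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Ent(S) = -sum_i p_i log2 p_i over the class labels in S."""
    counts, total = Counter(labels), len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def best_split(samples):
    """Find the threshold T minimizing I(S,T) = |S1|/|S| Ent(S1) + |S2|/|S| Ent(S2)."""
    samples = sorted(samples)                      # (value, class) pairs
    labels = [c for _, c in samples]
    best = None
    for i in range(1, len(samples)):
        t = samples[i][0]
        if t == samples[i - 1][0]:
            continue                               # same value: not a valid cut point
        left, right = labels[:i], labels[i:]       # S1 = (-inf, t), S2 = [t, +inf)
        info = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
        if best is None or info < best[0]:
            best = (info, t)
    return best[1], entropy(labels) - best[0]      # threshold T and Gain(S, T)

S = [(1, "A"), (1, "B"), (3, "A"), (5, "B"), (5, "B")]
print(entropy([c for _, c in S]))   # ~0.971, matching Ent(S) above
print(best_split(S))                # T = 5 with gain ~0.42
```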

Entropy-based discretization (cont)
Let N be the number of data points in S. In [Fayyad & Irani, 1993] the recursive partitioning at a threshold T is accepted only while

    Gain(S, T) > log2(N - 1)/N + f(S, T)/N,

where f(S, T) = log2(3^k - 2) - [k Ent(S) - k1 Ent(S1) - k2 Ent(S2)] and k, k1, k2 are the numbers of class labels represented in S, S1, S2; the recursion terminates once the gain no longer exceeds this threshold.

Segmentation by natural partitioning
Interval boundaries should be intuitive ("natural"), e.g., 50 rather than 52.7. A 3-4-5 rule can be used to segment numeric data into relatively uniform, natural intervals:
- If an interval covers 3, 6, 7, or 9 distinct values at the most significant digit, partition the range into 3 equi-width intervals.
- If it covers 2, 4, or 8 distinct values at the most significant digit, partition the range into 4 intervals.
- If it covers 1, 5, or 10 distinct values at the most significant digit, partition the range into 5 intervals.

Example of the 3-4-5 rule
The slide works through profit data with Min = -$351, Low (5th percentile) = -$159, High (95th percentile) = $1,838, and Max = $4,700:
- Step 1: use Low and High (the 5th and 95th percentiles) rather than Min and Max, so extreme values do not distort the partitioning.
- Step 2: the most significant digit is at the $1,000 position; Low rounds to -$1,000 and High to $2,000.
- Step 3: the range (-$1,000, $2,000) covers 3 distinct values at the most significant digit (3,000/1,000 = 3), so it is split into three equi-width intervals: (-$1,000, 0], (0, $1,000], ($1,000, $2,000].
- Step 4: since $2,000 does not cover Max, a new interval ($2,000, $5,000] is added; since -$1,000 lies far below Min, the first interval is shrunk to (-$400, 0] (msd = 100, and -$400 is closer to -$351). Each top-level interval is then partitioned again by the same rule: (-$400, 0] into 4 intervals of $100, (0, $1,000] and ($1,000, $2,000] into 5 intervals of $200 each, and ($2,000, $5,000] into 3 intervals of $1,000.

Hierarchy generation for categorical data
Categorical data are discrete, possibly with many values (e.g., city, name) and no ordering. Concept hierarchies for categorical data can be created by:
- A user-specified total or partial ordering of attributes, e.g., street < city < state < country.
- Specifying a portion of the hierarchy by data groupings, e.g., {Texas, Alabama} into Southern_US as part of state < country.
- Specifying only a partial set of attributes, e.g., only street < city and nothing else for the location dimension.
- Automatically generating a partial ordering by analyzing the number of distinct values per attribute, with the heuristic that the top (most general) level of the hierarchy has the smallest number of distinct values (sketched in code below).
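The distinct-value-count heuristic for ordering a categorical hierarchy, in a few lines; the location records are hypothetical:

```python
# Hypothetical location records: (street, city, state, country)
records = [
    ("12 Oak St", "Lubbock", "Texas",    "USA"),
    ("98 Elm St", "Lubbock", "Texas",    "USA"),
    ("5 Main St", "Austin",  "Texas",    "USA"),
    ("3 Low Rd",  "Norman",  "Oklahoma", "USA"),
    ("7 King Rd", "Toronto", "Ontario",  "Canada"),
]
attributes = ["street", "city", "state", "country"]

# Count distinct values per attribute; the fewer distinct values, the more
# general the attribute, so it sits nearer the top of the hierarchy.
distinct = {attr: len({row[i] for row in records}) for i, attr in enumerate(attributes)}
hierarchy = sorted(attributes, key=lambda a: distinct[a])          # most general first
print(distinct)                           # {'street': 5, 'city': 4, 'state': 3, 'country': 2}
print(" < ".join(reversed(hierarchy)))    # street < city < state < country
```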

Automatic hierarchy generation
Some concept hierarchies can be generated automatically by analyzing the number of distinct values per attribute in the given data set: the attribute with the most distinct values is placed at the lowest level of the hierarchy. For example:
- country: 15 distinct values
- state: 65 distinct values
- city: 3,567 distinct values
- street: 674,339 distinct values
Use this heuristic with care: weekday (7 values), month (12), and year (20, say) would be ordered incorrectly.

Summary
- Data preparation is a big issue for both warehousing and mining.
- Data preparation includes data cleaning and data integration, data transformation and normalization, and data reduction (feature selection, discretization).
- Many methods have been developed, but data preparation remains an active area of research.

References
- E. Rahm and H. H. Do. Data Cleaning: Problems and Current Approaches. IEEE Bulletin of the Technical Committee on Data Engineering, 23(4), 2000.
- D. P. Ballou and G. K. Tayi. Enhancing data quality in data warehouse environments. Communications of the ACM, 42:73-78, 1999.
- H. V. Jagadish et al. Special Issue on Data Reduction Techniques. Bulletin of the Technical Committee on Data Engineering, 20(4), December 1997.
- A. Maydanchik. Challenges of Efficient Data Cleansing. DM Review (Data Quality resource portal).
- D. Pyle. Data Preparation for Data Mining. Morgan Kaufmann, 1999.
- D. Quass. A Framework for Research in Data Cleaning. Draft, 1999.
- V. Raman and J. Hellerstein. Potter's Wheel: An Interactive Framework for Data Cleaning and Transformation. VLDB 2001.
- T. Redman. Data Quality: Management and Technology. Bantam Books, New York, 1992.
- Y. Wand and R. Wang. Anchoring data quality dimensions in ontological foundations. Communications of the ACM, 39:86-95, 1996.
- R. Wang, V. Storey, and C. Firth. A framework for analysis of data quality research. IEEE Trans. Knowledge and Data Engineering, 7:623-640, 1995.
- http://www.cs.ucla.edu/classes/spring01/cs240b/notes/data-integration1.pdf