Data Preprocessing. Komate AMPHAWAN


Data cleaning (data cleansing): attempts to fill in missing values, smooth out noise while identifying outliers, and correct inconsistencies in the data.

Missing values: how can we go about filling in the missing values for a given attribute?

Causes of missing values: incomplete data entry, equipment malfunction, inconsistencies when deleting data, and fields left blank because the user thought they were unimportant.

Missing values [1] (1) Ignore the tuple: usually done when the class label is missing (assuming the mining task involves classification). Not very effective unless the tuple contains several attributes with missing values; especially poor when the percentage of missing values per attribute varies considerably.

Missing values [2] (2) Fill in the missing value manually: time-consuming, and may not be feasible given a large data set with many missing values. (3) Use a global constant to fill in the missing value: replace all missing attribute values by the same constant, such as a label like "Unknown" or −∞.

Missing values [3] (4) Use the attribute mean to fill in the missing value. (5) Use the attribute mean for all samples belonging to the same class as the given tuple: e.g., if classifying customers according to credit risk, replace the missing value with the average income of customers in the same credit risk category as the given tuple.

Missing values [4] (6) Use the most probable value to fill in the missing value: this may be determined with regression, inference-based tools using a Bayesian formalism, or decision tree induction. Note that methods (3) to (6) bias the data; the filled-in value may not be correct.
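As an illustration of methods (4) and (5), a minimal pandas sketch on a hypothetical customer table (the column names income and credit_risk and all values are invented for the example):

    import pandas as pd

    # Hypothetical customer table with one missing income value.
    df = pd.DataFrame({
        "credit_risk": ["low", "low", "high", "high"],
        "income":      [60000, 52000, 31000, None],
    })

    # (4) Fill with the overall attribute mean.
    filled_global = df["income"].fillna(df["income"].mean())

    # (5) Fill with the mean of tuples in the same class (credit_risk).
    class_mean = df.groupby("credit_risk")["income"].transform("mean")
    filled_by_class = df["income"].fillna(class_mean)

    print(filled_global.tolist())    # the gap becomes the overall mean
    print(filled_by_class.tolist())  # the gap becomes the "high"-risk mean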

Noisy data: what is noise?

Noise: in everyday usage, noise means any unwanted sound; in data, noise is a random error or variance in a measured variable. Causes of noise: equipment malfunction, problems when entering or collecting data (human or computer error), errors in data transmission, and limited buffer size.

Data smoothing techniques [1] Binning: smooths a sorted data value by consulting its neighborhood, that is, the values around it. It consists of 3 steps: o sort the data values o partition the sorted data into buckets, or bins o perform local smoothing.

Example: in smoothing by bin means, each value in a bin is replaced by the mean value of the bin. In smoothing by bin boundaries, the minimum and maximum values in a given bin are identified as the bin boundaries; each bin value is then replaced by the closest boundary value.
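A minimal sketch of equal-depth binning with both smoothing options; the price values and the bin size are invented for the example:

    # Hypothetical sorted price data, partitioned into equal-depth bins of 3 values.
    prices = [4, 8, 15, 21, 21, 24, 25, 28, 34]
    bin_size = 3
    bins = [prices[i:i + bin_size] for i in range(0, len(prices), bin_size)]

    # Smoothing by bin means: every value becomes its bin's mean.
    by_means = [[sum(b) / len(b)] * len(b) for b in bins]

    # Smoothing by bin boundaries: every value becomes the closest of the bin's min/max.
    by_bounds = [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b]
                 for b in bins]

    print(by_means)   # [[9.0, 9.0, 9.0], [22.0, 22.0, 22.0], [29.0, 29.0, 29.0]]
    print(by_bounds)  # [[4, 4, 15], [21, 21, 24], [25, 25, 34]]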

Data smoothing techniques [2] Regression: smooth by fitting the data to a function, such as with regression. Linear regression involves finding the best line to fit two attributes (or variables), so that one attribute can be used to predict the other: Y = α + βX, where α and β are the regression coefficients. Multiple linear regression is an extension of linear regression, where more than two attributes are involved and the data are fit to a multidimensional surface: Y = b0 + b1 X1 + b2 X2 + ... + bm Xm, where b0, ..., bm are the regression coefficients.
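A minimal sketch of regression-based smoothing on invented data: fit Y = α + βX by least squares and use the fitted line in place of the observed values:

    import numpy as np

    # Hypothetical paired observations of attributes X and Y.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

    # polyfit with degree 1 returns (slope, intercept), i.e. beta and alpha.
    beta, alpha = np.polyfit(x, y, deg=1)
    y_smoothed = alpha + beta * x
    print(round(alpha, 3), round(beta, 3))
    print(y_smoothed)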

Data smoothing techniques [3] Clustering: outliers may be detected by clustering, where similar values are organized into groups, or clusters. Values that fall outside of the set of clusters may be considered outliers.

Example (figure omitted).

DATA INTEGRATION

Data Integration [1] Data integration combines data from multiple sources into a coherent data store, as in data warehousing. These sources may include multiple databases, data cubes, or flat files. Data integration helps to reduce data redundancies (which may cause data inconsistencies) and to speed up and improve the quality of the mining process.

Data Integration [2] Schema integration: uses metadata (data about data) to identify matching entities in different data repositories. Example: ClusID in database A has the same meaning as CustNumber in database B. Detecting and resolving data value conflicts: attributes representing the same concept may contain different values, e.g. color recorded as black vs. bk, or lengths recorded in feet vs. meters.

Data Integration [3] Removing data redundancies: correlation analysis between two attributes A and B can be used, e.g. the correlation coefficient r_A,B = Σ_i (a_i − Ā)(b_i − B̄) / (N σ_A σ_B), where N is the number of tuples, a_i and b_i are the respective values of A and B in tuple i, Ā and B̄ are the mean values of A and B, and σ_A and σ_B are the respective standard deviations of A and B. Note that −1 ≤ r_A,B ≤ +1.

Data Integration [4] If r_A,B is greater than 0, then A and B are positively correlated: the values of A increase as the values of B increase. The higher the value, the stronger the correlation; a high value may indicate that A (or B) can be removed as a redundancy.

Data Integration [5] If r_A,B is equal to 0, then A and B are independent and there is no correlation between them. If r_A,B is less than 0, then A and B are negatively correlated: the values of one attribute increase as the values of the other attribute decrease.
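A minimal sketch computing r_A,B on invented values; NumPy's built-in corrcoef gives the same Pearson coefficient:

    import numpy as np

    # Hypothetical paired values of attributes A and B over N tuples.
    a = np.array([12.0, 15.0, 20.0, 22.0, 30.0])
    b = np.array([1.1, 1.4, 2.0, 2.1, 2.9])

    n = len(a)
    # r_A,B = sum((a_i - mean_A) * (b_i - mean_B)) / (N * sigma_A * sigma_B)
    r_ab = ((a - a.mean()) * (b - b.mean())).sum() / (n * a.std() * b.std())
    print(r_ab)                      # close to +1: A and B are strongly correlated
    print(np.corrcoef(a, b)[0, 1])   # same value from NumPy's built-in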

DATA TRANSFORMATION

Data Transformation [1] Data are transformed or consolidated into forms appropriate for mining. Smoothing: remove noise from the data; techniques include binning, regression, and clustering. Aggregation: summarize or aggregate data, e.g. o daily sales data may be aggregated so as to compute monthly and annual total amounts o used in constructing a data cube for analysis of the data at multiple granularities.

Data Transformation [2] Generalization: low-level or primitive (raw) data are replaced by higher-level concepts through the use of concept hierarchies, e.g. o street can be generalized to city or country o age may be mapped to concepts like youth, middle-aged, and senior. Normalization: attribute data are scaled so as to fall within a small specified range, such as −1.0 to 1.0, or 0.0 to 1.0.

Data Transformation [3] Attribute/feature construction: new attributes are constructed and added from the given set of attributes to help the mining process.

Normalization [1] An attribute is normalized by scaling its values so that they fall within a small specified range, such as 0.0 to 1.0. Techniques: min-max normalization, z-score normalization, and normalization by decimal scaling.

Min-max normalization [1] A linear transformation on the original data. Let min_A and max_A be the minimum and maximum values of an attribute A. Min-max normalization maps a value v of A to v' in the range [new_min_A, new_max_A] by computing v' = ((v − min_A) / (max_A − min_A)) (new_max_A − new_min_A) + new_min_A.

Min-max normalization [2] Min-max normalization preserves the relationships among the original data values. It will encounter an out-of-bounds error if a future input case for normalization falls outside of the original data range for A.

Example of min-max normalization Suppose that the minimum and maximum values for the attribute income are $12,000 and $98,000, respectively, and we map income to the range [0.0, 1.0]. A value of $73,600 for income is transformed to ((73,600 − 12,000) / (98,000 − 12,000)) × (1.0 − 0.0) + 0.0 = 0.716.
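A one-function sketch that reproduces the income example above:

    def min_max_normalize(v, min_a, max_a, new_min=0.0, new_max=1.0):
        """Map v from [min_a, max_a] onto [new_min, new_max]."""
        return (v - min_a) / (max_a - min_a) * (new_max - new_min) + new_min

    # $73,600 in [$12,000, $98,000] mapped to [0.0, 1.0].
    print(round(min_max_normalize(73_600, 12_000, 98_000), 3))  # 0.716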

Z-score (zero-mean) normalization The values of an attribute A are normalized based on the mean and standard deviation of A. A value v of A is normalized to v' by computing v' = (v − Ā) / σ_A, where Ā and σ_A are the mean and standard deviation of A. Z-score normalization is useful when the actual minimum and maximum of attribute A are unknown, or when there are outliers that dominate the min-max normalization.

Example of z-score normalization Suppose that the mean and standard deviation of the values for the attribute income are $54,000 and $16,000, respectively. A value of $73,600 for income is transformed to (73,600 − 54,000) / 16,000 = 1.225.
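A one-function sketch that reproduces the z-score example above:

    def z_score_normalize(v, mean_a, std_a):
        """Normalize v using the mean and standard deviation of attribute A."""
        return (v - mean_a) / std_a

    # Mean $54,000 and standard deviation $16,000 for income.
    print(z_score_normalize(73_600, 54_000, 16_000))  # 1.225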

Decimal scaling Normalization by decimal scaling moves the decimal point of the values of attribute A. The number of decimal points moved depends on the maximum absolute value of A. A value v of A is normalized to v' by computing v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1.

Example of decimal scaling Suppose that the recorded values of A range from −986 to 917. The maximum absolute value of A is 986. To normalize by decimal scaling, we therefore divide each value by 1,000 (i.e., j = 3), so that −986 normalizes to −0.986 and 917 normalizes to 0.917.
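A short sketch that finds j automatically and reproduces the example above:

    def decimal_scaling(values):
        """Divide every value by 10^j, with the smallest j making all |v'| < 1."""
        j = 0
        while max(abs(v) for v in values) / (10 ** j) >= 1:
            j += 1
        return [v / 10 ** j for v in values], j

    scaled, j = decimal_scaling([-986, 917])
    print(j, scaled)  # 3 [-0.986, 0.917]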

Attribute construction New attributes are constructed from the given attributes and added in order to help improve the accuracy and understanding of structure in high-dimensional data. For example, we may add the attribute area based on the attributes height and width. By combining attributes, attribute construction can discover missing information about the relationships between data attributes.
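A minimal pandas sketch of the height/width example (the values are invented):

    import pandas as pd

    # Hypothetical table with height and width; construct a new area attribute.
    df = pd.DataFrame({"height": [2.0, 3.5, 4.0], "width": [1.5, 2.0, 2.5]})
    df["area"] = df["height"] * df["width"]
    print(df)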


Data reduction Obtains a reduced representation of the data set that is much smaller in volume, yet closely maintains the integrity of the original data. Strategies: data cube aggregation, attribute subset selection, dimensionality reduction, numerosity reduction, and discretization and concept hierarchy generation.

Data Cube Aggregation Data can be aggregated so that the resulting data set is smaller in volume, without loss of the information necessary for the analysis task.

Example of data cube aggregation Sales data for a given branch of AllElectronics for the years 2002 to 2004 (figure omitted): on the left, the sales are shown per quarter; on the right, the data are aggregated to provide the annual sales.
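A minimal pandas sketch of the same idea with invented quarterly figures; aggregating to one row per year gives a smaller representation of the data:

    import pandas as pd

    # Hypothetical quarterly sales for one branch; aggregate to annual totals.
    quarterly = pd.DataFrame({
        "year":    [2002, 2002, 2002, 2002, 2003, 2003, 2003, 2003],
        "quarter": ["Q1", "Q2", "Q3", "Q4", "Q1", "Q2", "Q3", "Q4"],
        "sales":   [224, 408, 350, 586, 330, 416, 390, 570],
    })
    annual = quarterly.groupby("year", as_index=False)["sales"].sum()
    print(annual)  # one row per year instead of four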

Why is attribute subset selection needed? Data sets may contain hundreds of attributes, many of which may be irrelevant or redundant to the mining task. This can result in discovered patterns of poor quality, and irrelevant or redundant attributes can slow down the mining process.

Attribute subset selection Reduces the data set size by removing irrelevant or redundant attributes. Goal: find a minimum set of attributes such that the resulting probability distribution of the data classes is as close as possible to the original distribution obtained using all attributes. Mining on a reduced set of attributes also reduces the number of attributes appearing in the discovered patterns, helping to make the patterns easier to understand.

How can we find a good subset of the original attributes? Greedy methods: while searching through attribute space, they always make what looks to be the best choice at the time. The best (and worst) attributes are typically determined using tests of statistical significance.

Greedy heuristic methods for attribute subset selection (figure omitted).

Other techniques [1] Stepwise forward selection: starts with an empty set of attributes as the reduced set. At each step, the best of the original attributes is determined and added to the reduced set. Stepwise backward elimination: starts with the full set of attributes. At each step, it removes the worst attribute remaining in the set.

Other techniques [2] Combination of forward selection and backward elimination: at each step, the procedure selects the best attribute and removes the worst from among the remaining attributes. Decision tree induction: constructs a tree in which each non-leaf node denotes a test on an attribute and each leaf node denotes a class prediction. At each node, the algorithm chooses the best attribute to partition the data into individual classes.
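A minimal sketch of stepwise forward selection; score() stands in for whatever subset-evaluation measure is used (e.g. a statistical significance test or a classifier's cross-validated accuracy), and the attribute names and weights are invented:

    # Greedy stepwise forward selection (sketch).
    def forward_selection(attributes, score, k):
        selected = []
        remaining = list(attributes)
        while remaining and len(selected) < k:
            # Pick the attribute whose addition gives the best score.
            best = max(remaining, key=lambda a: score(selected + [a]))
            if score(selected + [best]) <= score(selected):
                break  # no remaining attribute improves the current subset
            selected.append(best)
            remaining.remove(best)
        return selected

    # Toy usage: "relevance" of a subset is the sum of made-up attribute weights.
    weights = {"income": 0.9, "age": 0.6, "zip_code": 0.05}
    print(forward_selection(weights, lambda s: sum(weights[a] for a in s), k=2))
    # ['income', 'age']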

Dimensionality Reduction Apply data encoding or transformations to obtain a reduced or compressed representation of the original data. Lossless: the original data can be reconstructed from the compressed data without any loss of information. Lossy: only an approximation of the original data can be reconstructed.

Two popular lossy dimensionality reduction methods: wavelet transforms (the discrete wavelet transform, DWT) and principal components analysis (PCA).
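A minimal PCA sketch via the singular value decomposition on invented data, keeping two principal components as the reduced (lossy) representation:

    import numpy as np

    # Hypothetical 5-dimensional data set of 100 tuples.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))

    X_centered = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

    X_reduced = X_centered @ Vt[:2].T                 # 100 x 2 instead of 100 x 5
    X_approx = X_reduced @ Vt[:2] + X.mean(axis=0)    # approximate reconstruction
    print(X_reduced.shape, float(np.mean((X - X_approx) ** 2)))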

NUMEROSITY REDUCTION Can we reduce the data volume by choosing alternative, smaller forms of data representation?

Numerosity reduction techniques Parametric methods: a model is used to estimate the data, so that only the model parameters need be stored instead of the actual data (outliers may also be stored); for example, log-linear models estimate discrete multidimensional probability distributions. Nonparametric methods: store reduced representations of the data, such as histograms, clustering, and sampling.

Regression and Log-Linear Models [1] Used to approximate the given data. Linear regression: the data are modeled to fit a straight line. Multiple linear regression: the data are modeled as a linear function of two or more predictor variables.

Regression and Log-Linear Models [2] Log-linear models: used to estimate the probability of each point in a multidimensional space for a set of discretized attributes, based on a smaller subset of dimensional combinations. They allow a higher-dimensional data space to be constructed from lower-dimensional spaces, and are useful for dimensionality reduction and data smoothing.

Histograms Use binning to approximate data distributions. A histogram for an attribute A partitions the data distribution of A into disjoint subsets, or buckets. If each bucket represents only a single attribute-value/frequency pair, the buckets are called singleton buckets. Often, buckets instead represent continuous ranges for the given attribute.

Example of histograms A list of prices of commonly sold items at AllElectronics (rounded to the nearest dollar), sorted: 1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15, 15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18, 20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28, 30, 30, 30.
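A short sketch building both kinds of buckets from the price list above: singleton buckets (one value/frequency pair per distinct price) and equal-width buckets of width 10:

    from collections import Counter

    prices = [1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15,
              15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18, 20, 20, 20, 20, 20,
              20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28, 30, 30, 30]

    # Singleton buckets: one (value, frequency) pair per distinct price.
    singleton = Counter(prices)
    print(singleton.most_common(3))  # [(18, 8), (20, 7), (15, 6)]

    # Equal-width buckets of width 10: each bucket covers a continuous price range.
    width = 10
    equal_width = Counter((p - 1) // width for p in prices)
    for b in sorted(equal_width):
        print(f"{b * width + 1}-{(b + 1) * width}: {equal_width[b]} items")
    # 1-10: 13 items, 11-20: 25 items, 21-30: 14 items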

Clustering Clustering partitions the objects into clusters so that objects within a cluster are similar to one another and dissimilar to objects in other clusters. The quality of a cluster may be represented by its diameter (the maximum distance between any two objects in the cluster) or by its centroid distance (the average distance of each cluster object from the cluster centroid). In clustering-based data reduction, the cluster representations of the data are used to replace the actual data.
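A small sketch computing both quality measures for a hypothetical 2-D cluster (the points are invented):

    import math

    # Hypothetical 2-D cluster of points.
    cluster = [(1.0, 2.0), (1.5, 1.8), (1.2, 2.4), (0.8, 2.1)]
    centroid = tuple(sum(c) / len(cluster) for c in zip(*cluster))

    # Diameter: maximum distance between any two objects in the cluster.
    diameter = max(math.dist(p, q) for p in cluster for q in cluster)
    # Centroid distance: average distance of each object from the centroid.
    centroid_distance = sum(math.dist(p, centroid) for p in cluster) / len(cluster)
    print(centroid, round(diameter, 3), round(centroid_distance, 3))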

Sampling Sampling allows a large data set to be represented by a much smaller random sample (or subset) of the data: simple random sample without replacement (SRSWOR) of size s, simple random sample with replacement (SRSWR) of size s, cluster sample, and stratified sample.
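A minimal sketch of the two simple random sampling schemes on an invented data set:

    import random

    data = list(range(1, 101))  # hypothetical data set of 100 tuples
    s = 10                      # sample size

    srswor = random.sample(data, s)                  # without replacement: no repeats
    srswr = [random.choice(data) for _ in range(s)]  # with replacement: repeats possible
    print(srswor)
    print(srswr)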


DATA DISCRETIZATION AND CONCEPT HIERARCHY GENERATION

Data discretization techniques Used to reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals. Interval labels can then be used to replace actual data values. If the discretization process uses class information, we say it is supervised discretization; otherwise, it is unsupervised.

Top-down discretization (splitting) Finds one or a few points (called split points or cut points) to split the entire attribute range, and then repeats this recursively on the resulting intervals.

Bottom-up discretization (merging) Starts by considering all of the continuous values as potential split points, removes some by merging neighboring values to form intervals, and then recursively applies this process to the resulting intervals.

Concept hierarchies Used to reduce the data by collecting and replacing low-level concepts (such as numerical values for the attribute age) with higher-level concepts (such as youth, middle-aged, or senior). Although detail is lost by such data generalization, the generalized data may be more meaningful and easier to interpret.


Discretization and Concept Hierarchy Generation Techniques for Numerical Data Binning, histogram analysis, entropy-based discretization, χ2 merging, cluster analysis, and discretization by intuitive partitioning.

Entropy-Based Discretization [1] A supervised, top-down splitting technique. It explores class distribution information in its calculation and determination of split points (data values for partitioning an attribute range). To discretize a numerical attribute A, the method selects the value of A that has the minimum entropy as a split point, and recursively partitions the resulting intervals to arrive at a hierarchical discretization.

Basic entropy-based discretization [1] Let D consist of data tuples defined by a set of attributes and a class-label attribute. The class-label attribute provides the class information per tuple. Each value of A can be considered as a split point to partition the range of A: a split point for A partitions the tuples in D into two subsets satisfying the conditions A ≤ split_point and A > split_point.

Basic entropy-based discretization [2] How much more information would we still need for a perfect classification after this partitioning? This amount is called the expected information requirement for classifying a tuple in D based on partitioning by A: Info_A(D) = (|D1| / |D|) Entropy(D1) + (|D2| / |D|) Entropy(D2), where D1 and D2 correspond to the tuples in D satisfying the conditions A ≤ split_point and A > split_point, respectively, and |D| is the number of tuples in D.

Entropy function The entropy is calculated based on the class distribution of the tuples in the set. Given m classes C1, C2, ..., Cm, the entropy of D1 is Entropy(D1) = −Σ_{i=1..m} p_i log2(p_i), where p_i is the probability of class Ci in D1, determined by dividing the number of tuples of class Ci in D1 by |D1|, the total number of tuples in D1.

Basic entropy-based discretization [3] When selecting a split point for attribute A, we want to pick the attribute value that gives the minimum expected information requirement, i.e. min(Info_A(D)). This results in the minimum amount of expected information (still) required to perfectly classify the tuples after partitioning by A ≤ split_point and A > split_point.

Basic entropy-based discretization [4] The process of determining a split point is recursively applied to each partition obtained, until some stopping criterion is met, such as when the minimum information requirement on all candidate split points is less than a small threshold ε, or when the number of intervals is greater than a threshold max_interval.
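A minimal sketch of the split-point selection described above, on an invented attribute with two classes: compute Entropy(D1) and Entropy(D2) for every candidate split and keep the split with the smallest Info_A(D):

    import math
    from collections import Counter

    def entropy(labels):
        """Entropy of a class-label list: -sum(p_i * log2(p_i))."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def best_split_point(values, labels):
        """Return the value of A minimizing Info_A(D), and that minimum."""
        n = len(values)
        best, best_info = None, float("inf")
        for split in sorted(set(values)):
            left = [l for v, l in zip(values, labels) if v <= split]
            right = [l for v, l in zip(values, labels) if v > split]
            if not left or not right:
                continue
            info = len(left) / n * entropy(left) + len(right) / n * entropy(right)
            if info < best_info:
                best_info, best = info, split
        return best, best_info

    # Hypothetical attribute values and class labels.
    values = [21, 25, 30, 42, 47, 55]
    labels = ["young", "young", "young", "old", "old", "old"]
    print(best_split_point(values, labels))  # (30, 0.0): a perfect split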

Interval Merging by χ2 Analysis ChiMerge is a χ2-based discretization method. Unlike the top-down, splitting methods above, it uses a bottom-up, merging strategy. It is supervised in that it uses class information: for accurate discretization, the relative class frequencies should be fairly consistent within an interval; if two adjacent intervals have a very similar distribution of classes, then the intervals can be merged. Otherwise, they should remain separate.

ChiMerge procedure Initially, each distinct value of a numerical attribute A is considered to be one interval. χ2 tests are performed for every pair of adjacent intervals. Adjacent intervals with the least χ2 values are merged together, because low χ2 values for a pair indicate similar class distributions. This merging process proceeds recursively until a predefined stopping criterion is met.

The χ2 value (the Pearson χ2 statistic) χ2 = Σ_i Σ_j (o_ij − e_ij)^2 / e_ij, where o_ij is the observed frequency of the joint event (A_i, B_j) and e_ij is the expected frequency of (A_i, B_j), which can be computed as e_ij = count(A = a_i) × count(B = b_j) / N, where N is the number of data tuples, count(A = a_i) is the number of tuples having value a_i for A, and count(B = b_j) is the number of tuples having value b_j for B.
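A short sketch of the statistic for a 2 x k contingency table (two adjacent intervals by k classes), with invented class counts; a low value means the two intervals have similar class distributions and would be merged:

    def chi_square(observed):
        """Pearson chi-square statistic for a table of observed frequencies."""
        rows = [sum(r) for r in observed]
        cols = [sum(c) for c in zip(*observed)]
        n = sum(rows)
        chi2 = 0.0
        for i, row in enumerate(observed):
            for j, o_ij in enumerate(row):
                e_ij = rows[i] * cols[j] / n   # expected frequency of (A_i, B_j)
                if e_ij > 0:
                    chi2 += (o_ij - e_ij) ** 2 / e_ij
        return chi2

    # Two adjacent intervals with similar class counts -> low chi-square value.
    print(round(chi_square([[10, 2], [9, 3]]), 3))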

Cluster Analysis [1] A clustering algorithm can be applied to discretize a numerical attribute A by partitioning the values of A into clusters or groups. Clustering takes the distribution of A into consideration, as well as the closeness of data points, and is therefore able to produce high-quality discretization results.

Cluster Analysis [2] Clustering can be used to generate a concept hierarchy for A by following either a top-down splitting strategy or a bottom-up merging strategy, where each cluster forms a node of the concept hierarchy. In the top-down case, each initial cluster or partition may be further decomposed into several subclusters, forming a lower level of the hierarchy. In the bottom-up case, clusters are formed by repeatedly grouping neighboring clusters in order to form higher-level concepts.

Concept Hierarchy Generation for Categorical Data Categorical data are discrete data. Categorical attributes have a finite (but possibly large) number of distinct values, with no ordering among the values. Examples include geographic location, job category, and item type.

Specification of a partial ordering of attributes explicitly at the schema level by users or experts Concept hierarchies for categorical attributes typically involve a group of attributes. A user or expert can easily define a concept hierarchy by specifying a partial or total ordering of the attributes at the schema level. o For example, a relational database may contain the attributes: street, city, province, and country. A hierarchy can be defined by specifying the total ordering among these attributes at the schema level, such as street < city < province < country.

Specification of a portion of a hierarchy by explicit data grouping A user can specify explicit groupings for a small portion of intermediate-level data. For example, after specifying that province and country form a hierarchy at the schema level, a user could define some intermediate levels manually, such as o {Alberta, Saskatchewan, Manitoba} ⊂ prairies_Canada and o {British Columbia, prairies_Canada} ⊂ Western_Canada.

Specification of a set of attributes, but not of their partial ordering Without knowledge of data semantics, how can a hierarchical ordering for an arbitrary set of categorical attributes be found? A concept hierarchy can be automatically generated based on the number of distinct values per attribute in the given attribute set. The attribute with the most distinct values is placed at the lowest level of the hierarchy. The lower the number of distinct values an attribute has, the higher it is in the generated concept hierarchy.

Example (figure omitted).
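As an illustration, a minimal sketch of this heuristic on an invented location table: count the distinct values per attribute and order the attributes accordingly:

    # Hypothetical location records.
    records = [
        {"country": "Canada", "province": "BC", "city": "Vancouver", "street": "Main St"},
        {"country": "Canada", "province": "BC", "city": "Vancouver", "street": "Oak St"},
        {"country": "Canada", "province": "BC", "city": "Victoria", "street": "Fort St"},
        {"country": "Canada", "province": "AB", "city": "Calgary", "street": "4th Ave"},
        {"country": "USA", "province": "NY", "city": "New York", "street": "5th Ave"},
    ]

    attrs = ["street", "city", "province", "country"]
    distinct = {a: len({r[a] for r in records}) for a in attrs}

    # Fewest distinct values -> highest level; most distinct -> lowest level.
    hierarchy = sorted(attrs, key=lambda a: distinct[a])
    print(" < ".join(reversed(hierarchy)))  # street < city < province < country
    print(distinct)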

Specification of only a partial set of attributes The user may have included only a small subset of the relevant attributes in the hierarchy specification. For example, instead of including all of the hierarchically relevant attributes for location, the user may have specified only street and city.

Summary [1] Data cleaning routines attempt to fill in missing values, smooth out noise while identifying outliers, and correct inconsistencies in the data. Data integration combines data from multiple sources to form a coherent data store. Metadata, correlation analysis, data conflict detection, and the resolution of semantic heterogeneity contribute toward smooth data integration.

Summary [2] Data transformation routines convert the data into appropriate forms for mining. For example, attribute data may be normalized so as to fall within a small range, such as 0.0 to 1.0. Data reduction techniques such as data cube aggregation, attribute subset selection, dimensionality reduction, numerosity reduction, and discretization can be used to obtain a reduced representation of the data while minimizing the loss of information content.

Summary [3] Data discretization and automatic generation of concept hierarchies for numerical data can involve techniques such as binning, histogram analysis, entropy-based discretization, χ2 analysis, cluster analysis, and discretization by intuitive partitioning. For categorical data, concept hierarchies may be generated based on the number of distinct values of the attributes defining the hierarchy.

Summary [4] Although numerous methods of data preprocessing have been developed, data preprocessing remains an active area of research, due to the huge amount of inconsistent or dirty data and the complexity of the problem.

Q&A