Preprocessing Short Lecture Notes cse352. Professor Anita Wasilewska


Data Preprocessing
- Why preprocess the data?
- Data cleaning
- Data integration and transformation
- Data reduction
- Discretization and concept hierarchy generation
- Summary

TYPES OF DATA (1)
Generally we distinguish:
- Quantitative data
- Qualitative data
- Bivalued data: attributes have only two values (often very useful)
Remember: a null value means "not applicable"; missing data is usually not acceptable.

Why Data Preprocessing?
Data in the real world is dirty:
- incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
- noisy: containing errors or outliers
- inconsistent: containing discrepancies in codes or names

Why Data Preprocessing? No quality data, no quality results!

Data Quality
A well-accepted multidimensional view of data quality:
- Accuracy
- Completeness
- Consistency
- Timeliness
- Believability
- Interpretability
- Accessibility

Major Tasks in Data Preprocessing
- Data cleaning: fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
- Data integration (if needed): integration of multiple databases, data cubes, or files
- Data transformation: normalization and aggregation

Major Tasks in Data Preprocessing (continued)
- Data reduction: obtains a reduced representation, smaller in volume, that produces the same or similar analytical results
- Data discretization: of particular importance for numerical data; reduces the number of values of attributes, often transforming quantitative data into qualitative data
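The task lists above name normalization without giving a formula. Below is a minimal sketch of min-max normalization to [0, 1], a standard transformation choice (the formula is common practice, not taken from these notes):

def min_max_normalize(values, new_min=0.0, new_max=1.0):
    """Rescale values linearly into [new_min, new_max] (standard min-max normalization)."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant attribute: map everything to new_min
        return [new_min] * len(values)
    return [new_min + (v - lo) * (new_max - new_min) / (hi - lo) for v in values]

print(min_max_normalize([4, 8, 9, 15, 21, 34]))  # 4 -> 0.0, 34 -> 1.0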

Forms of data preprocessing

Data Cleaning
Data cleaning tasks:
- Fill in missing values
- Identify outliers and smooth out noisy data
- Correct inconsistent data

Missing Data
Data is not always available. Missing data may be due to:
- equipment malfunction
- data deleted because it was inconsistent with other recorded data
- data not entered due to misunderstanding
- certain data not being considered important at the time of entry
- history or changes of the data not being registered
Missing data may need to be inferred.

How to Handle Missing Data?
- Ignore the tuple (record): usually done when the class label (the value of the classification attribute) is missing (assuming a classification task). Not effective when the percentage of missing values per attribute varies considerably.
- Fill in the missing value manually: tedious and often infeasible.
- Fill in the missing value automatically (methods follow).

Fill in Missing Data
- Use a global constant to fill in the missing value (not efficient, often incorrect, or, in our case, not acceptable)
- Use the attribute mean to fill in the missing value
- For classification: use the attribute mean over all samples belonging to the same class
- Use the most probable value to fill in the missing value
- For classification: use the most probable value among all samples belonging to the same class
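Below is a minimal sketch of the two mean-based strategies, using plain Python lists with None marking missing entries (function names and data are illustrative):

def fill_with_mean(values):
    """Replace None entries by the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def fill_with_class_mean(values, labels):
    """Replace None entries by the mean of observed values in the same class."""
    class_means = {}
    for label in set(labels):
        observed = [v for v, c in zip(values, labels) if c == label and v is not None]
        class_means[label] = sum(observed) / len(observed)
    return [class_means[c] if v is None else v for v, c in zip(values, labels)]

print(fill_with_mean([4, None, 9, 15]))                            # mean of 4, 9, 15
print(fill_with_class_mean([4, None, 9, 15], ["a", "a", "b", "b"]))  # per-class means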

Noisy Data
Noise: random error or variance in a measured variable (numeric attribute value).
Incorrect attribute values may be due to:
- faulty data collection instruments
- data entry problems
- data transmission problems
- technology limitations
- inconsistency in naming conventions

Other Data Problems
Other data problems that require data cleaning:
- duplicate records
- incomplete data
- inconsistent data

How to Handle Noisy Data?
Binning method: first sort the data (the values of the attribute we consider) and partition them into (equal-depth) bins, then apply one of the methods:
- smooth by bin means (replace noisy values in the bin by the bin mean)
- smooth by bin medians (replace noisy values in the bin by the bin median)
- smooth by bin boundaries (replace noisy values in the bin by the bin boundaries)
The binning method is also used for data discretization.

How to Handle Noisy Data?
- Clustering: used to detect and remove outliers in the attribute values, as well as in the whole data set
- Combined computer and human inspection: detect suspicious attribute values, then have a human check them
- Regression: smooth by fitting the attribute values into regression functions

Simple Discretization Methods: Binning
Equal-width (distance) partitioning:
- Divides the range (the values of a given attribute) into N intervals of equal size: a uniform grid
- If A and B are the lowest and highest values of the attribute, the width of the intervals is W = (B - A) / N
- The most straightforward method, but outliers may dominate the presentation, and skewed data is not handled well

Simple Discretization Methods: Binning
Equal-depth (frequency) partitioning:
- Divides the range (the values of a given attribute) into N intervals, each containing approximately the same number of samples (elements)
- Good data scaling
- Works on numerical attributes; managing categorical attributes can be tricky
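A minimal sketch contrasting the two partitioning schemes on the price values used in the example below (function names are ours):

def equal_width_bins(values, n):
    """Partition values into n intervals of equal width W = (B - A) / n."""
    a, b = min(values), max(values)
    w = (b - a) / n
    bins = [[] for _ in range(n)]
    for v in sorted(values):
        i = min(int((v - a) / w), n - 1)  # clamp the maximum into the last bin
        bins[i].append(v)
    return bins

def equal_depth_bins(values, n):
    """Partition sorted values into n bins holding (approximately) the same count."""
    s = sorted(values)
    size = len(s) // n
    return [s[i * size:(i + 1) * size] for i in range(n - 1)] + [s[(n - 1) * size:]]

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
print(equal_width_bins(prices, 3))  # width (34-4)/3 = 10: bins of unequal counts
print(equal_depth_bins(prices, 3))  # 4 values per bin: matches Bins 1-3 below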

Binning Methods for Data Smoothing (simple example)
Sorted data (attribute values) for price (attribute: price in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
Partition into (equal-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
Smoothing replaces all values in a bin by ONE value.
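A minimal sketch reproducing this example; bin means are rounded to the integers shown above, and boundary smoothing snaps each value to the nearer of the bin's minimum and maximum (our reading of the slide's arithmetic):

bins = [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]

def smooth_by_means(bins):
    """Replace every value in a bin by the (rounded) bin mean."""
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    """Replace every value by the closer of the bin's two boundary values."""
    return [[b[0] if v - b[0] <= b[-1] - v else b[-1] for v in b] for b in bins]

print(smooth_by_means(bins))       # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(smooth_by_boundaries(bins))  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]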


Binning Methods for Data DISCRETIZATION
Sorted data (attribute values) for price (attribute: price in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
Partition into (equal-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
Smoothing by bin means replaces all values in a bin by ONE value:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
This CREATES 3 VALUES for the attribute: we use the bin means 9, 23, 29 when numerical values are needed.

Binning Methods for Data DISCRETIZATION
Smoothing by bin means replaces all values in a bin by ONE value:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
This CREATES 3 VALUES for the attribute. When we want more expressive descriptions, we create bin categories such as small, medium, large:
- We replace the numerical values 4, 8, 9, 15 in any record by the category small
- We replace the numerical values 21, 24, 25 in any record by the category medium
- We replace the numerical values 26, 28, 29, 34 in any record by the category large
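A minimal sketch of this category replacement; the bin ranges and labels follow the slide, the helper name is ours:

# Map each bin's [low, high] range to a categorical label.
CATEGORIES = [((4, 15), "small"), ((21, 25), "medium"), ((26, 34), "large")]

def to_category(value, categories=CATEGORIES):
    """Return the label of the bin whose [low, high] range contains value."""
    for (low, high), label in categories:
        if low <= value <= high:
            return label
    raise ValueError(f"{value} falls outside every bin")

print([to_category(v) for v in (4, 24, 29)])  # ['small', 'medium', 'large']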

Cluster Analysis
As a discretization method, cluster analysis performs clustering on the attribute values and replaces all values in a cluster by a cluster representative.

Data Integration
Data integration: combines data from multiple sources into a coherent store.
- Schema integration: integrate metadata from different sources
- Entity identification problem: identify real-world entities from multiple data sources, e.g., A.cust-id ≡ B.cust-#
- Detecting and resolving data value conflicts: for the same real-world entity, attribute values from different sources differ; possible reasons include different representations and different scales, e.g., metric vs. British units

Regression and Log-Linear Models
- Linear regression: data are modeled to fit a straight line; often uses the least-squares method to fit the line
- Multiple regression: allows a response variable Y to be modeled as a linear function of a multidimensional feature vector
- Log-linear model: approximates discrete multidimensional probability distributions

Linear Regression
Use regression analysis on the values of attributes to fill in missing values.
[Figure: data points (X1, Y1) fitted by the regression line y = x + 1.]

Regression and Log-Linear Models
Linear regression: Y = α + β X
- The two parameters, α and β, specify the line and are estimated from the data at hand, using the least-squares criterion on the known values Y1, Y2, ..., X1, X2, ...
Multiple regression: Y = b0 + b1 X1 + b2 X2 + ...
- Many nonlinear functions can be transformed into the above.
Log-linear models:
- The multi-way table of joint probabilities is approximated by a product of lower-order tables.
- Probability: p(a, b, c, d) = α_ab β_ac χ_ad δ_bcd
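A minimal sketch of estimating α and β by the least-squares criterion in closed form (the standard textbook estimates, not code from the notes); the test points lie on the figure's line y = x + 1:

def least_squares_line(xs, ys):
    """Estimate (alpha, beta) for Y = alpha + beta * X by least squares."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
            / sum((x - mean_x) ** 2 for x in xs))
    alpha = mean_y - beta * mean_x
    return alpha, beta

# Points lying on y = x + 1 recover alpha = 1, beta = 1 exactly.
alpha, beta = least_squares_line([1, 2, 3, 4], [2, 3, 4, 5])
print(alpha, beta)  # 1.0 1.0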

Histograms
- A popular data reduction technique
- Divide the data (or the values of a given attribute) into buckets and store the average (or sum) for each bucket
- Related to quantization problems
[Figure: histogram with counts (0-40) on the vertical axis over price buckets from $10,000 to $100,000.]
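A minimal sketch of histogram-based reduction: equal-width buckets that store only a count and an average, with the $10,000 bucket width suggested by the figure (data and structure are illustrative):

from collections import defaultdict

def histogram_reduce(values, width=10_000):
    """Reduce values to per-bucket (count, mean); each bucket covers `width` units."""
    buckets = defaultdict(list)
    for v in values:
        buckets[(v // width) * width].append(v)
    return {lo: (len(vs), sum(vs) / len(vs)) for lo, vs in sorted(buckets.items())}

prices = [12_500, 14_900, 23_000, 27_500, 31_000]
print(histogram_reduce(prices))
# {10000: (2, 13700.0), 20000: (2, 25250.0), 30000: (1, 31000.0)}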

Clustering
- Partition the data set (in preprocessing, the values of an attribute) into clusters, and store the cluster representation only, i.e., replace all values in a cluster by the one value representing that cluster
- We can also use hierarchical clustering, which can be stored in multi-dimensional index tree structures
- There are many choices of clustering definitions and clustering algorithms (Chapter 7)
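As one possible concrete choice (the notes defer clustering algorithms to Chapter 7), here is a minimal one-dimensional k-means sketch that replaces each value by its cluster representative; the initialization is a crude illustrative heuristic:

def kmeans_1d(values, k, iters=20):
    """Cluster 1-D values with k-means; return the list of centroids."""
    centroids = sorted(values)[::max(1, len(values) // k)][:k]  # crude spread-out seeds
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            nearest = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i] for i, c in enumerate(clusters)]
    return centroids

def replace_by_representative(values, centroids):
    """Data reduction: store each value as its nearest cluster representative."""
    return [min(centroids, key=lambda c: abs(v - c)) for v in values]

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
print(replace_by_representative(prices, kmeans_1d(prices, 3)))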

Sampling
- Sampling allows a learning algorithm to run in complexity that is potentially sub-linear in the size of the data
- Sampling is a method of choosing a representative subset of the data
- Simple random sampling may perform very poorly in the presence of skewed data; there are adaptive sampling methods
- Stratified sampling: approximate the percentage of each class (or subpopulation of interest) in the overall database; used in conjunction with skewed data

Sampling
[Figure: raw data sampled by SRSWOR (simple random sample without replacement) and SRSWR (simple random sample with replacement).]
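A minimal sketch of the three schemes using the standard library: random.sample draws without replacement (SRSWOR) and random.choices draws with replacement (SRSWR); the stratified helper and the skewed toy data are ours:

import random
from collections import defaultdict

data = list(range(100))
labels = ["a"] * 90 + ["b"] * 10            # skewed class distribution

print(sorted(random.sample(data, 10)))       # SRSWOR: no value drawn twice
print(sorted(random.choices(data, k=10)))    # SRSWR: duplicates possible

def stratified_sample(data, labels, fraction):
    """Draw `fraction` of each class, preserving the class percentages."""
    by_class = defaultdict(list)
    for x, c in zip(data, labels):
        by_class[c].append(x)
    sample = []
    for xs in by_class.values():
        sample += random.sample(xs, max(1, round(fraction * len(xs))))
    return sample

print(len(stratified_sample(data, labels, 0.1)))  # 10: 9 from class a, 1 from class b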

Discretization
Three types of attributes:
- Nominal: values from an unordered set
- Ordinal: values from an ordered set
- Continuous: real numbers
Discretization: divide the range of a continuous attribute into intervals.
- Some classification algorithms only accept categorical (non-numerical) attributes
- Reduce data (attribute values) size by discretization
- Prepare for further analysis

Discretization and Concept Hierarchies for Numerical Data
- Discretization reduces the number of values for a given continuous attribute by dividing the range of the attribute (the values of the attribute) into intervals. Interval labels are then used to replace actual data values.
- Concept hierarchies reduce the data by collecting and replacing low-level concepts (such as numeric values of the attribute age) by higher-level concepts (such as young, middle-aged, or senior), transforming numerical attributes into categorical ones.
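A minimal sketch of the age example: numeric values replaced by higher-level labels. The labels come from the notes; the cut points are illustrative assumptions:

# Hypothetical cut points for the age attribute; only the labels come from the notes.
AGE_HIERARCHY = [(0, 39, "young"), (40, 59, "middle-aged"), (60, 200, "senior")]

def age_concept(age):
    """Replace a numeric age by its higher-level concept label."""
    for low, high, label in AGE_HIERARCHY:
        if low <= age <= high:
            return label
    raise ValueError(f"age {age} out of range")

print([age_concept(a) for a in (23, 45, 71)])  # ['young', 'middle-aged', 'senior']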

Discretization and Concept Hierarchy Generation for Numeric Data
Discretization methods:
- Binning (see the slides before)
- Histogram analysis (see the slides before)
- Clustering analysis (see the slides before)
- Segmentation by natural partitioning

Segmentation by Natural Partitioning
The 3-4-5 rule can be used to segment numeric data (attribute values) into relatively uniform, natural intervals:
- If an interval covers 3, 6, 7, or 9 distinct values at the most significant digit, partition the range into 3 equi-width intervals
- If it covers 2, 4, or 8 distinct values at the most significant digit, partition the range into 4 intervals
- If it covers 1, 5, or 10 distinct values at the most significant digit, partition the range into 5 intervals

Example of the 3-4-5 Rule (profit attribute)
- Step 1: Min = -$351, Low (5%-tile) = -$159, High (95%-tile) = $1,838, Max = $4,700
- Step 2: msd = 1,000, so Low is rounded down to -$1,000 and High is rounded up to $2,000
- Step 3: the interval (-$1,000 ... $2,000) covers 3 distinct values at the msd, so it is partitioned into 3 equi-width intervals: (-$1,000 ... 0), (0 ... $1,000), ($1,000 ... $2,000)
- Step 4: adjusting for Min and Max gives the range (-$400 ... $5,000): the first interval shrinks to (-$400 ... 0) and the interval ($2,000 ... $5,000) is added
- Step 5: each interval is recursively partitioned:
  (-$400 ... 0) into 4: (-$400 ... -$300), (-$300 ... -$200), (-$200 ... -$100), (-$100 ... 0)
  (0 ... $1,000) into 5: (0 ... $200), ($200 ... $400), ($400 ... $600), ($600 ... $800), ($800 ... $1,000)
  ($1,000 ... $2,000) into 5: ($1,000 ... $1,200), ($1,200 ... $1,400), ($1,400 ... $1,600), ($1,600 ... $1,800), ($1,800 ... $2,000)
  ($2,000 ... $5,000) into 3: ($2,000 ... $3,000), ($3,000 ... $4,000), ($4,000 ... $5,000)
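A minimal sketch of the core of the rule, in the spirit of Steps 2-3 above: round Low/High at the most significant digit, count the distinct msd values covered, and choose 3, 4, or 5 equi-width partitions. This simplified version omits the recursion and the min/max adjustment of Steps 4-5:

import math

def three_four_five_partitions(low, high):
    """Return equi-width interval boundaries per the 3-4-5 rule (top level only)."""
    msd = 10 ** int(math.floor(math.log10(max(abs(low), abs(high)))))
    lo = math.floor(low / msd) * msd   # round low down at the msd
    hi = math.ceil(high / msd) * msd   # round high up at the msd
    distinct = round((hi - lo) / msd)  # distinct msd values covered
    if distinct in (3, 6, 7, 9):
        n = 3
    elif distinct in (2, 4, 8):
        n = 4
    else:                              # 1, 5, or 10 distinct values
        n = 5
    width = (hi - lo) / n
    return [lo + i * width for i in range(n + 1)]

# The profit example: Low = -$159, High = $1,838 -> msd = 1,000,
# rounded range (-$1,000 ... $2,000) covers 3 values -> 3 partitions.
print(three_four_five_partitions(-159, 1_838))  # [-1000.0, 0.0, 1000.0, 2000.0]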

Concept Hierarchy Generation for Categorical Data
A concept hierarchy is:
- a specification of a partial ordering of attributes explicitly at the schema level, by users or experts
- a specification of a portion of a hierarchy by explicit data grouping
- a specification of a set of attributes, but not of their partial ordering
- a specification of only a partial set of attributes

Specification of a Set of Attributes
A concept hierarchy can be automatically generated based on the number of distinct values per attribute in the given attribute set. The attribute with the most distinct values is placed at the lowest level of the hierarchy:
- country: 15 distinct values
- province_or_state: 65 distinct values
- city: 3,567 distinct values
- street: 674,339 distinct values
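A minimal sketch of this heuristic: order attributes by their distinct-value counts, fewest at the top of the hierarchy (the counts reproduce the slide; the dict stands in for counting over real data):

# Distinct-value counts per attribute, as on the slide.
distinct_counts = {"country": 15, "province_or_state": 65, "city": 3_567, "street": 674_339}

def generate_hierarchy(counts):
    """Order attributes from the top level (fewest distinct values) to the bottom (most)."""
    return sorted(counts, key=counts.get)

print(" < ".join(generate_hierarchy(distinct_counts)))
# country < province_or_state < city < street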

Summary
- Data preparation and preprocessing is a big issue for learning and data mining
- Data preprocessing includes data cleaning and data integration, data reduction and attribute selection, and discretization
- Many methods have been developed, but this is still an active area of research

Learning Process