Prototyping DM Techniques with WEKA and YALE Open-Source Software

TIES443 Tutorial 1
Department of Mathematical Information Technology, University of Jyväskylä
Mykola Pechenizkiy
Course webpage: http://www.cs.jyu.fi/~mpechen/ties443
November 7, 2006

Contents
- Brief review of DM software, commercial and open-source:
  WEKA: http://www.cs.waikato.ac.nz/~ml/weka/index.html
  YALE: http://rapid-i.com/
  The R Project for Statistical Computing: http://www.r-project.org/
  Pentaho (whole BI solutions): http://www.pentaho.com/
  Matlab: Sami will tell you more during the 2nd tutorial
- WEKA vs. YALE comparison: exploration, experimentation, visualization
- 1st assignment: http://www.cs.jyu.fi/~mpechen/ties443/tutorials/assignment1.pdf

Data Mining Software
There are many providers of commercial DM software: SAS Enterprise Miner, SPSS Clementine, Statistica Data Miner, MS SQL Server, PolyAnalyst, KnowledgeSTUDIO, IBM Intelligent Miner. Universities can now receive free copies of DB2 and Intelligent Miner for educational or research purposes. See http://www.kdnuggets.com/software/suites.html for a list.
Open source: WEKA (Waikato Environment for Knowledge Analysis), YALE (Yet Another Learning Environment), and many others: MLC++, Minitab, AlphaMiner, Rattle, KNIME. The Pentaho BI project is a pioneering initiative by the open-source development community to provide organizations with a comprehensive set of BI capabilities that enable them to radically improve business performance, efficiency, and effectiveness.

Data Mining with WEKA
The following slides are from http://prdownloads.sourceforge.net/weka/weka.ppt by Eibe Frank. Copyright: Martin Kramer (mkramer@wxs.nl).

WEKA: the software
Machine learning / data mining software written in Java (distributed under the GNU General Public License). It is used for research, education, and applications, and complements the Data Mining book by Witten & Frank: http://www.cs.waikato.ac.nz/~ml/weka/book.html
Main features:
- Comprehensive set of data pre-processing tools, learning algorithms, and evaluation methods
- Graphical user interfaces (incl. data visualization)
- Environment for comparing learning algorithms

WEKA only deals with flat files
@relation heart-disease-simplified
@attribute age numeric
@attribute sex { female, male}
@attribute chest_pain_type { typ_angina, asympt, non_anginal, atyp_angina}
@attribute cholesterol numeric
@attribute exercise_induced_angina { no, yes}
@attribute class { present, not_present}
@data
63,male,typ_angina,233,no,not_present
67,male,asympt,286,yes,present
67,male,asympt,229,yes,present
38,female,non_anginal,?,no,not_present
...
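For instance, such an ARFF file can be loaded through WEKA's Java API. A minimal sketch (the file name heart.arff is an assumption for illustration):

import java.io.BufferedReader;
import java.io.FileReader;
import weka.core.Instances;

public class LoadArff {
    public static void main(String[] args) throws Exception {
        // Read the flat ARFF file into WEKA's in-memory dataset representation.
        BufferedReader reader = new BufferedReader(new FileReader("heart.arff"));
        Instances data = new Instances(reader);
        reader.close();
        // By convention, the last attribute is treated as the class attribute.
        data.setClassIndex(data.numAttributes() - 1);
        System.out.println("Loaded " + data.numInstances() + " instances, "
                + data.numAttributes() + " attributes.");
    }
}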

Command line tutorial
A primer on using WEKA from the command line is available at http://weka.sourceforge.net/wekadoc/index.php/en%3aprimer

Explorer: Pre-processing the Data
- Data can be imported from a file in various formats: ARFF, CSV, C4.5, binary
- Data can also be read from a URL or from an SQL database (using JDBC)
- Pre-processing tools in WEKA are called "filters"
- WEKA contains filters for discretization, normalization, resampling, attribute selection, transforming and combining attributes, ...
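As a minimal sketch of applying such a filter programmatically (the file name and the choice of the unsupervised Discretize filter are assumptions for illustration):

import java.io.BufferedReader;
import java.io.FileReader;
import weka.core.Instances;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Discretize;

public class ApplyFilter {
    public static void main(String[] args) throws Exception {
        Instances data = new Instances(
                new BufferedReader(new FileReader("heart.arff")));
        // Discretize all numeric attributes into bins (default settings).
        Discretize discretize = new Discretize();
        discretize.setInputFormat(data);   // must be called before filtering
        Instances discretized = Filter.useFilter(data, discretize);
        System.out.println(discretized.numAttributes() + " attributes after filtering.");
    }
}

Filters can also be run from the command line in the same spirit, e.g. java weka.filters.unsupervised.attribute.Discretize -i heart.arff -o heart-disc.arff.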

Explorer: building classifiers
- Classifiers in WEKA are models for predicting nominal or numeric quantities
- Implemented learning schemes include: decision trees and lists, instance-based classifiers, support vector machines, multi-layer perceptrons, logistic regression, Bayes nets, ...
- Meta-classifiers include: bagging, boosting, stacking, error-correcting output codes, locally weighted learning, ...
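To make this concrete, here is a minimal sketch of training the J48 decision tree learner and estimating its accuracy with 10-fold cross-validation (the file name and random seed are assumptions):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;

public class BuildClassifier {
    public static void main(String[] args) throws Exception {
        Instances data = new Instances(
                new BufferedReader(new FileReader("heart.arff")));
        data.setClassIndex(data.numAttributes() - 1);

        J48 tree = new J48();                         // C4.5-style decision tree
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());   // accuracy, kappa, error rates, ...

        tree.buildClassifier(data);                   // final model on the full data set
        System.out.println(tree);                     // textual form of the tree
    }
}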

Explorer: clustering data
- WEKA contains clusterers for finding groups of similar instances in a dataset
- Implemented schemes are: k-means, EM, Cobweb, X-means, FarthestFirst
- Clusters can be visualized and compared to "true" clusters (if given)
- Evaluation is based on log-likelihood if the clustering scheme produces a probability distribution
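A minimal sketch of running k-means outside the GUI; the class attribute is removed first, since clustering works on unlabeled data (the file name and cluster count are assumptions):

import java.io.BufferedReader;
import java.io.FileReader;
import weka.clusterers.SimpleKMeans;
import weka.core.Instances;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;

public class ClusterData {
    public static void main(String[] args) throws Exception {
        Instances data = new Instances(
                new BufferedReader(new FileReader("heart.arff")));

        // Drop the class attribute (assumed to be the last one) before clustering.
        Remove remove = new Remove();
        remove.setAttributeIndices("last");
        remove.setInputFormat(data);
        Instances input = Filter.useFilter(data, remove);

        SimpleKMeans kMeans = new SimpleKMeans();
        kMeans.setNumClusters(3);       // illustrative choice
        kMeans.buildClusterer(input);
        System.out.println(kMeans);     // centroids and cluster sizes
    }
}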

Explorer: finding associations
- WEKA contains an implementation of the Apriori algorithm for learning association rules
- It works only with discrete data
- It can identify statistical dependencies between groups of attributes, e.g. milk, butter => bread, eggs (with confidence 0.9 and support 2000)
- Apriori can compute all rules that have a given minimum support and exceed a given confidence
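A minimal sketch of running Apriori programmatically; the file name is an assumption, and the data set must contain only nominal attributes:

import java.io.BufferedReader;
import java.io.FileReader;
import weka.associations.Apriori;
import weka.core.Instances;

public class FindAssociations {
    public static void main(String[] args) throws Exception {
        // Assumed basket-style data set with nominal attributes only.
        Instances data = new Instances(
                new BufferedReader(new FileReader("market-basket.arff")));

        Apriori apriori = new Apriori();
        apriori.setNumRules(10);         // report the 10 best rules (illustrative)
        apriori.buildAssociations(data);
        System.out.println(apriori);     // the discovered rules with support/confidence
    }
}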

Explorer: attribute selection
- A panel that can be used to investigate which (subsets of) attributes are the most predictive ones
- Attribute selection methods consist of two parts:
  a search method: best-first, forward selection, random, exhaustive, genetic algorithm, ranking
  an evaluation method: correlation-based, wrapper, information gain, chi-squared, ...
- Very flexible: WEKA allows (almost) arbitrary combinations of these two
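For example, one such combination, correlation-based subset evaluation with best-first search, can be run programmatically. A minimal sketch (file name assumed):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Arrays;
import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.BestFirst;
import weka.attributeSelection.CfsSubsetEval;
import weka.core.Instances;

public class SelectAttributes {
    public static void main(String[] args) throws Exception {
        Instances data = new Instances(
                new BufferedReader(new FileReader("heart.arff")));
        data.setClassIndex(data.numAttributes() - 1);

        AttributeSelection selector = new AttributeSelection();
        selector.setEvaluator(new CfsSubsetEval());  // evaluation method
        selector.setSearch(new BestFirst());         // search method
        selector.SelectAttributes(data);             // note the capital 'S' in this API
        System.out.println("Selected attribute indices: "
                + Arrays.toString(selector.selectedAttributes()));
    }
}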

Explorer: Data Visualization
- Visualization is very useful in practice: e.g., it helps to determine the difficulty of the learning problem
- WEKA can visualize single attributes (1-d) and pairs of attributes (2-d)
- To do: rotating 3-d visualizations (Xgobi-style)
- Color-coded class values
- "Jitter" option to deal with nominal attributes (and to detect "hidden" data points)
- "Zoom-in" function

Performing Experiments
- The Experimenter makes it easy to compare the performance of different learning schemes
- For classification and regression problems
- Results can be written into a file or a database
- Evaluation options: cross-validation, learning curve, hold-out
- Can also iterate over different parameter settings
- Significance testing is built in!
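The Experimenter itself is GUI-driven, but the core of such a comparison can be approximated in code. A hedged sketch that cross-validates two schemes on identical folds (the Experimenter additionally applies significance tests such as the corrected paired t-test):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.trees.J48;
import weka.core.Instances;

public class CompareSchemes {
    public static void main(String[] args) throws Exception {
        Instances data = new Instances(
                new BufferedReader(new FileReader("heart.arff")));
        data.setClassIndex(data.numAttributes() - 1);

        Classifier[] schemes = { new J48(), new NaiveBayes() };
        for (Classifier scheme : schemes) {
            Evaluation eval = new Evaluation(data);
            // Same seed for both schemes => identical cross-validation folds.
            eval.crossValidateModel(scheme, data, 10, new Random(1));
            System.out.println(scheme.getClass().getSimpleName()
                    + ": " + eval.pctCorrect() + "% correct");
        }
    }
}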

The Knowledge Flow GUI
- New graphical user interface for WEKA
- Java-Beans-based interface for setting up and running machine learning experiments
- Data sources, classifiers, etc. are beans and can be connected graphically
- Data "flows" through components: e.g., data source -> filter -> classifier -> evaluator
- Layouts can be saved and loaded again later
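The same flow idea is available as a programmatic building block: WEKA's FilteredClassifier meta-scheme chains a filter and a classifier so that the filter is re-applied to each training fold. A minimal sketch (the file name and the Discretize + J48 pairing are illustrative):

import java.io.BufferedReader;
import java.io.FileReader;
import weka.classifiers.meta.FilteredClassifier;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.filters.unsupervised.attribute.Discretize;

public class FlowPipeline {
    public static void main(String[] args) throws Exception {
        Instances data = new Instances(
                new BufferedReader(new FileReader("heart.arff")));
        data.setClassIndex(data.numAttributes() - 1);

        // data source -> filter -> classifier, as one composite scheme
        FilteredClassifier pipeline = new FilteredClassifier();
        pipeline.setFilter(new Discretize());
        pipeline.setClassifier(new J48());
        pipeline.buildClassifier(data);
        System.out.println(pipeline);
    }
}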

Conclusion: try it yourself!
- WEKA is available at http://www.cs.waikato.ac.nz/ml/weka
- The site also has a list of projects based on WEKA
- YALE has different interfaces and ideas behind it, but it also integrates all the DM techniques available from WEKA

Data Mining with YALE
The following slides are compiled from screenshots and related descriptions available from the YALE pages: http://rapid-i.com/
YALE (Yet Another Learning Environment) is developed by the Artificial Intelligence Unit of the University of Dortmund.

Features of YALE
- Freely available open-source knowledge discovery environment
- 100% pure Java (runs on every major platform and operating system)
- KD processes are modeled as simple operator trees, which is both intuitive and powerful
- Operator trees or subtrees can be saved as building blocks for later re-use
- The internal XML representation ensures a standardized interchange format for data mining experiments
- A simple scripting language allows for automatic large-scale experiments
- A multi-layered data view concept ensures efficient and transparent data handling

Flexibility in using YALE:
- Graphical user interface (GUI) for interactive prototyping
- Command-line mode (batch mode) for automated large-scale applications
- Java API to ease usage of YALE from your own programs
- Simple plugin and extension mechanisms; some plugins already exist and you can easily add your own
- A powerful plotting facility offering a large set of sophisticated high-dimensional visualization techniques for data and models
- More than 350 machine learning, evaluation, input/output, pre- and post-processing, and visualization operators, plus numerous meta-optimization schemes
- The machine learning library WEKA is fully integrated
YALE's potential applications include text mining, multimedia mining, feature engineering, data stream mining and tracking drifting concepts, development of ensemble methods, and distributed data mining.

Experiment Setup
The initial operator tree consists only of a root node. The "Tree View" tab is the most often used editor for YALE experiments: on the left, the current operator tree; on the right, a table with the parameters of the currently selected operator. The lower part of the YALE main frame serves for displaying and viewing log and error messages.
After the learning operator "J48", a breakpoint indicates that the intermediate results can be inspected. Due to the modular concept of YALE, it is always possible to inspect and save intermediate results, e.g. the results of each individual run in a cross-validation.
New operators can be added to the experiment directly from the context menu of the parent operator, or through the new-operator dialog shown in this screenshot. Several search constraints exist, and a short description of each operator is shown.

The operator trees are coded and represented in a simple XML format. The XML editor tab allows for fast and direct manipulation of the current experiment. All views can also be printed and exported to a wide range of graphic formats, including jpg, png, ps, and pdf.
The "Box View" is another viewer for YALE experiments: the box format is an intuitive way of representing the nesting of the operators, but editing is not possible there.
The "Monitor" tab provides an overview of the currently used memory and is an important tool for large-scale data mining tasks on huge data sets. The amount of memory used during an experiment run can even be logged in the same way as all other provided logging values.
Data can be imported from several file formats with the attribute editor. Other file formats like ARFF, C4.5, CSV, and dBase can be loaded with specialized operators. The Attribute Editor can be used to create meta-data descriptions from almost arbitrary file formats; these meta-data descriptions can then be used by an input operator which actually loads the data.
Additional attributes (features) can easily be constructed from your data. YALE provides several approaches to construct the best feature space automatically, ranging from feature space transformations like PCA, GHA, ICA (or their kernel versions) to standard feature selection techniques and several evolutionary approaches for feature construction and extraction.
Help features ease the learning phase for new users: an online tutorial, tool-tip texts, a beginner and an expert mode, operator info screens, a GUI manual, and the YALE tutorial.
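To give a feel for the XML representation mentioned above, here is a hypothetical sketch of an operator tree for a cross-validated J48 run; the element and attribute names are assumptions for illustration, not copied from any specific YALE release:

<!-- Hypothetical sketch of a YALE operator-tree XML; names are illustrative. -->
<operator name="Root" class="Experiment">
  <operator name="Input" class="ExampleSource">
    <!-- assumed data description file created with the Attribute Editor -->
    <parameter key="attributes" value="heart.aml"/>
  </operator>
  <operator name="XVal" class="XValidation">
    <parameter key="number_of_validations" value="10"/>
    <operator name="Learner" class="J48"/>
    <operator name="ApplierChain" class="OperatorChain">
      <operator name="Applier" class="ModelApplier"/>
      <operator name="Evaluator" class="PerformanceEvaluator"/>
    </operator>
  </operator>
</operator>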

Data Visualization
Each time a data set is presented in the results tab (e.g. after loading it), several views appear: a meta-data view describing all attributes, a data view showing the actual data, and a plot view providing a large set of (high-dimensional) plotters for the data set at hand.
The basic scatter plotter: two of the attributes are used as axes, and the class label attribute is used for colorization. The legend at the top maps the colors used to the classes or, in the case of a real-valued color column, to the corresponding real values.
The standard scatter plotter also allows jittering, zooming, and displaying example ids. Double-clicking a data point opens a visualizer; the standard example visualizer is presented here.
2D scatter plots can be put together into a scatter plot matrix, where a usual scatter plot is drawn for every pair of dimensions. This plotter is only available for fewer than 10 dimensions; for a higher number of dimensions, one of the other high-dimensional data plotters presented below should be used.
A 3D scatter plot exists similar to the colorized 2D scatter plot discussed above. The viewport can be rotated and zoomed to fit your needs. The built-in 2D and 3D plotters are a quick and easy way to view your numerical and nominal results, even as an online plot at experiment runtime!
The SOM (Self-Organizing Map) plotter uses a Kohonen net for dimensionality reduction. Plotting of the U-, the P-, and the U*-Matrix is supported with different color schemes. The data points can be colorized by one of the data columns, e.g. with the prediction label.

In this SOM (Self-Organizing Map) plot, a gray-scale color scheme was used to plot the U-Matrix.
The parallel plotter prints the axes of all dimensions parallel to each other. This is the natural visualization technique for series data but can also be useful for other types of data. The main advantage of parallel plots is that a very high number of dimensions can be visualized with this technique. The dimensions are colorized with the feature weights: the more yellow a dimension is marked, the more important that column is.
Quartile plots (also known as box plots) are often used for experiment results like performance values, but it is possible to summarize the statistical properties of data columns in general with this type of plot. Histogram plots (also known as distribution plots) are available as well.
RadViz is another high-dimensional data plotter, where the data columns are placed as radial dimension anchors. Each data point is connected to each anchor with a spring corresponding to the feature values, which leads to a fixed position in the two-dimensional plane. Again, weights are used to mark the more important columns.
A survey plot is a sort of vertical histogram matrix, also suitable for a large number of dimensions. Each line corresponds to one data point and can be colorized by one of the columns. The length of each section corresponds to the value of the data point in that dimension. For up to three dimensions, the order of the histograms can be selected.

Visualization of Models and other Results
Andrews curves are another way of visualizing high-dimensional data. Each data point is projected onto a set of orthogonal trigonometric functions and displayed as a curve. It is known that Andrews curves preserve distances, so they have many uses for data analysis and exploration; outliers and hidden patterns can often be detected well in these plots.
The result of a learning step is called a model. Some models provide a graphical representation of the learned hypothesis. This screenshot presents a learned decision tree for the widely known "labor negotiations" data set from the UCI repository. Results like learned models, performance values, data sets, or selected attributes are displayed when the experiment is completed or a breakpoint is reached.
In cases where no graphical representation of a learned model is available, at least a textual description of the learned model is presented. In this screenshot you see a Stacking model consisting of a rule model (the upper half) and a neural network (starting at the lower half); both base models are described by simple and understandable texts.
This is a density plot (similar to a contour plot) of the decision function of a Support Vector Machine (SVM). Almost all SVM implementations in YALE provide a table and a plot view of the learned model. In this screenshot, red points refer to support vectors and blue points to normal training examples. Bluish regions will be predicted negative; reddish regions will be predicted positive.
Here only the support vectors are shown, colorized by the predicted function value for the corresponding data point. Examples on the red side will be predicted positive; examples on the blue side will be predicted negative. There is a perfectly linear separation in two of the dimensions, and it seems that the parameters were not chosen optimally, since the number of support vectors is rather high.
The alpha values (Lagrange multipliers) of the SVM are plotted against the function values and colorized with the true label. We applied a slight jittering to make more points visible. This model seems to be "well-learned", since only a few points have an alpha value not equal to zero, and these are the points with function values of approximately 0.

This surface plot presents the result of a meta-optimization experiment in which the parameters of one of the operators are optimized. The plot can be rotated and zoomed.

WEKA & YALE Comparison
You tell me in your report. Now let's go through the first assignment.

1st Assignment
http://www.cs.jyu.fi/~mpechen/ties443/tutorials/assignment1.pdf
My advice is to come back to this assignment and to the WEKA and YALE tools after each forthcoming lecture, to see how things are implemented and can be used in practice.