TITANIC: Predicting Survival Using Classification Algorithms
Nicholas King, IE, May 2016
PROJECT OVERVIEW
- Historical Background
- Project Intent
- Data: Target and Feature Variables
- Initial Exploration & Feature Engineering
- SVM Models
- Logistic Regression Model
- Decision Tree Model
- Random Forest Model
- k-nearest Neighbors & Naive Bayes Models
- Summary: Important Variables and Future Steps
HISTORICAL BACKGROUND Over a century ago, on April 15, 1912, one of the greatest shipwrecks in history occurred. The RMS Titanic was the largest ship afloat and was billed as unsinkable. It sank on its maiden voyage to the Port of New York, off the coast of Canada in the icy waters of the Atlantic. Despite several reports that Captain Edward John Smith had been warned to avoid the area because of icebergs, the Titanic plowed ahead (some say at excessive speed) until it was too late and an iceberg dealt a glancing blow to the ship's hull. Shortly before midnight the ship began taking on water and sinking rapidly, and by 2:20 am, with hundreds of people still on board, it plunged beneath the waves. Despite repeated distress calls and flares, the first rescue ship, the RMS Carpathia, arrived nearly two hours later and rescued more than 700 survivors. A lack of lifeboats further contributed to the death toll: in the pandemonium and chaos several lifeboats were launched at only half capacity, while others simply floated away. Women and children were saved first, meaning the greatest number of deaths were male. Of the roughly 2,223 passengers and crew aboard the Titanic, about 1,500 died. The wreck descended two miles to the floor of the Atlantic and went undiscovered for decades, until ocean explorer Robert Ballard located it off the coast of Newfoundland in 1985.
PROJECT INTENT This study uses machine learning classification to analyze what kinds of people were more likely to survive the Titanic disaster. The classification methods evaluated were:
- Support Vector Machine (Linear and Radial kernels)
- Logistic Regression
- Decision Tree
- Random Forest
- k-nearest Neighbors
- Naive Bayes
THE DATA
Variable Descriptions:
- Survived: Survival (0 = No; 1 = Yes) (TARGET VARIABLE)
- Pclass: Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
- Name: Name
- Sex: Sex
- Age: Age
- SibSp: Number of Siblings/Spouses Aboard
- Parch: Number of Parents/Children Aboard
- Ticket: Ticket Number
- Fare: Passenger Fare
- Cabin: Cabin
- Embarked: Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
Notes:
- Pclass is a proxy for socio-economic status (SES): 1st = Upper; 2nd = Middle; 3rd = Lower.
- With respect to the family-relation variables (SibSp and Parch), some relations were ignored. The definitions used are:
  - Sibling: brother, sister, stepbrother, or stepsister of a passenger aboard the Titanic
  - Spouse: husband or wife of a passenger aboard the Titanic (mistresses and fiancés ignored)
  - Parent: mother or father of a passenger aboard the Titanic
  - Child: son, daughter, stepson, or stepdaughter of a passenger aboard the Titanic
THE DATA The target variable is Survived. Name, Ticket Number, and Cabin were ignored, since they are essentially unique identifiers. Missing values were imputed to produce a complete dataset before running some of the models.
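The slides show the dataset itself rather than the code used to prepare it; a minimal sketch of this preparation step in R might look like the following (the file name train.csv and the exact column handling are assumptions, not taken from the deck).

```r
# Minimal sketch of the data preparation described above (file name and details assumed).
train <- read.csv("train.csv", stringsAsFactors = FALSE)  # Kaggle-style training file

# Drop the unique-identifier columns and treat the categoricals as factors
train <- subset(train, select = -c(Name, Ticket, Cabin))
train$Survived <- factor(train$Survived, levels = c(0, 1))
train$Pclass   <- factor(train$Pclass)
train$Sex      <- factor(train$Sex)
train$Embarked <- factor(train$Embarked)

str(train)      # check variable types
summary(train)  # note the missing Age values that will need imputation
```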
John Jacob "Jack" Astor IV (July 13, 1864 - April 15, 1912) was an American businessman, real estate developer, investor, inventor, writer, lieutenant colonel in the Spanish-American War, and a prominent member of the Astor family. He was the richest passenger aboard the Titanic and was thought to be among the richest people in the world at that time, with a net worth of nearly $87 million when he died (equivalent to about $2.13 billion in 2015).
INITIAL EXPLORATION & FEATURE ENGINEERING
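The exploration itself appears only as figures in the deck; the snippet below is an illustrative sketch of the kind of summaries typically produced at this stage, assuming the train data frame prepared earlier (the FamilySize feature is a hypothetical example, not something stated in the slides).

```r
# Illustrative exploration, assuming the prepared train data frame
prop.table(table(train$Sex, train$Survived), margin = 1)      # survival rate by sex
prop.table(table(train$Pclass, train$Survived), margin = 1)   # survival rate by class
prop.table(table(train$Embarked, train$Survived), margin = 1) # survival rate by port

# One commonly engineered feature from SibSp and Parch: family size
train$FamilySize <- train$SibSp + train$Parch + 1
```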
SUPPORT VECTOR MACHINES
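The SVM slides are figures only; below is a hedged sketch of how the linear and radial SVMs named in the project intent could be fit in R with the e1071 package (the package choice, formula, and tuning parameters are assumptions).

```r
# Sketch of the linear and radial SVMs; e1071 and all parameters are assumptions.
library(e1071)

train_cc <- na.omit(train)  # svm() needs complete cases (Age has missing values)

svm_linear <- svm(Survived ~ Pclass + Sex + Age + SibSp + Parch + Fare + Embarked,
                  data = train_cc, kernel = "linear", cost = 1)
svm_radial <- svm(Survived ~ Pclass + Sex + Age + SibSp + Parch + Fare + Embarked,
                  data = train_cc, kernel = "radial", gamma = 0.1, cost = 1)

# Quick in-sample comparison of the two kernels
mean(predict(svm_linear, train_cc) == train_cc$Survived)
mean(predict(svm_radial, train_cc) == train_cc$Survived)
```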
LOGISTIC REGRESSION Logistic regression is a statistical method for analyzing a dataset in which one or more independent variables (in our case, 10) determine an outcome. The outcome is measured by the dichotomous variable Survived, which is either 0 or 1. From the output of the logistic regression we can see which coefficients are considered significant by their p-values. Here Pclass, Sex, Age, and SibSp all have extremely small p-values and thus play an important role in the predictions. A common way to represent the accuracy of a logistic regression is the receiver operating characteristic (ROC) curve, which illustrates the performance of a binary classifier. The curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) as the decision threshold varies. The true positive rate is also known as sensitivity, or recall in machine learning; the false positive rate is also known as the fall-out and can be calculated as 1 - specificity. The area under the curve (AUC) is frequently used for model comparison: the higher the AUC, the better.
LOGISTIC REGRESSION ROC curve for the logistic regression model: AUC = ~85%.
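The deck reports these results from figures rather than code; the following is a minimal sketch of how the logistic regression and its ROC/AUC could be produced in R, assuming the prepared train data frame and the pROC package (both assumptions, not taken from the slides).

```r
# Sketch of the logistic regression and its ROC/AUC; pROC and the formula are assumptions.
library(pROC)

logit_fit <- glm(Survived ~ Pclass + Sex + Age + SibSp + Parch + Fare + Embarked,
                 data = train, family = binomial)
summary(logit_fit)  # Pclass, Sex, Age, and SibSp show very small p-values

probs   <- predict(logit_fit, type = "response")  # fitted probabilities
roc_obj <- roc(logit_fit$model$Survived, probs)   # response restricted to the rows glm kept
plot(roc_obj)
auc(roc_obj)  # area under the curve (reported as ~0.85 in the study)
```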
DECISION TREE Decision trees are great because they are intuitive and, after a brief explanation, can be read by people with little experience in machine learning. The algorithm starts with all of the data at the root node (the top box) and scans all of the variables to find the best one to split on. The completed decision tree is read from the root node: at the top it reflects the fact that 62% of passengers died and 38% lived. Moving down the branches we see that if the passenger was male (moving left along the branch), he had a 19% chance of survival, and males represented 65% of the passengers. (First split shown in the tree: Sex = male, with Yes/No branches.)
DECISION TREE The final nodes at the bottom of the decision tree are known as terminal nodes. After all of the Boolean choices have been made for a given passenger, the passenger ends up in one of the terminal nodes, and the majority vote in that bucket determines the prediction for new passengers whose fate is unknown. Recall that logistic regression found the most important variables to be Sex, Age, Pclass, and SibSp; the decision tree agrees.
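The tree described above can be reproduced with a standard recursive-partitioning fit; the sketch below assumes the rpart and rpart.plot packages and the same feature set used elsewhere, details the slides do not state explicitly.

```r
# Sketch of the decision tree; rpart/rpart.plot and the formula are assumptions.
library(rpart)
library(rpart.plot)

tree_fit <- rpart(Survived ~ Pclass + Sex + Age + SibSp + Parch + Fare + Embarked,
                  data = train, method = "class")
rpart.plot(tree_fit)  # root node: ~62% died / 38% survived; first split on Sex

# Each passenger lands in a terminal node; the majority class there is the prediction
tree_pred <- predict(tree_fit, newdata = train, type = "class")
mean(tree_pred == train$Survived)  # in-sample accuracy
```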
RANDOM FOREST Random forests get past the overfitting problems of individual decision trees. If we take a large collection of individually imperfect models, each model's mistakes will generally not be repeated by the rest, so averaging their results gives a superior model; the combination of models typically ends up better than any individual model. Since the procedure for building a single decision tree is the same every time, some source of randomness is required to make the trees differ from one another. Through these sources of randomness the ensemble contains a collection of unique trees that all make their classifications differently. Each tree is asked to classify a given passenger, the votes are tallied (possibly over hundreds or thousands of trees), and the majority decision is chosen. Since each tree is grown out fully, each one overfits, but in different ways, so the mistakes one tree makes are averaged out over them all. Missing values have to be cleaned up before a random forest can be used. For the couple of missing values in the Fare variable we can safely fill in the median fare from the data. There is a substantial number of missing values in the Age variable, however. To fill these in we can use a decision tree as before, but this time specifying the method as anova instead of class, since we are predicting a continuous variable rather than a categorical one. The following plot tells us which variables are important.
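A minimal sketch of the imputation and random forest fit described above, assuming the rpart and randomForest packages and the train data frame prepared earlier; the imputation formula and ntree are assumptions.

```r
# Sketch of the missing-value cleanup and random forest fit; details are assumptions.
library(rpart)
library(randomForest)

# Fare: only a couple of values are missing, so the median is a safe fill-in
train$Fare[is.na(train$Fare)] <- median(train$Fare, na.rm = TRUE)

# Age: predict the missing ages with a regression tree (method = "anova")
age_tree <- rpart(Age ~ Pclass + Sex + SibSp + Parch + Fare + Embarked,
                  data = train[!is.na(train$Age), ], method = "anova")
train$Age[is.na(train$Age)] <- predict(age_tree, train[is.na(train$Age), ])

rf_fit <- randomForest(Survived ~ Pclass + Sex + Age + SibSp + Parch + Fare + Embarked,
                       data = train, ntree = 500, importance = TRUE)
rf_fit  # out-of-bag error estimate
```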
RANDOM FOREST Two types of importance measure are shown. Mean Decrease in Accuracy tests how much worse the model performs without each variable, so a large decrease in accuracy is expected for highly predictive variables. Mean Decrease in Gini relates to the mathematics behind decision trees: it essentially measures how pure the terminal nodes are. Again, it reports the effect of removing each variable; a high score indicates the variable was important. Once again we see how important Sex, Pclass, and Age are in determining survival.
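Assuming the rf_fit object from the sketch above (fit with importance = TRUE), the two importance measures can be inspected as follows.

```r
varImpPlot(rf_fit)  # MeanDecreaseAccuracy and MeanDecreaseGini panels
importance(rf_fit)  # the underlying importance scores as a table
```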
k-NEAREST NEIGHBORS & NAIVE BAYES The kNN model is another algorithm for object classification that is widely used in data science and analytics. The algorithm is much simpler and more intuitive than the previous models. To run kNN in R we also need complete training and testing datasets with no missing values, as with the Random Forest model. In R, knn classifies the test set from the training set: for each row of the test set, the k nearest (in Euclidean distance) training-set vectors are found, and the classification is decided by majority vote, with ties broken at random. If there are ties for the kth nearest vector, all candidates are included in the vote. Typically the value of k is determined by taking the square root of the number of features; in our case we classify based on seven features, so I chose k = 3. The choice of k can be critical: a small value of k means that noise has a higher influence on the result, while a large value is computationally expensive and somewhat defeats the basic idea behind kNN (that nearby points are likely to share similar densities or classes). In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features: all naive Bayes classifiers assume that the value of a particular feature is independent of the value of any other feature, given the class variable. In R, the naiveBayes function computes the conditional a-posteriori probabilities of a categorical class variable given independent predictor variables using Bayes' rule.
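A hedged sketch of the kNN and naive Bayes fits described above, using the class and e1071 packages; the numeric encoding, the scaling, and the hold-out split are assumptions rather than details taken from the slides.

```r
# Sketch of kNN and naive Bayes; package choices and the split are assumptions.
library(class)
library(e1071)

dat <- na.omit(train)  # both sketches assume complete cases
features <- c("Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked")

# kNN needs a numeric feature matrix: convert factors to codes and rescale
X <- data.frame(lapply(dat[, features],
                       function(x) if (is.numeric(x)) x else as.numeric(as.factor(x))))
X <- scale(X)

set.seed(1)
idx <- sample(nrow(dat), floor(0.8 * nrow(dat)))  # simple hold-out split

knn_pred <- knn(train = X[idx, ], test = X[-idx, ],
                cl = dat$Survived[idx], k = 3)
mean(knn_pred == dat$Survived[-idx])  # hold-out accuracy

# Naive Bayes with the same seven features
nb_fit  <- naiveBayes(Survived ~ Pclass + Sex + Age + SibSp + Parch + Fare + Embarked,
                      data = dat[idx, ])
nb_pred <- predict(nb_fit, dat[-idx, ])
mean(nb_pred == dat$Survived[-idx])
```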
SUMMARY As previously shown, the three most important variables in predicting survival consistently appear to be Sex, Pclass, and Age. Based on each model it can also be shown that SibSp and Fare played a fairly important role. Based on results submitted to Kaggle, the original decision tree finished with the highest accuracy, 78.5%, placing 1503rd out of 3916 total entries. This was good enough to beat 62% of the other competitors; a minor difference in accuracy can represent a major difference in ranking. Some of my model accuracies were reported higher within R than what my Kaggle submissions indicated. This is most likely due to overfitting: the classifier fits the training data too closely and thus fails to give the same accuracy on the test data. My next goal would be to work further with the Random Forest model, as I feel it has the most potential for improving my submission results. As this study has already indicated, a random forest is an aggregation of hundreds or more decision trees, so with some more feature engineering and model tuning one would expect it to give higher accuracy. Within the RStudio interface I was able to achieve an accuracy of 84% using a Random Forest, so clearly the model is overfitting the training data in some ways.
SUMMARY A bar plot shows the accuracy of each model/algorithm tested. The original decision tree performed the best, while the k-nearest Neighbors model performed the worst. The highest Kaggle ranking achieved was #1503 out of 3916 total entries.