The TWIX Package. April 24, 2007


Version
Title        Trees WIth extra splits
Author       Sergej Potapov
Maintainer   Martin Theus
Depends      R (>= 1.8.0), rpart, iplots
Description  Trees with extra splits
Date
License      Standard GNU public license
URL

R topics documented:
Dev.leaf, Devplot, bagg.default, boottwix, deviance.twix, export, fullrks, get.splitvar, get.tree, idevplot, nlogn, olives, plot.twix, pred.value, predict.twix, print.id.tree, print.single.tree, print.twix, scree.plot, sp.slave, splitt, summary.twix, tunetwix, TWIX, Index

Dev.leaf                Internal functions of TWIX

Description: Internal functions of package TWIX.

Usage:
Dev.leaf(x)

Details: These are not to be called by the user.


Devplot                 Deviance plot

Description: Deviance plot.

Usage:
Devplot(rsp, x, interactiv=FALSE, sample=c(FALSE,0), col=1, classes=FALSE, pch=16, ...)

Arguments:
rsp         response variable.
x           a data frame of predictor variables.
interactiv  see idevplot.
sample      a data frame of predictor variables.
col         the color for points.
classes     scatterplot of classes.
pch         a vector of plotting characters or symbols.
...         other parameters to be passed through to plotting functions.

See Also: idevplot, plot.twix, TWIX

Examples:
#Devplot(olives$Region, olives[,1:8])

bagg.default            Predictions from TWIX's or Bagging Trees

Description: Prediction of a new observation based on multiple trees.

Usage:
## Default S3 method:
bagg(object, data=NULL, sq=1:10, ...)
## S3 method for class 'TWIX':
bagg(object, ...)
## S3 method for class 'boottwix':
bagg(object, ...)

Arguments:
object  an object of class TWIX or boottwix.
data    a data frame of new observations.
sq      integer vector indicating for which trees predictions are required.

See Also: TWIX, predict.twix, boottwix

Examples:
#Tree <- TWIX(Region~., data=olives, topN=c(5,3), method="local")
#Tree1 <- boottwix(Region~., data=olives, topN=c(3,1), N=10)
#pred <- bagg(Tree, olives, sq=1:10)
#pred1 <- bagg(Tree1, olives, sq=1:10)
#
# CCR's
#sum(pred == olives$Region)/nrow(olives)
#sum(pred1 == olives$Region)/nrow(olives)
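The commented example above can be run directly; a minimal sketch, assuming the olives data shipped with the package and the bagg arguments listed above, that aggregates the first ten trees and reports the combined correct classification rate:

library(TWIX)
data(olives)
Tree <- TWIX(Region ~ ., data = olives, topN = c(5, 3), method = "local")
pred <- bagg(Tree, olives, sq = 1:10)        # aggregate predictions of trees 1..10
sum(pred == olives$Region) / nrow(olives)    # combined CCR on the training data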

boottwix                Bootstrap of the TWIX trees

Description: Bootstrap samples of the Greedy-TWIX-trees.

Usage:
boottwix(formula, data=NULL, test.data=0, N=1, topN=1, subset=NULL,
    method="deviance", topn.method="complete",
    cluster=NULL, minsplit=30, minbucket=round(minsplit/3),
    Devmin=0.1, level=20, score=1, tol=0.15)

Arguments:
formula      formula of the form y ~ x1 + x2 + ..., where y must be a factor and x1, x2, ... are numeric.
data         an optional data frame containing the variables in the model (training data).
test.data    a data frame containing new data.
N            an integer giving the number of bootstrap replications.
topN         integer vector. How many splits will be selected and at which level? If of length 1, the same number of splits is selected at each level. If of length > 1, for example topN=c(3,2), 3 splits are chosen at the first level, 2 splits at the second level and 1 split at all further levels.
subset       an optional vector specifying a subset of observations to be used.
method       which split points will be used? This can be "deviance" (default), "grid" or "local". If set to "local", the program uses the local maxima of the split function (entropy); "deviance" uses all values of the entropy; "grid" uses grid points.
topn.method  one of "complete" (default) or "single". A specification of the consideration of the split points. If set to "complete", split points from all variables are used; otherwise split points are used per variable.
cluster      name of the cluster, if parallel computing will be used.
minsplit     the minimum number of observations that must exist in a node.
minbucket    the minimum number of observations in any terminal <leaf> node.
Devmin       the minimum improvement in entropy required for a split.
level        maximum depth of the trees. If level is set to 1, trees consist of the root node only.
score        a parameter, which can be 1 (default) or 2. If it is 2, the sort function is used; if it is 1, the weight function score = 0.25*scale(dev.tr) + 0.6*scale(fit.tr) + 0.15*(tree.structure) is used.
tol          parameter, which is used if topn.method is set to "single".

Value

a list with the following components:
call   the call generating the object.
trees  a list of all constructed trees, which includes ID, Dev, ... for each tree.

See Also: get.tree, predict.twix, deviance.twix, bagg.twix

Examples:
#Tree <- boottwix(Region~., data=olives, N=5)
#Tree$trees


deviance.twix           Deviance of the TWIX-trees

Description: Returns the deviance of a fitted model object.

Usage:
## S3 method for class 'TWIX':
deviance(object, type="training", ...)

Arguments:
object  an object of class TWIX or boottwix for which the deviance is desired.
type    the deviance from the test data, the training data, or both.
...     additional optional arguments.

Value

The value of the deviance extracted from object.

See Also: TWIX, predict.twix

Examples:
#Tree <- TWIX(Region~., data=olives, topN=c(5,3), method="local")
#deviance(Tree)
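A minimal sketch of the bootstrap workflow under the same assumptions (olives data, argument names as listed above), fitting a small ensemble of bootstrap trees next to a single greedy TWIX fit:

library(TWIX)
data(olives)
Tree  <- TWIX(Region ~ ., data = olives, topN = c(5, 3), method = "local")
BTree <- boottwix(Region ~ ., data = olives, N = 5)   # five bootstrap replications
deviance(Tree)   # training deviance of the greedy TWIX trees
BTree$trees      # list of bootstrap trees (ID, deviance, ...)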

export                  Export TWIX-trees for KLIMT

Description: Export TWIX-trees for KLIMT.

Usage:
export(x, sq = 1, directory = "ForKlimt")

Arguments:
x          an object of class TWIX.
sq         a vector of tree IDs.
directory  the name of the directory that will be created.

See Also: predict.twix, print.twix

Examples:
#Tree <- TWIX(Region~., data=olives, topN=c(3,2), method="local", level=5)
#export(Tree, c(1,3))


fullrks                 Internal functions of TWIX

Description: Internal functions of TWIX.

Usage:
fullrks(m)

Arguments:
m  a numeric or character vector.

Examples:
fullrks(c(3,1,2,4,5))
a <- as.factor(c("yes","yes","no","no"))
fullrks(a)
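A sketch of exporting selected trees for inspection in KLIMT, assuming the olives data and the usage above; the directory name is the documented default:

library(TWIX)
data(olives)
Tree <- TWIX(Region ~ ., data = olives, topN = c(3, 2), method = "local", level = 5)
export(Tree, sq = c(1, 3), directory = "ForKlimt")   # writes trees 1 and 3 for KLIMT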

get.splitvar            Methods for getting split variables from a single tree (single.tree)

Description: This function extracts the split variables from a single.tree object.

Usage:
get.splitvar(x, sq = 1:length(x$trees), parm = "Splitvar")

Arguments:
x     an object of class TWIX generated by TWIX.
sq    integer vector indicating for which trees the split variables are required.
parm  which information must be returned? This can be "Splitvar" or "Dev".

See Also: TWIX, get.tree, predict.twix

Examples:
#TreeFR <- TWIX(Area~., data=olives, topN=c(2,1), method="local", level=5)
#TreeSG <- get.splitvar(TreeFR, sq=1)
#TreeSG
#get.tree(TreeFR)


get.tree                Methods for getting appointed trees from a multitree (TWIX)

Description: This function extracts a single tree from a TWIX object.

Usage:
get.tree(m.tr, n = 1, id = NULL)

Arguments:
m.tr  an object of class TWIX generated by TWIX.
n     integer. Which tree must be extracted?
id    a vector of integers. ID of the tree.

See Also: TWIX, predict.twix

Examples:
#TreeFR <- TWIX(Area~., data=olives, topN=c(2,1), method="local")
#TreeSG <- get.tree(TreeFR, n=1)
#TreeSG


idevplot                Interactive Deviance plot

Description: Interactive Deviance plot.

Usage:
idevplot(rsp, data, col=1, ...)

Arguments:
rsp   response variable.
data  a data frame of numeric predictor variables.
col   the color for points.
...   other parameters to be passed through to plotting functions.

See Also: Devplot, plot.TWIX, TWIX

Examples:
#idevplot(olives$Region, olives[,c(1:8)])
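To inspect individual trees of a multitree, get.splitvar and get.tree can be combined; a sketch assuming the olives data and the signatures shown above:

library(TWIX)
data(olives)
TreeFR <- TWIX(Area ~ ., data = olives, topN = c(2, 1), method = "local", level = 5)
get.splitvar(TreeFR, sq = 1, parm = "Splitvar")   # split variables of tree 1
TreeSG <- get.tree(TreeFR, n = 1)                 # extract tree 1 as a single.tree
TreeSG                                            # printed via print.single.tree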

nlogn                   Internal functions of TWIX

Description: This function computes 0*log(0).

Usage:
nlogn(x)

Arguments:
x  a numeric value.

Examples:
log(0)
nlogn(0)


olives                  Classification of olive oils from their fatty acid composition

Description: The olives data frame has 572 rows and 10 columns.

Usage:
olives

Format

This data frame contains the following columns:
Area    a factor with 3 levels.
Region  a factor with 9 levels.
and 8 variables of fatty acid measurements.

Source

Forina, M., Armanino, C., Lanteri, S. and Tiscornia, E. (1983) Food Research and Data Analysis, Applied Science Publishers, CA 1983.
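A quick first look at the data set, assuming it is loaded with data() as is usual for package data sets:

library(TWIX)
data(olives)
dim(olives)            # 572 rows, 10 columns
str(olives)            # Area, Region and eight fatty acid measurements
table(olives$Region)   # class frequencies of the response used in the examples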

plot.twix               Plotting method for TWIX objects

Description: Plot a TWIX or boottwix object generated by the TWIX or boottwix function(s).

Usage:
## S3 method for class 'TWIX':
plot(x, sq = 1:length(x$trees), type = "deviance",
    i.plot = FALSE, size = 3, freq = TRUE, breaks = "Sturges", pch = par("pch"), ...)

Arguments:
x       an object of class TWIX.
sq      integer vector giving the number of trees to be plotted.
type    one of "deviance", "ccr", "d&c".
i.plot  logical. If TRUE, iplot will be used.
size    value for the largest circle (cex).
freq    logical. Should the frequency or the density be plotted?
breaks  see histogram.
pch     a vector of plotting characters or symbols.
...     graphical parameters can be given as arguments to plot.

Details

If type = "deviance", the training deviance is plotted against the test deviance. If type = "ccr", the correct classification rate (CCR) for the training data is plotted against the CCR for the test data. If type = "d&c", the deviance is plotted against the CCR for the test data.

See Also: TWIX, get.tree

Examples:
i <- sample(572, 150)
ic <- setdiff(1:572, i)
training <- olives[ic,]
valid <- olives[i,]
#
#Tree <- TWIX(Region~., training, test.data=valid, topN=c(10,2), method="local")
#plot(Tree)
#plot(Tree, type="ccr")
#plot(Tree, type="d&c")
#plot(Tree, i.plot=TRUE)


pred.value              Prediction for one case

Description: Internal prediction functions.

Usage:
pred.value(x, tree)

Arguments:
x     the case from a data.frame.
tree  an object of class single.tree.

See Also: predict.twix


predict.twix            Predictions from a TWIX object

Description: The result is a data frame whose rows are prediction values from the appointed tree(s).

Usage:
## S3 method for class 'TWIX':
predict(object, newdata, sq=1, ccr=FALSE, ...)

Arguments:
object   an object returned from the TWIX function.
newdata  data frame containing the new data (test data).
sq       integer vector indicating for which trees predictions are required.
ccr      logical. If TRUE, the result is a list of two components: a data frame with prediction values and the correct classification rate of the trees. This parameter can be ignored if the function TWIX has been called with test data (test.data=test).
...      additional arguments affecting the predictions produced.

See Also: TWIX, plot.twix

Examples:
i <- sample(572, 150)
ic <- setdiff(1:572, i)
training <- olives[ic,]
test <- olives[i,]
#
#Tree <- TWIX(Region~., training, topN=c(5,2), method="local")
#pred <- predict(Tree, newdata=test, sq=1:2)
#
#predict(Tree, newdata=test, sq=1:5, ccr=TRUE)$ccr

A runnable sketch of this workflow follows the print.id.tree entry below.


print.id.tree           Internal functions of TWIX

Description: Print names from an id.tree object.

Usage:
## S3 method for class 'id.tree':
print(x, sq = 1:5, ...)

Arguments:
x    an object of class id.tree.
sq   a numeric vector.
...  further arguments passed to or from other methods.

See Also: TWIX
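As referenced under predict.twix above, prediction on a held-out test set can be sketched end to end; the sketch assumes the olives data and the predict method as documented:

library(TWIX)
data(olives)
set.seed(2)                                   # reproducible hold-out split
i <- sample(572, 150)
training <- olives[setdiff(1:572, i), ]
test     <- olives[i, ]
Tree <- TWIX(Region ~ ., data = training, topN = c(5, 2), method = "local")
pred <- predict(Tree, newdata = test, sq = 1:2)           # predictions of trees 1 and 2
predict(Tree, newdata = test, sq = 1:2, ccr = TRUE)$ccr   # CCR on the test data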

print.single.tree       Print tree from a single.tree object

Description: This is a method for the generic print() function for objects generated by the function get.tree.

Usage:
## S3 method for class 'single.tree':
print(x, klimt=FALSE, Data=NULL, file="fromr.tree", ...)

Arguments:
x      an object of class single.tree.
klimt  logical. If TRUE, Klimt will be started with the tree and the dataset Data.
Data   a data frame. It can be test data or training data. This parameter is ignored if klimt == FALSE.
file   a character string naming a file.
...    further arguments passed to or from other methods.

See Also: get.tree, TWIX

Examples:
#Tree <- TWIX(Area~., data=olives, topN=c(2,2), method="local")
#Tree1 <- get.tree(Tree, n=1)
#Tree1
# for Klimt
#print(Tree1, klimt=TRUE, Data=olives)


print.twix              Print method for TWIX or BootTWIX tree objects

Description: Print an object of class TWIX or boottwix.

Usage:
## S3 method for class 'TWIX':
print(x, ...)
## S3 method for class 'boottwix':
print(x, ...)

Arguments:
x    an object of class TWIX or boottwix.
...  additional arguments.

Details

An object of class TWIX or boottwix is printed. Information about the names of the intermediate variables is given.

See Also: TWIX, print.single.tree


scree.plot              Scree-plot

Description: A scree plot shows the sorted maximum decrease in impurity for each variable's value.

Usage:
scree.plot(formula, data = NULL, bars = TRUE, col = "grey",
    type = "b", pch = 16, ylim = c(0, 1), ...)

Arguments:
formula  formula of the form y ~ x1 + x2 + ..., where y must be a factor and x1, x2, ... are numeric or factor.
data     an optional data frame containing the variables in the model.
bars     the type of plot: barplot or lines.
col      the colors for lines or bars.
type     the color for points.
pch      see par(pch = ...).
ylim     the y limits of the plot.
...      other parameters to be passed through to plotting functions.

See Also: idevplot, plot.twix, TWIX

Examples:
scree.plot(Region~., data=olives, bars=FALSE, col=2)
scree.plot(Region~., data=olives, bars=TRUE)

sp.slave                Internal functions of TWIX

Description: Trees with extra splits.

Usage:
sp.slave(rsp, m, test.data, Dmin = 0.01, minsplit = 20,
    minbucket = round(minsplit/3), topn = 1, method = "deviance",
    topn.method = "complete", level = 3, lev = 0, st = 1, tol = 0.1,
    K = 0, oldspvar = 0)

Arguments:
rsp          a response variable.
m            a data frame (training data).
test.data    a data frame containing new data.
method       which split points will be used? This can be "deviance" (default), "grid" or "local". If set to "local", the program uses the local maxima of the split function (entropy); "deviance" uses all values of the entropy; "grid" uses grid points.
topn.method  one of "complete" (default) or "single". A specification of the consideration of the split points. If set to "complete", split points from all variables are used; otherwise split points are used per variable.
minsplit     the minimum number of observations that must exist in a node.
minbucket    the minimum number of observations in any terminal <leaf> node.
Dmin         the minimum improvement in entropy required for a split.
topn         an integer vector. How many splits will be selected and at which level? If of length 1, the same number of splits is selected at each level. If of length > 1, for example topn=c(3,2), 3 splits are chosen at the first level, 2 splits at the second level and 1 split at all further levels.
level        maximum depth of the trees. If level is set to 1, trees consist of the root node only.
st           step parameter for method "grid".
tol          parameter, which is used if topn.method is set to "single".
lev          internal parameter.
K            k-fold cross-validation.
oldspvar     internal parameter.

See Also: TWIX, predict.twix

splitt                  Internal functions of TWIX

Description: This function computes the deviance and the split points.

Usage:
splitt(sv, rsp, svrks = fullrks(sv), meth = "deviance", topn = 1,
    topn.meth = "complete", lstep = 1, test = FALSE, K = 0, level = 0)

Arguments:
sv         a numeric vector of the predictor variable.
rsp        response variable.
svrks      an index vector of sv by sorting.
meth       which split points will be used? This can be "deviance" (default), "grid" or "local".
topn       a numeric vector. How many splits will be selected and at which level?
topn.meth  one of "complete" (default) or "single".
lstep      step parameter for method "grid".
test       parameter for Devplot.
K          k-fold cross-validation.
level      sets the maximum depth of the TWIX trees.


summary.twix            Summarising TWIX

Description: summary method for objects returned by TWIX or boottwix.

Usage:
## S3 method for class 'TWIX':
summary(object, ...)
## S3 method for class 'boottwix':
summary(object, ...)

Arguments:
object  an object of class TWIX or boottwix.
...     further arguments to be passed to or from methods.

See Also: TWIX


tunetwix                Parameter Tuning

Description: This function tunes the hyperparameters minbuck and maxdepth.

Usage:
tunetwix(formula, data = NULL, minbuck = seq(5, 30, by = 5),
    xval = 10, runs = 10, trace.plot = TRUE)

Arguments:
formula     formula of the form y ~ x1 + x2 + ..., where y must be a factor and x1, x2, ... are numeric or factor.
data        an optional data frame containing the variables in the model.
minbuck     the sampling space for the parameter minbuck.
xval        number of cross-validations.
runs        number of runs.
trace.plot  should a trace plot be plotted?
...         other parameters to be passed through to plotting functions.

See Also: plot.twix, TWIX

Examples:
tunetwix(Region~., data=olives[,1:9], minbuck=c(1,5,10,15,20,25), runs=2)

TWIX                    Trees with extra splits

Description: Trees with extra splits.

Usage:
TWIX(formula, data = NULL, test.data = 0, subset = NULL,
    method = "deviance", topn.method = "complete", cluster = NULL,
    minsplit = 30, minbucket = round(minsplit/3), Devmin = 0.05,
    topN = 1, level = 30, st = 1, cl.level = 2, tol = 0.15,
    score = 1, k = 0, trace.plot = FALSE, ...)

Arguments:
formula      formula of the form y ~ x1 + x2 + ..., where y must be a factor and x1, x2, ... are numeric or factor.
data         an optional data frame containing the variables in the model (training data).
test.data    this can be a data frame containing new data, 0 (default), or "NULL". If set to "NULL", the bad observations will be identified.
subset       an optional vector specifying a subset of observations to be used.
method       which split points will be used? This can be "deviance" (default), "grid" or "local". If set to "local", the program uses the local maxima of the split function (entropy); "deviance" uses all values of the entropy; "grid" uses grid points.
topn.method  one of "complete" (default) or "single". A specification of the consideration of the split points. If set to "complete", split points from all variables are used; otherwise split points are used per variable.
cluster      name of the cluster, if parallel computing will be used.
minsplit     the minimum number of observations that must exist in a node.
minbucket    the minimum number of observations in any terminal <leaf> node.
Devmin       the minimum improvement in entropy required for a split.
topN         integer vector. How many splits will be selected and at which level? If of length 1, the same number of splits is selected at each level. If of length > 1, for example topN=c(3,2), 3 splits are chosen at the first level, 2 splits at the second level and 1 split at all further levels.
level        maximum depth of the trees. If level is set to 1, trees consist of the root node only.
st           step parameter for method "grid".
cl.level     parameter for parallel computing.
tol          parameter, which is used if topn.method is set to "single".

score        a parameter, which can be 1 (default) or 2. If it is 2, the sort function is used; if it is 1, the weight function score = 0.25*scale(dev.tr) + 0.6*scale(fit.tr) + 0.15*(tree.structure) is used.
k            k-fold cross-validation of the split function. k specifies the part of the observations which will be taken into the hold-out sample (k can be in (0, 0.5)).
trace.plot   should a trace plot be plotted?
...          further arguments to be passed to or from methods.

Value

a list with the following components:
call         the call generating the object.
trees        a list of all constructed trees, which includes ID, Dev, ... for each tree.
greedy.tree  greedy tree.
multitree    database.
agg.id       vector specifying trees for aggregation.
Bad.id       ID-vector of bad observations from the training data.

See Also: get.tree, predict.twix, print.single.tree, plot.twix, deviance.twix

Examples:
i <- sample(572, 150)
ic <- setdiff(1:572, i)
training <- olives[ic,]
test <- olives[i,]
#
#Tree1 <- TWIX(Region~., data=training[,1:9], topN=c(9,2), method="local")
#Tree1$trees
#
#pred <- predict(Tree1, newdata=test, sq=1:2)
#
#predict(Tree1, newdata=test, sq=1:2, ccr=TRUE)$ccr
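The pieces documented above fit together as follows; a sketch under the same assumptions (olives data, functions and return components as documented), showing how the fitted multitree can be inspected and plotted against the test data:

library(TWIX)
data(olives)
set.seed(1)                                   # reproducible train/test split
i <- sample(572, 150)
training <- olives[setdiff(1:572, i), ]
test     <- olives[i, ]
Tree1 <- TWIX(Region ~ ., data = training[, 1:9], test.data = test[, 1:9],
              topN = c(9, 2), method = "local")
Tree1$trees     # all constructed trees (ID, deviance, ...)
Tree1$agg.id    # trees selected for aggregation
plot(Tree1)     # training vs. test deviance, as described under plot.twix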

Index

Topic datasets: olives
Topic internal: Dev.leaf
Topic tree: bagg.default, boottwix, deviance.twix, Devplot, export, fullrks, get.splitvar, get.tree, idevplot, nlogn, plot.twix, pred.value, predict.twix, print.id.tree, print.single.tree, print.twix, scree.plot, sp.slave, splitt, summary.twix, tunetwix, TWIX

Alphabetical: bagg (bagg.default), bagg.default, bagg.twix, boottwix, Dev.leaf, deviance.boottwix (deviance.twix), deviance.twix, Devplot, export, fullrks, get.splitvar, get.tree, idevplot, nlogn, olives, plot.boottwix (plot.twix), plot.twix, pred.value, predict.boottwix (predict.twix), predict.twix, print.boottwix (print.twix), print.id.tree, print.single.tree, print.twix, scree.plot, sp.slave, splitt, summary.boottwix (summary.twix), summary.twix, tunetwix, TWIX
