PREPROCESSING THE FEATURE SELECTION ON MINING ALGORITHM - REVIEW

G. VENKATESWARAN, Assistant Professor, Department of IT & BCA, NPR Arts and Science College, Natham

ABSTRACT
Filtration is the process of refining unwanted data out of a dataset. It is a major step in data mining: it prepares the dataset so that a learning algorithm can be run on it. Many algorithms can help to process the data filtered from the raw source. Data from the UCI data repository can be run through the proposed methods and evaluated. This paper focuses on preprocessing the data and evaluating it. In feature selection, each attribute is evaluated by a filter-based algorithm; the whole dataset may also be classified using clustering and other specialized algorithms. Finally, preprocessing is used to refine the data drawn from the whole dataset.

KEYWORDS: Filtration; preprocessing

INTRODUCTION
Data mining is the exploration and analysis of large quantities of data in order to discover valid, novel, potentially useful and ultimately understandable patterns. It is the process of extracting information or patterns from large databases, data warehouses, XML repositories, etc., and is known as one of the core steps of Knowledge Discovery in Databases (KDD). In machine learning and statistics, feature selection, also known as attribute selection or variable subset selection, is the process of selecting a subset of relevant features for model construction. Feature selection techniques are a subset of the more general field of feature extraction: feature extraction creates new attributes derived from the original attributes of the dataset, whereas attribute filtering returns a relevant subset of the existing attributes. Attribute selection methods are generally grouped into four categories: Filter, Wrapper, Embedded and Hybrid methods.

Wrapper Method
In the wrapper method a predictive model is used to score feature subsets. Each new subset is used to train a model, which is then tested on a hold-out set. Counting the number of mistakes made on that hold-out set (the error rate of the model) gives the score for that subset.
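The wrapper idea can be made concrete with a small sketch. The snippet below is only an illustration, not code from the reviewed papers: it assumes a synthetic dataset and a k-nearest-neighbour classifier as the wrapped learner, and greedily adds the feature whose inclusion gives the lowest hold-out error.

# Minimal wrapper-style forward selection (illustrative sketch only).
# Assumes scikit-learn is available; the dataset here is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, test_size=0.3, random_state=0)

def holdout_error(feature_idx):
    """Error rate of the wrapped learner using only the given features."""
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(X_tr[:, feature_idx], y_tr)
    return 1.0 - clf.score(X_ho[:, feature_idx], y_ho)

selected, remaining = [], list(range(X.shape[1]))
best_err = 1.0
while remaining:
    # Score every candidate subset formed by adding one more feature.
    errs = {f: holdout_error(selected + [f]) for f in remaining}
    f_best = min(errs, key=errs.get)
    if errs[f_best] >= best_err:          # stop when no candidate improves
        break
    best_err = errs[f_best]
    selected.append(f_best)
    remaining.remove(f_best)

print("selected features:", selected, "hold-out error:", round(best_err, 3))

Because every candidate subset retrains and re-tests the learner, this kind of search directly reflects the high time cost of wrappers discussed below.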

Filter Method
Instead of the error rate, the filter method uses a proxy measure to score a feature subset. This measure is fast to compute, so filters are less computationally intensive than wrappers, but they produce a feature set that is not tuned to a specific type of predictive model.

Embedded Method
The embedded method is a catch-all group of techniques that perform feature selection as part of the model construction process. These approaches fall between filters and wrappers in terms of computational complexity. Because embedded methods incorporate feature selection into a given learning algorithm, they are specific to that algorithm and can be more efficient than the other methods.

Hybrid Method
The hybrid method combines the filter and wrapper methods: a filter is used to reduce the search space that will then be considered by the subsequent wrapper. These methods aim to achieve the best performance of a particular learning algorithm with a time complexity similar to that of the filter methods.

LITERATURE REVIEW:
In paper [1], one of the key problems arising in a great variety of fields, including pattern recognition and machine learning, is so-called feature selection: finding M relevant features among the N original features. Algorithms that perform feature selection can generally be categorized into two classes: Filters and Wrappers. The former treat feature selection as a preprocessing step independent of the learning algorithm; for the latter, feature selection is wrapped around the learning algorithm and the result of the learning algorithm is used as the evaluation criterion. In general, Filters have low time cost but a weaker effect; conversely, the time cost of Wrappers is high because they call the learning algorithm to evaluate each candidate subset of features, but the effect is better for the predetermined learning algorithm. In recent years data has grown in both the number of instances and the number of features; when the number of features is very large, the Filter model is usually chosen for its computational efficiency, or Filters are applied to reduce the dimensionality of the feature set before Wrappers. Problems with high-dimensional feature sets are generally characterized by large numbers of features, many irrelevant features, many redundant features, and noisy data.
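As a counterpart to the wrapper sketch above, the following minimal filter-style ranking is again only an illustration, not taken from the reviewed papers: each feature is scored independently with a fast proxy measure, here mutual information with the class, and the top-k features are kept without consulting any downstream classifier.

# Minimal filter-style feature ranking (illustrative sketch only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

# Score every feature independently against the class label.
scores = mutual_info_classif(X, y, random_state=0)

k = 5                                   # number of features to keep
top_k = np.argsort(scores)[::-1][:k]
print("top features:", top_k.tolist())
X_reduced = X[:, top_k]                 # dataset restricted to the top-k features

An embedded analogue would instead obtain the scores from the learner itself, for example from the coefficients of an L1-penalized linear model.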

In paper [2], the development of feature selection has taken two major directions: the filters [8] and the wrappers [9]. The filters work fast using a simple measurement, but their results are not always satisfactory. The wrappers, on the other hand, guarantee good results by examining learning results, but they are very slow when applied to wide feature sets containing hundreds or even thousands of features. Although the filters are very efficient at selecting features, they are unstable when applied to wide feature sets. This research incorporates the wrappers to deal with that problem. It is not a pure wrapper procedure, but rather a hybrid feature selection model that uses both filter and wrapper methods. In this method, two feature sets are first filtered out by F-score and information gain, respectively. The feature sets are then combined and further tuned by a wrapper procedure, taking advantage of both the filter and the wrapper. It is not as fast as a pure filter, but it achieves a better result than a filter does; most importantly, the computational time and complexity are reduced compared with a pure wrapper. The hybrid mechanism is more feasible in real bioinformatics applications, which usually involve a large number of related features. In the experiments, the proposed hybrid feature selection mechanism is applied to disordered protein prediction [10] and to gene selection from microarray cancer data [11]; effective feature selection is always very helpful.

In paper [3], an important challenge in the classification of high-dimensional data is to design a learning algorithm that constructs an accurate classifier depending on the smallest possible number of attributes. Further, it is often desired that there be realizable guarantees on the future performance of such feature selection approaches; see, for instance, a recent algorithm proposed by [12] that identifies a gene subset based on importance ranking and, subsequently, combinations of genes for classification. The traditional methods used for classifying high-dimensional data are often characterized as either filters (e.g., [12], [13]) or wrappers (e.g., [14]), depending on whether the attribute selection is performed independently of, or in conjunction with, the base learning algorithm. The proposed approaches are a step toward more general learning strategies that combine feature selection with the classification algorithm and have tight realizable guarantees.

In paper [4], feature subset selection, which aims to choose a subset of good features with respect to the target concepts, is an effective way of reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving result comprehensibility [15], [16]. Many feature subset selection methods have been proposed and studied for machine learning applications. They fall into four broad categories: Embedded, Wrapper, Filter, and Hybrid approaches. The embedded methods incorporate feature selection as part of the training process and are usually specific to given learning algorithms, and therefore may be more efficient than the other three categories [17]. The wrapper methods use the predictive accuracy of a predetermined learning algorithm to determine the goodness of the selected subsets; the accuracy of the learning algorithms is usually high, but the generality of the selected features is limited and the computational complexity is large. The filter methods are independent of learning algorithms and have good generality. The hybrid methods combine filter and wrapper methods [18-22] by using a filter method to reduce the search space that will be considered by the subsequent wrapper; they mainly focus on combining filter and wrapper methods to achieve the best possible performance of a particular learning algorithm with a time complexity similar to that of the filter methods. The wrapper methods are computationally expensive, while the filter methods, in addition to their generality, are usually a good choice when the number of features is very large; thus the filter method is the focus of that paper. With respect to filter feature selection methods, the application of cluster analysis has been demonstrated to be more effective than traditional feature selection algorithms.
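Papers [2] and [4] both describe hybrid pipelines in which a cheap filter first prunes the feature space and a wrapper then tunes the reduced set. The sketch below is a hypothetical illustration of that two-stage idea, not a reproduction of the exact F-score and information-gain procedure of paper [2]: a mutual-information filter keeps a candidate pool, and a small wrapper pass then searches within that pool using the cross-validated accuracy of a Naive Bayes learner.

# Hypothetical two-stage (filter-then-wrapper) hybrid selection sketch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=50, n_informative=6,
                           random_state=1)

# Stage 1 (filter): keep only the 10 highest-scoring features as candidates.
pool = np.argsort(mutual_info_classif(X, y, random_state=1))[::-1][:10]

# Stage 2 (wrapper): greedy forward search inside the pruned pool,
# scoring each candidate subset by cross-validated accuracy.
def cv_accuracy(features):
    return cross_val_score(GaussianNB(), X[:, features], y, cv=5).mean()

selected, best_acc = [], 0.0
for _ in range(len(pool)):
    candidates = [f for f in pool if f not in selected]
    accs = {f: cv_accuracy(selected + [f]) for f in candidates}
    f_best = max(accs, key=accs.get)
    if accs[f_best] <= best_acc:      # stop when accuracy no longer improves
        break
    best_acc = accs[f_best]
    selected.append(f_best)

print("hybrid-selected features:", selected, "CV accuracy:", round(best_acc, 3))

Because the wrapper only ever searches inside the filtered pool, the overall cost stays close to that of the filter stage, which is the motivation given for hybrid methods above.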
In paper [5], feature selection is described as the process of detecting the relevant features and discarding the irrelevant ones.

A correct selection of the features can lead to an improvement of the inductive learner, either in terms of learning speed, generalization capacity or simplicity of the induced model. There are other benefits associated with a smaller number of features: a reduced measurement cost and, hopefully, a better understanding of the domain. Several situations can hinder the process of feature selection, such as the presence of irrelevant and redundant features, noise in the data, or interaction between attributes. In the presence of hundreds or thousands of features, as in DNA microarray analysis, researchers have noticed [23, 24] that a large number of features are commonly not informative, because they are either irrelevant or redundant with respect to the class concept. Moreover, when the number of features is high but the number of samples is small, machine learning becomes particularly difficult, since the search space is sparsely populated and the model cannot correctly distinguish the relevant data from the noise [25]. There are two major approaches to feature selection: individual evaluation and subset evaluation. Individual evaluation, also known as feature ranking [26], assesses individual features by assigning them weights according to their degree of relevance. Subset evaluation, on the other hand, produces candidate feature subsets based on a certain search strategy. Besides this classification, feature selection methods can also be divided into three models: filters, wrappers and embedded methods [27]. With such a vast body of feature selection methods, the need arises for criteria that enable users to decide which algorithm to use (or not) in certain situations. That work reviews several feature selection methods in the literature and checks their performance in an artificial, controlled experimental scenario, contrasting the ability of the algorithms to select the relevant features and discard the irrelevant ones without allowing noise or redundancy to obstruct the process.

In paper [6], a supervised learning algorithm receives a set of labeled training examples, each with a feature vector and a class. The presence of irrelevant or redundant features in the feature set can often hurt the accuracy of the induced classifier [28]. Feature selection, the process of selecting a feature subset from the training examples and ignoring features outside this subset during induction and classification, is an effective way to improve the performance and decrease the training time of a supervised learning algorithm. Feature selection typically improves classifier performance when the training set is small, without significantly degrading performance on large training sets [29], and it is sometimes essential to the success of a learning algorithm, since it can reduce the number of features to the extent that the algorithm becomes applicable. Algorithms for selecting features prior to concept induction fall into two categories. Wrapper methods wrap the feature selection around the induction algorithm to be used, using cross-validation to predict the benefit of adding or removing a feature from the feature subset. Filter methods are general preprocessing algorithms that do not rely on any knowledge of the algorithm to be used. There are strong arguments in favor of both methods, and the paper presents a careful analysis of them.
It also introduces a new feature selection method based on the concept of boosting from computational learning theory, which combines the advantages of filter and wrapper methods: like filters, it is very fast and general, while at the same time using knowledge of the learning algorithm to inform the search and provide a natural stopping criterion. Empirical results are presented using two different wrappers and three variants of the algorithm.

In paper [7], feature subset selection (FSS) is one of the techniques used to preprocess data before performing any data mining task, e.g., classification or clustering.

FSS identifies a subset of the original features/variables [30] from a given data set while removing irrelevant and/or redundant features [31]. The objectives of FSS are to improve the prediction performance of the predictors, to provide faster and more cost-effective predictors, and to provide a better understanding of the underlying process that generated the data.

ANALYSIS:
The reviewed papers are summarized below by problem addressed, dataset used, feature selection status, and algorithm.

1. Efficient feature selection for high-dimensional data using two-level filters
Problem: The selected feature subsets contain features highly correlated with each other; there is much irrelevance and redundancy, and the time expense of a genetic approach is high.
Dataset: UCI datasets (Ionosphere, Sonar, Spectf, Multi-feature).
Feature selection status: ReliefF and KNNC are used to remove irrelevant and redundant data.
Algorithm: k-Nearest Neighbors clustering algorithm.

2. Hybrid feature selection by combining filters and wrappers
Problem: Feature subset selection is neither accurate nor fast.
Dataset: Disordered protein dataset and microarray datasets (AML and ALL, lung cancer).
Feature selection status: A three-step procedure (preliminary screening, combination and fine tuning) makes the selected features accurate and fast.
Algorithm: Filters vs. wrappers; hybrid feature selection.

3. Feature selection with conjunctions of decision stumps and learning from microarray data
Problem: Learning from high-dimensional DNA microarray data.
Dataset: Microarray datasets (Colon, Leukemia, B_MD and C_MD, Lung, BreastER, celiac disease, colon epithelial biopsies, multiple myeloma and bone lesion).
Feature selection status: Using three learning algorithms, the high-dimensional microarray datasets are processed.
Algorithm: Occam's Razor learning algorithm; PAC-Bayes learning algorithm.
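Entry 1 above relies on ReliefF-style relevance weighting. As a rough illustration of the idea (a simplified two-class Relief rather than the full ReliefF used in that paper), the sketch below raises a feature's weight when it separates an instance from its nearest miss and lowers it when it differs from its nearest hit.

# Simplified two-class Relief weighting (illustrative sketch, not full ReliefF).
import numpy as np
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           random_state=2)
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))   # scale to [0, 1]

n, d = X.shape
weights = np.zeros(d)
for i in range(n):
    dists = np.abs(X - X[i]).sum(axis=1)              # distance to every point
    dists[i] = np.inf                                  # exclude the point itself
    same, other = (y == y[i]), (y != y[i])
    hit = np.argmin(np.where(same, dists, np.inf))     # nearest same-class neighbour
    miss = np.argmin(np.where(other, dists, np.inf))   # nearest other-class neighbour
    # Features that differ from the hit are penalized,
    # features that differ from the miss are rewarded.
    weights += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n

print("Relief weights:", np.round(weights, 3))
print("features ranked by relevance:", np.argsort(weights)[::-1].tolist())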

4. A fast clustering-based feature subset selection algorithm for high-dimensional data
Problem: Irrelevant and redundant data; some irrelevant features can be removed, but redundant data remains.
Dataset: 35 benchmark datasets (Chess, mfeat-fourier, coil2000, elephant, arrhythmia, fqs-nowe, colon, fbis.wc, AR10P, PIE10P, oh0.wc, oh10.wc, B-cell1, B-cell2, B-cell3, base-hock, TOX-171, tr12.wc, tr23.wc, tr11.wc, embryonal-tumours, leukemia1, leukemia2, tr21.wc, wap.wc, PIX10P, ORL10P, CLL-SUB-111, ohscal.wc, la2s.wc, la1s.wc, GCM, SMK-CAN-187, new3s.wc, GLA-BRA-180).
Feature selection status: Using the FAST algorithm and its tree-based clustering technique, the irrelevant and redundant data can be removed.
Algorithm: FAST algorithm.

5. A review of feature selection methods on synthetic data
Problem: Irrelevant, redundant and noisy data.
Dataset: Artificial datasets (Corral, Corral-100, XOR-100, Parity3+3, Led-25, Led-100, Monk3, SD1, SD2, SD3, Madelon).
Feature selection status: Filter, embedded and wrapper methods are used, and the features are ranked by a ranker method.
Algorithm: Filter methods; embedded methods; wrapper methods.

6. Filters, wrappers and a boosting-based hybrid for feature selection
Problem: Both the filter and the wrapper method have drawbacks when retrieving the significant data from a large dataset.
Dataset: Multi-class datasets (Vote, Chess, Mushroom, DNA, Lymphography, Ads).
Feature selection status: Using BBHFS (a hybrid algorithm), both methods can be merged to get the relevant data.
Algorithm: Hybrid algorithm; Naive Bayes (NB); ID3 with χ2 pruning; k-Nearest Neighbors (kNN).
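Entry 4's FAST algorithm clusters features with a minimum spanning tree and keeps one representative per cluster. The sketch below is only a loose, hypothetical analogue of that idea: it uses absolute Pearson correlation in place of the symmetric uncertainty measure used by FAST, builds an MST over the feature graph, removes weak edges, and keeps from each resulting cluster the feature most correlated with the class.

# Loose analogue of MST-based feature clustering (not the actual FAST algorithm):
# absolute correlation stands in for symmetric uncertainty.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=15, n_informative=4,
                           random_state=3)
d = X.shape[1]

corr = np.abs(np.corrcoef(X, rowvar=False))          # feature-feature similarity
dist = 1.0 - corr                                     # turn similarity into a distance
np.fill_diagonal(dist, 0.0)

mst = minimum_spanning_tree(dist).toarray()           # MST over the feature graph
mst[mst > 0.8] = 0.0                                  # drop long edges (weakly related features)
n_clusters, labels = connected_components(mst + mst.T, directed=False)

# Keep, from each feature cluster, the feature most correlated with the class.
relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(d)])
representatives = [int(np.argmax(np.where(labels == c, relevance, -1.0)))
                   for c in range(n_clusters)]
print("feature clusters:", labels.tolist())
print("selected representatives:", representatives)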

7. Feature subset selection and ranking for multivariate time series
Problem: Feature selection is not specified with a ranking, nor with the time needed to get the data from it.
Dataset: HumanGait dataset; Brain Computer Interface (BCI) dataset from the Max Planck Institute (MPI); Brain and Behavior Correlates of Arm Rehabilitation (BCAR) kinematics dataset.
Feature selection status: Using several methods and algorithms, the data can be given a ranking and the time can also be calculated.
Algorithm: PC and DCPC; CLeVer-Rank; CLeVer-Cluster; CLeVer-Hybrid.

CONCLUSION:
This literature review describes distinct types of existing feature selection techniques. Feature selection is one of the important processes in data mining: it is used to reduce the irrelevant data and to discover the significant data in a dataset. These data mining techniques can help improve classifier accuracy, and they can also be applied in other areas such as the medical field. From the study of these distinct data mining techniques it can be concluded that there is a need for a novel method to handle insignificant features in high-dimensional datasets.

REFERENCES
1) A. Ferreira and M. Figueiredo, "Efficient feature selection filters for high-dimensional data," Pattern Recognit. Lett., vol. 33, no. 13, pp. 1794-1804, 2012.
2) H.-H. Hsu, C.-W. Hsieh, and M.-D. Lu, "Hybrid feature selection by combining filters and wrappers," Expert Syst. Appl., vol. 38, no. 7, pp. 8144-8150, 2011.
3) M. Shah, M. Marchand, and J. Corbeil, "Feature selection with conjunctions of decision stumps and learning from microarray data," IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 1, pp. 174-186, Jan. 2012.
4) Qinbao Song, Jingjie Ni, and Guangtao Wang, "A fast clustering-based feature subset selection algorithm for high dimensional data," IEEE Trans. Knowledge and Data Eng., vol. 25, no. 1, 2013.
5) Verónica Bolón-Canedo, Noelia Sánchez-Maroño, and Amparo Alonso-Betanzos, "A review of feature selection methods on synthetic data," Knowl. Inf. Syst., 34:483-519, 2013.
6) S. Das, "Filters, wrappers and a boosting-based hybrid for feature selection," in Proc. 18th Int. Conf. Mach. Learn., 2001, pp. 74-81.

7) Hyunjin Yoon, Kiyoung Yang, and Cyrus Shahabi, "Feature subset selection and ranking for multivariate time series," IEEE Trans. Knowledge and Data Eng., vol. 17, no. 9, Sept. 2005.
8) Liu, H., Dougherty, E. R., Dy, J. G., Torkkola, K., Tuv, E., Peng, H., et al. (2005). Evolving feature selection. IEEE Intelligent Systems, 20(6), 64-76.
9) Kohavi, R., & John, G. (1997). Wrappers for feature subset selection. Artificial Intelligence, 97, 273-324.
10) Linding, R., Jensen, L. J., Diella, F., Bork, P., Gibson, T. J., & Russell, R. B. (2003). Protein disorder prediction: Implications for structural proteomics. Structure, 11(11), 1453-1459.
11) Guyon, I., Weston, J., Barnhill, S., & Vapnik, V. (2002). Gene selection for cancer classification using support vector machines. Machine Learning, 46(1-3), 389-422.
12) L. Wang, F. Chu, and W. Xie, "Accurate cancer classification using expressions of very few genes," IEEE/ACM Trans. Computational Biology and Bioinformatics, vol. 4, no. 1, pp. 40-53, Jan.-Mar. 2007.
13) T. S. Furey, N. Cristianini, N. Duffy, D. W. Bednarski, M. Schummer, and D. Haussler, "Support vector machine classification and validation of cancer tissue samples using microarray expression data," Bioinformatics, vol. 16, pp. 906-914, 2000.
14) I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, "Gene selection for cancer classification using support vector machines," Machine Learning, vol. 46, pp. 389-422, 2002.
15) Liu, H., Motoda, H., and Yu, L., "Selective sampling approach to active feature selection," Artif. Intell., 159(1-2), pp. 49-74, 2004.
16) Molina, L. C., Belanche, L., and Nebot, A., "Feature selection algorithms: A survey and experimental evaluation," in Proc. IEEE Int. Conf. Data Mining, pp. 306-313, 2002.
17) Guyon, I. and Elisseeff, A., "An introduction to variable and feature selection," Journal of Machine Learning Research, 3, pp. 1157-1182, 2003.
18) Ng, A. Y., "On feature selection: learning with exponentially many irrelevant features as training examples," in Proceedings of the Fifteenth International Conference on Machine Learning, pp. 404-412, 1998.
19) Das, S., "Filters, wrappers and a boosting-based hybrid for feature selection," in Proceedings of the Eighteenth International Conference on Machine Learning, pp. 74-81, 2001.
20) Xing, E., Jordan, M., and Karp, R., "Feature selection for high-dimensional genomic microarray data," in Proceedings of the Eighteenth International Conference on Machine Learning, pp. 601-608, 2001.
21) Souza, J., "Feature selection with a general hybrid algorithm," Ph.D. thesis, University of Ottawa, Ottawa, Ontario, Canada, 2004.
22) Yu, J., Abidi, S. S. R., and Artes, P. H., "A hybrid feature selection strategy for image defining features: towards interpretation of optic nerve images," in Proceedings of the 2005 International Conference on Machine Learning and Cybernetics, 8, pp. 5127-5132, 2005.
23) Yang, Y. and Pedersen, J. O. (2003). A comparative study on feature selection in text categorization. In: Proceedings of the 20th International Conference on Machine Learning, pp. 856-863.
24) Yu, L. and Liu, H. (2004). Efficient feature selection via analysis of relevance and redundancy. J. Mach. Learn. Res., 5:1205-1224.
25) Provost, F. (2000). Distributed data mining: scaling up and beyond. In: Kargupta, H. and Chan, P. (eds) Advances in Distributed Data Mining. Morgan Kaufmann, San Francisco.
26) Guyon, I. and Elisseeff, A. (2003). An introduction to variable and feature selection. J. Mach. Learn. Res., 3:1157-1182.
27) Guyon, I., Gunn, S., Nikravesh, M., and Zadeh, L. (2006). Feature Extraction: Foundations and Applications. Springer, Heidelberg.
28) John, G. H., Kohavi, R., and Pfleger, K. (1994). Irrelevant features and the subset selection problem. In Proceedings of ICML-94.
29) Hall, M. A. (1999). Correlation-based feature selection for machine learning. Doctoral dissertation, The University of Waikato, Department of Computer Science.
30) H. Liu, L. Yu, M. Dash, and H. Motoda, "Active feature selection using classes," in Proc. Pacific-Asia Conf. Knowledge Discovery and Data Mining, 2003.
31) A. Tucker, S. Swift, and X. Liu, "Variable grouping in multivariate time series via correlation," IEEE Trans. Systems, Man, and Cybernetics B, vol. 31, no. 2, 2001.