Epitopes Toolkit (EpiT)
Yasser EL-Manzalawy
http://www.cs.iastate.edu/~yasser
August 30, 2016


What is EpiT?

Epitopes Toolkit (EpiT) is a platform for developing epitope prediction tools. An EpiT developer can distribute a predictor as a serialized Java object (a model file). Other EpiT users can then run the predictor on their own machines, rebuild it on other datasets, or combine it with other predictors to obtain a customized hybrid or consensus predictor.

Overview of EpiT

EpiT has two main components:

i. Model builder, an application for building and evaluating epitope predictors and serializing these models in a binary format (model files).
ii. Predictor, an application for applying a model to test data (e.g., a set of epitopes or protein sequences).

Model builder

The model builder application is an extension of Weka [1], a well-known machine learning workbench supporting many standard machine learning algorithms. Weka provides tools for data pre-processing, classification, regression, clustering, validation, and visualization, as well as a framework for implementing new machine learning methods and data pre-processors. The model builder in EpiT offers the following extensions to Weka:

i) A suite of data pre-processors (called filters in Weka) for converting epitope sequences into vectors of numerical features so that Weka-supported methods can be applied to the data. The current implementation supports filters for converting epitope sequences into amino acid compositions, dipeptide compositions, amino acid pair propensities [2], composition-transition-distribution (CTD) [3,4], and nominal attributes. Once epitope sequences have been converted into numeric or nominal features, any suitable Weka learner can be trained and evaluated on that dataset.

ii) A

number of methods that can be trained and evaluated directly (without applying any filters) for qualitative and quantitative epitope prediction. The current implementation of EpiT provides classifiers for propensity scale methods (e.g., Parker's hydrophilicity scale [5]), position-specific scoring matrix (PSSM) [6], and a method for predicting MHC class II binding affinity using multiple-instance regression [7]. In addition, it provides a meta-classifier for building a consensus predictor that combines a group of predictors, and a meta-classifier for building epitope predictors from highly unbalanced training datasets by randomly under-sampling instances from the majority class. More information about these extensions is provided in the EpiT API documentation.

Predictor

The Predictor is a graphical user interface (GUI) for applying a model to a test dataset. Specifically, the user inputs the model file, the test data file, the output file name, the format of the test data (set of epitopes or FASTA sequences), the type of the problem (peptide-based or residue-based) [8], and the length of the peptide/window sequence. The output of the Predictor is a summary of the input model (model name, model parameters, and the name of the dataset used to build the model) followed by the predictions. The predictions are four tab-separated columns: the first column is the epitope/antigen identifier; the second and third columns are the position and sequence of the predicted peptide/residue; the last column is the predicted score.

Installing and using EpiT

EpiT is platform-independent since it is implemented in Java. To install EpiT, download it from the project web site and unzip the compressed file. To run EpiT, add all the jar files included in the lib folder to the CLASSPATH and run the epit.jar file (see RunEpiT.bat as an example).
The following command sets the CLASSPATH and runs EpiT (note the Windows-style ";" classpath separators; on Linux/macOS use ":" instead):

java -Xmx512m -classpath "./epit.jar;./lib/weka.jar;./lib/readseq.jar;./lib/swing-layout-1.0.3.jar;./lib/swing-worker-1.2.jar;." epit.gui.maingui
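To make the role of the sequence filters described above concrete, the following is a minimal, hypothetical sketch of what an amino-acid-composition filter computes; the class and method names are illustrative and are not EpiT's actual filter API.

```java
// Hypothetical standalone sketch of an amino-acid-composition filter;
// EpiT's actual epit.filters.* implementation may differ.
public class CompositionDemo {
    static final String ALPHABET = "ACDEFGHIKLMNPQRSTVWY"; // 20 standard amino acids

    // Map a peptide to a 20-dimensional vector of amino acid frequencies.
    static double[] composition(String peptide) {
        double[] v = new double[ALPHABET.length()];
        for (char c : peptide.toCharArray()) {
            int i = ALPHABET.indexOf(Character.toUpperCase(c));
            if (i >= 0) v[i] += 1.0;
        }
        for (int i = 0; i < v.length; i++) v[i] /= peptide.length();
        return v;
    }

    public static void main(String[] args) {
        double[] v = composition("AACD");
        System.out.printf("A=%.2f C=%.2f D=%.2f%n", v[0], v[1], v[2]); // A=0.50 C=0.25 D=0.25
    }
}
```

Once sequences are mapped to fixed-length numeric vectors like this, any standard Weka learner can consume them.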

Example 1: Predicting linear B-cell epitopes using the FBCPred model

FBCPred [9] is a method for predicting flexible-length linear B-cell epitopes using the subsequence kernel. An implementation of this method is available on the BCPREDS web server; however, users are restricted to submitting one protein sequence at a time. In this example, we demonstrate how to use the Predictor application in EpiT and the FBCPred model file provided in the Examples folder to predict potential linear B-cell epitopes.

1. Run EpiT.
2. Go to the Application menu and select the Predictor application.
3. Press the Model button to open a file dialog and use it to select ./examples/models/fbcpred.model.
4. Press the Test button to open a file dialog and use it to select the file containing the test sequences in FASTA format, ./examples/data/test.fasta.txt.
5. Press the Output button to open a save file dialog and use it to specify the path and name of the file that the predictions will be written to (e.g., ./examples/fbcpred.test.out.txt).
6. Set the peptide length to 14 (the default value for the FBCPred method).
7. Press the Predict button to get the predictions (see Figure 1).
8. Change the test file to ./examples/data/abcpred.blind.txt. This is the blind test set published by Saha et al. [10].
9. Set the output file to ./examples/data/fbcpred.abcpred.out.txt.
10. Change the input format to epitopes list. Note that the peptide length will change to -1. This means that full-length test epitopes will be fed to the model for prediction, without applying a sliding window to fix the length of the test peptides submitted to the classifier.
11. Press the Predict button to get the predictions (see Figure 2).
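Steps 6 and 10 differ in how test sequences reach the classifier. The peptide-length setting behaves conceptually like the following hypothetical sketch (names are illustrative, not EpiT code):

```java
// Illustrative sketch of how a sliding window turns an antigen sequence
// into fixed-length peptides for a classifier, and how -1 disables it.
import java.util.ArrayList;
import java.util.List;

public class SlidingWindowDemo {
    // peptideLength > 0: extract every overlapping window of that length.
    // peptideLength == -1: pass the full-length epitope through unchanged.
    static List<String> windows(String sequence, int peptideLength) {
        List<String> out = new ArrayList<>();
        if (peptideLength == -1) {
            out.add(sequence);
            return out;
        }
        for (int i = 0; i + peptideLength <= sequence.length(); i++)
            out.add(sequence.substring(i, i + peptideLength));
        return out;
    }

    public static void main(String[] args) {
        System.out.println(windows("MKTAYIAKQR", 8));  // three overlapping 8-mers
        System.out.println(windows("MKTAYIAKQR", -1)); // one full-length epitope
    }
}
```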

Figure 1: Output predictions of applying the FBCPred model to antigen sequences in test.fasta.txt.

Figure 2: Output predictions of applying the FBCPred model to the ABCPred blind test set in abcpred.blind.txt.

Example 2: Developing a Position-Specific Scoring Matrix (PSSM) for predicting 20-mer linear B-cell peptides

1- Run EpiT.
2- Go to the Application menu and select the Model builder application. A modified version of the Weka Explorer will be displayed.
3- Press the open file button and use the open file dialog to open ./examples/data/bcpred20.nr80.arff. This is the dataset that was used to develop the 20-mer peptide classifier for the BCPred method [11]. Each instance is 20 residues in length and is associated with a binary label indicating whether the corresponding peptide is a linear B-cell epitope or not. Figure 3 provides some useful information about this dataset.
4- Click the Classify tab.
5- Click the Choose button to select the classification method and select epit.classifiers.matrix.pssmclassifier (see Figure 4).
6- Click the Start button to begin a 10-fold cross-validation test to evaluate the PSSM classifier on the BCPred 20-mer dataset. At the end, the program will output the PSSM matrix constructed using the entire training dataset, along with several performance metrics obtained from the cross-validation test. For more details, please see the Weka Explorer tutorial available at:
7- In the result panel, right-click on the classifier name, select Save model from the popup menu, and save the model as ./examples/models/pssm.model (see Figure 5).

Figure 3: EpiT model builder, an extended version of the Weka GUI Explorer.

Figure 4: Selecting the PSSM classifier.

Figure 5: Saving the PSSM model.

Note that the default setting for the PSSM method is to use the negative data for estimating background probabilities. Alternatively, one can disable this option and assume uniform background probabilities. The performance of the PSSM model in that case is lower than that obtained using the negative training data to estimate the background probabilities (see Figure 6).
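The role of the background distribution can be illustrated with a minimal, hypothetical PSSM sketch; this is a standalone illustration of the general log-odds idea, not EpiT's PSSMClassifier implementation.

```java
// Hypothetical sketch of PSSM scoring with a configurable background
// distribution (estimated from negatives, or uniform 1/20 per amino acid).
public class PssmDemo {
    static final String AA = "ACDEFGHIKLMNPQRSTVWY";

    // Build a position-specific log-odds matrix from equal-length positive peptides.
    static double[][] build(String[] positives, double[] background) {
        int len = positives[0].length();
        double[][] pssm = new double[len][AA.length()];
        for (int pos = 0; pos < len; pos++) {
            for (int a = 0; a < AA.length(); a++) {
                int count = 1; // add-one pseudocount
                for (String s : positives)
                    if (s.charAt(pos) == AA.charAt(a)) count++;
                double p = (double) count / (positives.length + AA.length());
                pssm[pos][a] = Math.log(p / background[a]) / Math.log(2.0); // log2 odds
            }
        }
        return pssm;
    }

    // A peptide's score is the sum of its per-position log-odds entries.
    static double score(String peptide, double[][] pssm) {
        double s = 0.0;
        for (int pos = 0; pos < peptide.length(); pos++)
            s += pssm[pos][AA.indexOf(peptide.charAt(pos))];
        return s;
    }

    public static void main(String[] args) {
        double[] uniform = new double[20];
        java.util.Arrays.fill(uniform, 1.0 / 20.0);
        double[][] pssm = build(new String[] {"AC", "AC", "AD"}, uniform);
        System.out.println(score("AC", pssm) > score("CA", pssm)); // true
    }
}
```

Replacing the uniform `background` array with frequencies counted from negative peptides is what the default PSSM setting in EpiT corresponds to conceptually.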

Figure 6: The poorer performance of the PSSM model built using positive information only, assuming uniform background probabilities.

Example 3: Developing a propensity scale based method for predicting linear B-cell epitopes

1- Run EpiT.
2- Go to the Application menu and select the Model builder application. A modified version of the Weka Explorer will be displayed.
3- Press the open file button and use the open file dialog to open ./examples/data/bcpred20.nr80.arff.
4- Click the Classify tab.
5- Click the Choose button to select the classification method and select epit.classifiers.propensity.propensityscale. The default parameter settings for this method are: the standard 20 amino acid alphabet, Parker's hydrophilicity scale, and window size = -1.
6- Click the Start button to begin a 10-fold cross-validation test to evaluate the propensity scale classifier on the BCPred 20-mer dataset.

7- In the result panel, right-click on the classifier name, select Save model from the popup menu, and save the model as ./examples/models/parker.model.

It should be mentioned that the EpiT distribution includes 544 amino acid propensity scales extracted from AAIndex. Any of these scales can be used with the PropensityScale classifier instead of the default Parker's hydrophilicity scale.

Example 4: Peptide-based and residue-based linear B-cell epitope prediction using Parker's propensity scale

1. Run EpiT.
2. Go to the Application menu and select the Predictor application.
3. Press the Model button to open a file dialog and use it to select ./examples/models/parker.model.
4. Press the Test button to open a file dialog and use it to select the file containing the test sequences in FASTA format, ./examples/data/test.fasta.txt.
5. Press the Output button to open a save file dialog and use it to specify the path and name of the file that the predictions will be written to (e.g., ./examples/parker.test.peptide.out.txt).
6. Set the peptide length to 14. Note that setting the window size to -1 when building parker.model allows us to evaluate it using any peptide/window length. Otherwise, we would have to use the exact size specified during the training of the model.
7. Press the Predict button to get predictions for each 14-mer peptide in the test sequences.
8. Change the instance type to residue-based.
9. Set the window length to 7 (it has to be an odd number).
10. Set the output file to parker.test.residue.out.txt.
11. Press the Predict button to get prediction scores for each residue in the test sequences (see Figure 7).
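Residue-based scoring with an odd window length can be sketched as follows; the scale values and names here are invented for illustration (EpiT ships 544 real AAIndex scales), and this is not EpiT's PropensityScale code.

```java
// Illustrative sketch of residue-based propensity scoring: each residue's
// score is the mean scale value over a centered, odd-length window.
public class ResidueScoreDemo {
    // Hypothetical scale values for illustration only.
    static double scaleValue(char aa) {
        return aa == 'K' ? 3.0 : aa == 'L' ? -1.8 : 0.0;
    }

    static double[] residueScores(String seq, int window) {
        int half = window / 2; // window must be odd so it centers on the residue
        double[] scores = new double[seq.length()];
        for (int i = 0; i < seq.length(); i++) {
            double sum = 0.0;
            int n = 0;
            for (int j = Math.max(0, i - half); j <= Math.min(seq.length() - 1, i + half); j++) {
                sum += scaleValue(seq.charAt(j));
                n++;
            }
            scores[i] = sum / n; // average propensity around residue i
        }
        return scores;
    }

    public static void main(String[] args) {
        double[] s = residueScores("KKKLLL", 3);
        System.out.println(s[1] > s[4]); // the K-rich (hydrophilic) region scores higher
    }
}
```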

Figure 7: Residue-based classification using parker.model.

Example 5: Developing a Naïve Bayes classifier for predicting linear B-cell epitopes using amino acid composition information

Because the majority of the algorithms implemented in Weka, including the Naïve Bayes classifier, are not applicable to datasets with string attributes, EpiT provides a set of filters for converting epitope sequences into feature vectors.

1- Run EpiT.
2- Go to the Application menu and select the Model builder application. A modified version of the Weka Explorer will be displayed.
3- Press the open file button and use the open file dialog to open ./examples/data/bcpred20.nr80.arff.
4- Click the Classify tab.
5- Click the Choose button to select the classification method and select weka.classifiers.meta.FilteredClassifier.

6- Left-click on the classifier name to edit the FilteredClassifier properties. Set the classifier to weka.classifiers.bayes.NaiveBayes. Set the filter to epit.filters.unsupervised.attribute.sequencecomposition. Click OK to close the properties window.
7- Click the Start button to begin a 10-fold cross-validation test to evaluate the model on the BCPred 20-mer dataset.
8- In the result panel, right-click on the classifier name, select Save model from the popup menu, and save the model as ./examples/models/nbac.model.

Example 6: Developing a consensus predictor for predicting flexible-length linear B-cell epitopes

Assume that we have several models for predicting flexible-length linear B-cell epitopes, and our goal is to combine the predictions of these models into a consensus prediction. In general, we expect a consensus method combining several methods to outperform any individual method. There are two ways of obtaining consensus predictions. First, one can use the Predictor application to apply each individual model to the test data, then combine the output predictions into a consensus prediction (e.g., by importing the predictions into an Excel sheet and combining them, or by writing a simple script). Second, one can use the weka.classifiers.meta.Vote classifier and epit.classifiers.meta.modelbased to build a consensus predictor and use the Predictor application to apply this consensus predictor to the test data.

1- Run EpiT.
2- Go to the Application menu and select the Model builder application. A modified version of the Weka Explorer will be displayed.
3- Press the open file button and use the open file dialog to open ./examples/data/bcpred20.nr80.arff.
4- Click the Classify tab.
5- Click the Choose button to select the classification method and select weka.classifiers.meta.Vote.
6- Left-click on the classifier name to edit the Vote classifier properties.
For the classifiers property, add two epit.classifiers.meta.modelbased classifiers and set their ModelFile properties to ./examples/models/fbcpred.model and ./examples/models/parker.model, respectively.

7- Select Use training set as the test option and click the Start button to begin evaluating the consensus model on the BCPred 20-mer dataset. It should be noted that fbcpred.model was built using the FBCPred dataset, while in this example the consensus model is evaluated on the BCPred 20-mer dataset. Because both datasets were extracted from the BciPep database, the reported performance is expected to be overoptimistic. If your goal is to evaluate a consensus model combining FBCPred and Parker's hydrophilicity scale, then you should use Vote to combine an SMO classifier with the subsequence kernel (the FBCPred method) and a PropensityScale classifier.
8- In the result panel, right-click on the classifier name, select Save model from the popup menu, and save the model as ./examples/models/consensus.model.

Figure 8: Setting the properties of the Vote classifier.
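The consensus idea behind Vote can be sketched in a few lines; this hypothetical standalone code illustrates the averaging rule conceptually and is not the Weka or EpiT implementation.

```java
// Sketch of consensus prediction by averaging model scores, in the spirit of
// Weka's Vote meta-classifier with the "average of probabilities" rule.
public class ConsensusDemo {
    interface Model { double score(String peptide); } // stand-in for a serialized model

    // Average the scores of the individual models.
    static double consensus(Model[] models, String peptide) {
        double sum = 0.0;
        for (Model m : models) sum += m.score(peptide);
        return sum / models.length;
    }

    public static void main(String[] args) {
        Model a = p -> 0.9; // e.g., an FBCPred-like model's score
        Model b = p -> 0.5; // e.g., a propensity-scale model's score
        System.out.println(consensus(new Model[] {a, b}, "ACDEF")); // 0.7
    }
}
```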

Example 7: Using EpiT to build a hybrid predictor

Briefly, you can follow the approach described in Example 6 to use any Weka meta-classifier to build a hybrid model combining several existing models (each model encapsulated in a ModelBased classifier), or to build and evaluate a hybrid model combining several prediction methods.

Example 8: Using EpiT to build semi-supervised predictors

Semi-supervised learning offers a powerful approach for leveraging (often large amounts of) unlabeled data U together with modest amounts of labeled data L to train predictive models that are more accurate than those that could be trained using only the available labeled data. In this example, we show how to use the semi-supervised self-training algorithm to build a linear B-cell epitope prediction model that outperforms its supervised counterpart. We also demonstrate how to use potentially labeled data (e.g., expert-annotated data with no experimental validation) to further improve the performance of self-training semi-supervised predictors. More details about these two algorithms are provided in [12].

1- Run EpiT.
2- Go to the Application menu and select the Model builder application. A modified version of the Weka Explorer will be displayed.
3- Press the open file button and use the open file dialog to open ./examples/data/ssl/BCPred16.nr80-L.arff. This is the labeled dataset for predicting linear B-cell epitopes.
4- Click the Classify tab.
5- Click the Choose button to select the classification method and select epit.classifiers.ssl.selftrain.
6- Click the classifier panel to set the parameters of the self-training classifier as shown in Figure 9. Briefly, set the baseClassifier and finalClassifier to weka.classifiers.trees.RandomForest and set unlabeledData to the full path of the file ./examples/data/ssl/BCPred16.nr80-U.arff. Then click OK.
7- In the Test options panel, select Supplied test set and set the test set to ./examples/data/ssl/BCPred16.nr80-U.arff.
8- Click the Start button to train a semi-supervised model using the labeled data, BCPred16.nr80-L.arff, and the unlabeled data, BCPred16.nr80-U.arff. The learned model will then be evaluated using the unlabeled data, BCPred16.nr80-U.arff, and the evaluation performance will be reported (see Figure 10).

Figure 9: Setting the parameters of the SelfTrain classifier.

Figure 10: Performance of the SelfTrain classifier trained using labeled and unlabeled data.

In cases where potentially labeled data is available, the SelfTrain algorithm can be set to leverage it to improve its predictive performance. To build self-training classifiers using labeled, unlabeled, and potentially labeled data, follow the preceding procedure, but in step 6 provide the full paths for both the unlabeled and the potentially labeled data (see Figure 11). The improved result is shown in Figure 12.
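The self-training loop itself is simple: train on the labeled data, pseudo-label the unlabeled pool, and retrain on the enlarged set. The following hypothetical sketch shows the loop on a toy 1-D threshold classifier; it is not EpiT's SelfTrain code, and a real implementation would also keep only high-confidence pseudo-labels, which is omitted here for brevity.

```java
// Hypothetical sketch of the self-training loop on a toy 1-D classifier.
import java.util.ArrayList;
import java.util.List;

public class SelfTrainDemo {
    // Toy base learner: threshold halfway between the two class means.
    // Each labeled example is {x, label} with label 0.0 or 1.0.
    static double fitThreshold(List<double[]> labeled) {
        double sumPos = 0, sumNeg = 0;
        int nPos = 0, nNeg = 0;
        for (double[] ex : labeled) {
            if (ex[1] > 0.5) { sumPos += ex[0]; nPos++; } else { sumNeg += ex[0]; nNeg++; }
        }
        return (sumPos / nPos + sumNeg / nNeg) / 2.0;
    }

    static double selfTrain(List<double[]> labeled, List<Double> unlabeled, int rounds) {
        List<double[]> train = new ArrayList<>(labeled);
        double t = fitThreshold(train);                // train on labeled data only
        for (int r = 0; r < rounds; r++) {
            for (double x : unlabeled)                 // pseudo-label the unlabeled pool
                train.add(new double[] {x, x > t ? 1.0 : 0.0});
            t = fitThreshold(train);                   // retrain on the enlarged set
        }
        return t;
    }

    public static void main(String[] args) {
        List<double[]> labeled = new ArrayList<>();
        labeled.add(new double[] {0.0, 0.0});
        labeled.add(new double[] {4.0, 1.0});
        List<Double> unlabeled = List.of(0.5, 1.0, 3.0, 3.5);
        System.out.println(selfTrain(labeled, unlabeled, 1)); // learned threshold
    }
}
```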

Figure 11: Updating the parameters of the SelfTrain classifier to use potentially labeled data.

Figure 12: Performance of the SelfTrain classifier trained using labeled, unlabeled, and potentially labeled data.

Updating an existing model

An interesting feature of EpiT is that it allows anyone to rebuild an existing model. Assume that you have augmented the FBCPred dataset with newly reported epitope data, and your goal is to rebuild your own FBCPred model with the modified dataset. Note that in Figure 1 the Predictor application reports the classification method and the parameters that were used to build the original FBCPred model. Therefore, to build your own updated FBCPred model, you can use this information together with the Model builder application to evaluate and build your own model.

Extending EpiT

EpiT is an open source project under the GNU General Public License (GPL). This ensures that anyone can freely extend or change this software, as long as the modified software is also licensed under the GNU GPL. We encourage bioinformatics developers to participate in EpiT by contributing new components (e.g., filters or machine learning methods), new epitope datasets in Weka-accepted formats, or new epitope prediction tools in the form of model files.

References

[1] Witten, I., Frank, E., 2005. Data mining: Practical machine learning tools and techniques, 2nd Edition. Morgan Kaufmann.
[2] Chen, J., Liu, H., Yang, J., Chou, K., 2007. Prediction of linear B-cell epitopes using amino acid pair antigenicity scale. Amino Acids 33, 423-428.
[3] Cui, J., Han, L., Lin, H., Tan, Z., Jiang, L., Cao, Z., Chen, Y., 2006. MHC-BPS: MHC binder prediction server for identifying peptides of flexible lengths from sequence-derived physicochemical properties. Immunogenetics 58, 607-613.
[4] EL-Manzalawy, Y., Dobbs, D., Honavar, V., 2008a. On evaluating MHC-II binding peptide prediction methods. PLoS ONE 3.
[5] Parker, J., Guo, D., Hodges, R., 1986. New hydrophilicity scale derived from high-performance liquid chromatography peptide retention data: correlation of predicted surface residues with antigenicity and X-ray-derived accessible sites. Biochemistry 25, 5425-5432.
[6] Henikoff, J., Henikoff, S., 1996. Using substitution probabilities to improve position-specific scoring matrices. Bioinformatics 12, 135-143.
[7] EL-Manzalawy, Y., Dobbs, D., Honavar, V., 2009. Predicting MHC-II binding affinity using multiple instance regression. Submitted to IEEE/ACM Trans Comput Biol Bioinform.

[8] EL-Manzalawy, Y., Dobbs, D., Honavar, V., 2008c. Predicting linear B-cell epitopes using evolutionary information. IEEE International Conference on Bioinformatics and Biomedicine.
[9] EL-Manzalawy, Y., Dobbs, D., Honavar, V., 2008b. Predicting flexible length linear B-cell epitopes. 7th International Conference on Computational Systems Bioinformatics, 121-131.
[10] Saha, S., Raghava, G., 2006. Prediction of continuous B-cell epitopes in an antigen using recurrent neural network. Proteins 65, 40-48.
[11] EL-Manzalawy, Y., Dobbs, D., Honavar, V., 2008d. Predicting linear B-cell epitopes using string kernels. J. Mol. Recognit. 21, 243-255.
[12] EL-Manzalawy, Y., Munoz, E., Lindner, S., Honavar, V., 2016. PlasmoSEP: Predicting surface exposed proteins on the malaria parasite using semi-supervised self-training and expert-annotated data. Submitted.