Voting Methods For Multiple Autonomous Agents
J.R. Parker
Laboratory for Computer Vision, Department of Computer Science
University of Calgary, Calgary, Alberta, Canada

Abstract

The use of many diverse algorithms simultaneously applied to a single problem improves the robustness of the solution. This is certainly true of handprinted character recognition. The problem addressed here is that of combining the results from many agents to give a single result that represents a synthesis of the component agents.

The recognition of hand printed characters has been the subject of a great deal of research due to the interesting nature of the problem, and to the enormous utility of a solution. The problem is a difficult one, at least partly because each individual writer produces a unique set of characters each time they write. To a computer the differences are large enough to pose a problem. Given the number and variety of methods offered for hand printed character recognition, it may very well be that there is no single method that can be called the best. Many diverse algorithms each have strengths and weaknesses, good ideas and bad. One way to take advantage of this variety is to apply many methods to the same recognition task, and have a scheme to merge the results; this should be successful over a wider range of inputs than would any individual method [PARK94]. In an ideal situation, the weaknesses more or less cancel out rather than reinforcing each other, giving high recognition rates under many sets of conditions. This situation occurs generally in artificial intelligence, where combining the results from multiple autonomous agents has many applications. Here a collection of voting schemes are compared and used to join the results from five hand printed digit recognition algorithms, although the method is general enough to be applied to a variety of multiple agent problems.
Each of the five methods has been used to classify a set of 1000 digits; the problem addressed here is that of computing a single classification in each case, given five individual classifications.

1 Merging multiple methods

In the general case a classifier can produce one of three kinds of classification. The simplest and probably the most common is a simple expression of the class determined for the data object. For a digit classification scheme this would mean that the classifier would simply state "this is a FIVE", for example; this will be called a type 1 response [XU92]. A classifier may also produce a ranking of the possible classes for a data object. In this case, the classifier may say: "this is most likely a FIVE, but could be a THREE, and is even less likely to be a FOUR". The number of classes ranked may well vary from case to case, and probabilities are not associated with the ranking. This will be called a type 2 response. Finally, a classifier may give a probability or other such confidence rating to each of the possible classes. This is the most specific case of all, since either a ranking or a classification can be produced from it. In this case, each possible digit would be given a confidence number which can be normalized to any specific range. This will be called a type 3 response.

Whatever the type of response, a multiple classifier must deal with three important problems:

1) The response of the multiple classifier must be the best one given the results of the individual classifiers. It should in some logical way represent the most likely true classification, even when presented with contradictory individual classifications.

2) The classifiers in the system may produce different types of response. These must be merged into a coherent single response.

3) The multiple classifier must yield the correct result more often than any of the individual classifiers, or there is no point.
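As an illustration (not taken from the paper), the three response types for a ten-class digit recognizer might be represented as follows; all class labels and confidence values here are invented, and the code only shows that a type 3 response subsumes the other two:

```python
# Invented example of the three response types for one digit image.
type1 = 5                              # type 1: a single class label
type2 = [5, 3, 4]                      # type 2: classes in rank order
type3 = {d: 0.0 for d in range(10)}    # type 3: a confidence per class
type3.update({5: 0.7, 3: 0.2, 4: 0.1})

# A type 3 response can be reduced to the other two:
derived_type2 = sorted(type3, key=type3.get, reverse=True)[:3]
derived_type1 = max(type3, key=type3.get)

assert derived_type1 == type1
assert derived_type2 == type2
```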
The first problem above has various potential solutions for each of the possible types of response, and these will be dealt with first.
2 Merging type 1 responses

Given that the output of each of the classifiers is a single, simple classification value, the obvious way to combine them is by using a voting strategy. A majority voting scheme can be expressed as follows: let C_i(x) be the result produced by classifier i for the digit image x, where there are k different classifiers in the system; then let H(x,d) be the number of classifiers giving a classification of d for the digit image x, where d is one of {0,1,2,3,4,5,6,7,8,9}. Then

    E(x) = j,   if max_i H(x,i) = H(x,j) and H(x,j) > k/2
         = 10,  otherwise

where the value 10 denotes rejection. This is called a simple majority vote (SMV). An easy generalization of this scheme replaces the constant k/2 in the above expression with k*α for 0 < α < 1 [XU92]. This permits a degree of flexibility in deciding what degree of majority will be sufficient, and will be called a weighted majority vote (WMV). This scheme can be expressed as:

    E(x) = j,   if max_i H(x,i) = H(x,j) and H(x,j) > α*k
         = 10,  otherwise

Neither of these takes into account the possibility that all of the dissenting classifiers agree with each other. Consider the following cases: in case A there are ten classifiers, with six of them supporting a classification of 6, one supporting 5, one supporting 2 and two classifiers rejecting the input digit. In case B, using the same ten classifiers, six of them support the classification 6 and the other four all agree that it is a 5. Do cases A and B both support a classification of 6, and do they do so equally strongly? One way to incorporate dissent into the decision is to let max1 be the number of classifiers that support the majority classification j (max1 = H(x,j)), and to let max2 be the number supporting the second most popular classification h (max2 = H(x,h)). The classification then becomes:

    E(x) = j,   if max_i H(x,i) = H(x,j) and max1 - max2 >= α*k
         = 10,  otherwise

where α is between 0.0 and 1.0. This is called a dissenting-weighted majority vote (DWMV).
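A minimal sketch of the three voting rules just defined, assuming each vote is a plain class label and using 10 as the rejection code, as in the text. This is an illustration, not the paper's implementation:

```python
# Sketch of the SMV, WMV, and DWMV rules for type 1 responses.
from collections import Counter

REJECT = 10  # the paper encodes "reject" as class 10

def smv(votes, k=None):
    """Simple majority vote: the winner needs more than k/2 votes."""
    k = k if k is not None else len(votes)
    cls, n = Counter(votes).most_common(1)[0]
    return cls if n > k / 2 else REJECT

def wmv(votes, alpha, k=None):
    """Weighted majority vote: the winner needs more than alpha*k votes."""
    k = k if k is not None else len(votes)
    cls, n = Counter(votes).most_common(1)[0]
    return cls if n > alpha * k else REJECT

def dwmv(votes, alpha, k=None):
    """Dissenting-weighted majority vote: the margin between the two
    most popular classes must be at least alpha*k."""
    k = k if k is not None else len(votes)
    counts = Counter(votes).most_common()
    max1 = counts[0][1]
    max2 = counts[1][1] if len(counts) > 1 else 0
    return counts[0][0] if max1 - max2 >= alpha * k else REJECT
```

On cases A and B above (with α = 0.3 and rejecting classifiers simply not voting), DWMV accepts the scattered dissent of case A but rejects the unified dissent of case B, which is exactly the distinction the rule was designed to make.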
For the five classifier system being discussed, the SMV strategy gave the following results:

    Correct: 994    Incorrect: 2    Rejected: 4

This is in spite of poor results (76% recognition) from the neural net classifier (#5) on nines; indeed, 100% of the nines are recognized by the multiple classifier. This begs the question: what is the contribution of any one classifier to the overall result? To determine this for the SMV case is simple. The multiple classifier can be run using any four of the five individual classifiers, and the results can be compared against the five classifier case above to determine whether the missing classifier assisted in the classification. (The table of Correct/Incorrect/Rejected counts for the All, Omit #1 through Omit #5 cases was lost in transcription.) In all but one case where a classifier is omitted the rejection rate increases, and sometimes the error rate increases as well. The decrease in the error rate that occurs when classifier 4 is omitted is evidence supporting its removal.

Evaluation of WMV is a little more difficult, requiring an assessment of the effect of the value of α on the results. A small program was written that varied α from 0.05 to 0.95, classifying all sample digits on each iteration. This process was then repeated five more times, omitting one of the classifiers each time to again test the relative effect of each classifier on the overall success. With this much data a numerical value is needed that can be used to assess the quality of the results. The recognition rate could be used alone, but this does not take into account that a rejection is much better than a misclassification; both would count against the recognition rate. A measure of reliability can be computed as:

    Reliability = Recognition / (100% - Rejection)

The reliability value will be high when few misclassifications occur. Unfortunately, it will also be high if recognition is only 50%, with the other 50% being rejections. This would not normally be thought of as acceptable performance. A good classifier will combine high reliability with a high recognition rate; in that case, why not simply use the product reliability*recognition as a measure of performance? In the 50/50 example above this measure would have the value 0.5: reliability is 100% (1.0) and recognition is 50% (0.5). In a case where the recognition rate was 50%, with 25% rejections and 25% misclassifications, this measure will have the value 0.333, indicating that the performance is not as good. The value reliability*recognition will be called acceptability.

The first thing that should be done is to determine which value of α gives the best results, and this is more accurately done when the data is presented in tabular form. (Table 1: Acceptability of the Multiple Classifier Using a Weighted Majority Vote, with columns Alpha Used, All 5, Omit #1 through Omit #5; the values were lost in transcription. Duplicate rows are not shown in the table.) From this information it can be concluded that α should be between 0.45 and 0.5, for in this range the acceptability peaks without causing a drop in recognition rate. It can also be seen that the omission of classifier #4 causes the smallest decrease in acceptability, but that it does appear to make a positive contribution to the overall system in the WMV system. DWMV also uses the α parameter, and can be evaluated in a fashion identical to what has just been done for WMV.
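The reliability and acceptability measures above reduce to two one-line functions. The sketch below expresses the rates as fractions in [0, 1] (so recognition + rejection + error = 1) and checks the two worked examples from the text:

```python
# Figures of merit for a classifier that can reject inputs.
def reliability(recognition, rejection):
    """Fraction correct among the inputs that were not rejected."""
    return recognition / (1.0 - rejection)

def acceptability(recognition, rejection):
    """Reliability weighted by the recognition rate itself."""
    return reliability(recognition, rejection) * recognition

# 50% recognized, 50% rejected: perfectly reliable but weak overall.
assert reliability(0.5, 0.5) == 1.0
assert acceptability(0.5, 0.5) == 0.5

# 50% recognized, 25% rejected, 25% wrong: worse, as the text argues.
assert round(acceptability(0.5, 0.25), 3) == 0.333
```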
The optimal value of α was obtained from Table 2 (the value itself was lost in transcription). The table also supports the removal of classifier #4, since the acceptability increases very slightly when that classifier is removed from the system. If the best overall multiple scheme also supports the removal of classifier #4 then it will be discarded. (Table 2: Acceptability of the Multiple Classifier Using a Dissenting Weighted Majority Vote, with columns Alpha, All 5, Omit #1 through Omit #5; the values were lost in transcription.)

3 Converting between response types

Before proceeding to analyze methods for merging type 2 responses (ranks) it would be appropriate to discuss means of converting one response type to another. In particular, not all of the classifiers yield a rank ordering, and this will be needed before merging the type 2 responses with those of types 1 and 3:
Type 3 to Type 1: Select the class having the maximum confidence rating as the response.

Type 3 to Type 2: Sort the confidence ratings in descending order. The corresponding classes are in rank order.

Type 2 to Type 1: Select the class having the highest rank as the type 1 response.

Converting a type 1 response to a type 3 cannot be done in a completely general and reliable fashion. However, an approximation can be had based on the measured past performance of the particular algorithm. Each row in the confusion matrix represents the classifications actually encountered for a particular digit with that classifier, expressed as a probability, and the columns represent the other classifications possible for a specified classification; this latter could be used as the confidence rating. The conversions from type 1 can be expressed as:

Type 1 to Type 3: Compute the confusion matrix K for the classifier. If the classification in this case is j, then first compute

    s = sum over i = 0..9 of K(i, j)

and then compute the type 3 response as a vector V, where

    V(i) = K(i, j) / s

Type 1 to Type 2: Convert from type 1 to type 3 as above, then convert to type 2 from type 3.

4 Merging type 2 responses

The problem encountered when attempting to merge type 2 responses is as follows: given M rankings, each having N choices, which choice has the largest degree of support? For example, consider the following 3 voter/4 choice problem [STRA80]:

    Voter 1: a b c d
    Voter 2: c a b d
    Voter 3: b d c a

This case has no majority winner; a, b and c each get one first place vote. Intuitively, it seems reasonable to use the second place votes in this case to see if the situation resolves itself. In this case b receives two second place votes to a's one, which would tend to support b as the overall choice. In the general case there are a number of techniques for merging rank-ordered votes, four of which will be discussed here. The Borda count [BORD81, BLAC58] is a well-known scheme for resolving this kind of situation.
Each alternative is given a number of points depending on where in the ranking it has been placed. A selection is given no points for placing last, one point for placing next to last, and so on up to N-1 points for placing first. In other words, the number of points given to a selection is the number of classes below it in the ranking. For the 3 voter/4 choice problem described above the situation is:

    Voter 1: a (3)  b (2)  c (1)  d (0)
    Voter 2: c (3)  a (2)  b (1)  d (0)
    Voter 3: b (3)  d (2)  c (1)  a (0)

where the points received by each selection appear in parentheses behind the choice. The overall winner is the choice receiving the largest total number of points:

    a = 5    b = 6    c = 5    d = 2

This gives choice b as the Borda winner. However, the Borda count does have a problem that might be considered serious. Consider the following 5 voter/3 choice problem:

    Voter 1: a b c
    Voter 2: a b c
    Voter 3: a b c
    Voter 4: b c a
    Voter 5: b c a

The Borda counts are a = 6, b = 7, c = 2, which selects b as the winner. However, a simple majority of the first place votes would have selected a! This violates the so-called majority criterion [STRA80]:

    If a majority of voters have an alternative X as their first choice, a voting rule should choose X.

This is a weaker version of the Condorcet Winner Criterion [COND85]:
    If there is an alternative X which could obtain a majority of votes in pair-wise contests against every other alternative, a voting rule should choose X as the winner.

This problem may have to be taken into account when assessing performance of the methods. A procedure suggested by Thomas Hare [STRA80] falls into the category of an elimination process. The idea is to repeatedly eliminate undesirable choices until a clear majority support one of the remaining choices. Hare's method is as follows: if a majority of the voters rank choice X in first place, then X is the winner; otherwise, the choice with the smallest number of first place votes is removed from consideration, and the first place votes are re-counted. This elimination process continues until a clear majority supports one of the choices. The Hare procedure satisfies the majority criterion, but fails the Condorcet winner criterion as well as the monotonicity criterion:

    If X is a winner under a voting rule, and one or more voters change their preferences in a way favorable to X without changing the order in which they prefer any other alternative, then X should still be the winner.

No rule that violates the monotonicity criterion will be considered as an option for the multiple classifier. This decision will eliminate the Hare procedure, but not the Borda count. With the monotonicity criterion in mind, two relatively simple rank merging strategies become interesting. The first is by Black [BLAC58], and chooses the winner by the Condorcet criterion if such a winner exists; if not, the Borda winner is chosen. This is appealing in its simplicity, and can be shown to be monotonic. Another strategy is the so-called Copeland rule [STRA80]: for each option compute the number of pair-wise wins of that option against all other options, and subtract from that the number of pair-wise losses. The overall winner is the class for which this difference is the greatest.
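The rank-merging rules discussed above can be sketched compactly. The following is a minimal illustration (not the paper's implementation) of the Borda count, a Condorcet-winner test, the Copeland rule, and Black's rule, with ballots represented as lists of choices, best first:

```python
# Minimal sketches of the Borda, Condorcet, Copeland, and Black rules.

def borda(ballots):
    """Points: N-1 for first place down to 0 for last. Returns (winner, scores)."""
    n = len(ballots[0])
    scores = {}
    for ballot in ballots:
        for place, choice in enumerate(ballot):
            scores[choice] = scores.get(choice, 0) + (n - 1 - place)
    return max(scores, key=scores.get), scores

def margin(ballots, x, y):
    """Net number of voters preferring x to y."""
    return sum(1 if b.index(x) < b.index(y) else -1 for b in ballots)

def condorcet_winner(ballots, choices):
    """The choice that beats every other pairwise, or None if there is none."""
    for x in choices:
        if all(margin(ballots, x, y) > 0 for y in choices if y != x):
            return x
    return None

def copeland(ballots, choices):
    """Pairwise wins minus pairwise losses; the largest difference wins."""
    def score(x):
        return sum((margin(ballots, x, y) > 0) - (margin(ballots, x, y) < 0)
                   for y in choices if y != x)
    return max(choices, key=score)

def black(ballots, choices):
    """Condorcet winner if one exists, otherwise the Borda winner."""
    w = condorcet_winner(ballots, choices)
    return w if w is not None else borda(ballots)[0]

# The 5 voter/3 choice example from the text: Borda picks b, but the
# Condorcet (and hence Black) winner is a, illustrating the
# majority-criterion violation described above.
ballots = [list('abc')] * 3 + [list('bca')] * 2
assert borda(ballots)[0] == 'b'
assert black(ballots, 'abc') == 'a'
```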
In theory this rule is superior to the others discussed so far, but it has a drawback in that it tends to produce a relatively large number of tie votes in general. The Borda, Black, and Copeland rules were implemented as described and applied to the five classifier problem, and the results are summarized in Table 3. All methods supported the removal of classifier #4. (Table 3: Results of the Voting Rules for Rank Ordering (Omit #4), with columns Rule, Recognition, Error, Rejection, Reliability, Acceptability for the Borda, Black, and Copeland rules; the values were lost in transcription.)

From this table it would appear that the Borda scheme is tied with Black, followed by Copeland. It is important to temper this view with the fact that this result was obtained from basically one observation. Confirmation would come from applying these schemes to a large number of sets of characters. Another consideration is that a voting scheme may err in favor of the correct classification when it should, in fact, be rejected. Upon careful analysis this was found to have happened for the Borda method applied to digit #267. (The individual rankings for this digit were lost in transcription.) The Borda count for the digit "one" is 27, and for the digit "two" is 37, giving a classification of two even though the majority winner and the Condorcet winner is one! Thus, the Black scheme classifies this digit (correctly according to the votes, in my opinion) as a one. Given this problem, and the fact that Black and Borda are otherwise equally acceptable, my conclusion is that the Black classifier is slightly superior to the others.

5 Merging type 3 responses

The five classifier system under discussion has no single classifier that gives a proper type 3 response, and only one that yields a reliable set of weights for each digit (#5, the neural net). Because of this, the problem of merging type 3 responses was not pursued with as much vigor as were the type 1 and 2 problems. Indeed, the solution may be quite simple. Suen [XU92] suggests that any set of type 3 classifiers can be combined using an averaging technique. That is,

    P_E(x in C_i | x) = (1/k) * sum over j = 1..k of P_j(x in C_i | x),   i = 1, ..., m

where P_E is the probability associated with a given classification for the multiple classifier, and P_j is the probability associated with a given classification for each individual classifier j. The overall classification is the class i for which P_E(x in C_i | x) is a maximum.

There is little actual type 3 data, but it could be approximated by using the a posteriori method described previously, where it is used to convert type 1 responses to type 3 responses. Using this approximate data set, the result obtained by merging type 3 responses using averaging is given by:

    Correct: 997    Incorrect: 3    Rejected: 0

(The acceptability value was lost in transcription.)

6 Results from the multiple classifier

Using the acceptability measure to assess each of the merging methods discussed, we need to look only at the best method in each of the three groups; that is, the best multiple type 1 classifier, the best type 2, and the best type 3. The best three are the SMV (type 1), Black (type 2), and Average (type 3) schemes; the acceptability values in the original comparison table were lost in transcription. From that comparison it can be seen that the best classifier uses the Black scheme for merging rank ordered responses, and omits classifier #4.

This work was supported by a grant from the Natural Sciences and Engineering Research Council of Canada.

7 References

[BLAC58] Black, D., The Theory of Committees and Elections, Cambridge University Press, 1958.
[BORD81] Borda, Jean-Charles de, Mémoire sur les Élections au Scrutin, Histoire de l'Académie Royale des Sciences, Paris, 1781.
[BRAM83] Brams, S.J. and Fishburn, P.C., Approval Voting, Birkhäuser, Boston, 1983.
[COND85] Condorcet, Marquis de, Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix, Paris, 1785.
[ENEL84] Enelow, J.M.
and Hinich, M.J., The Spatial Theory of Voting: An Introduction, Cambridge University Press, Cambridge, 1984.
[FARQ69] Farquharson, R., Theory of Voting, Yale University Press, New Haven, 1969.
[PARK94] Parker, J.R., Recognition of Hand Printed Digits Using Multiple/Parallel Methods, Third Golden West International Conference on Intelligent Systems, Las Vegas, June 6-9, 1994.
[HO94] Ho, T.K., Hull, J.J., and Srihari, S.N., Decision Combination in Multiple Classifier Systems, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 16, No. 1, January 1994.
[KIMU91] Kimura, F. and Shridhar, M., Handwritten Numeral Recognition Based on Multiple Algorithms, Pattern Recognition, Vol. 24, No. 10, 1991.
[STRA80] Straffin, P.D. Jr., Topics in the Theory of Voting, Birkhäuser, Boston, 1980.
[XU92] Xu, L., Krzyzak, A., and Suen, C.Y., Methods of Combining Multiple Classifiers and Their Application to Handwriting Recognition, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 22, No. 3, May/June 1992.
Handling Missing Attribute Values in Preterm Birth Data Sets Jerzy W. Grzymala-Busse 1, Linda K. Goodwin 2, Witold J. Grzymala-Busse 3, and Xinqun Zheng 4 1 Department of Electrical Engineering and Computer
More informationClustering and Visualisation of Data
Clustering and Visualisation of Data Hiroshi Shimodaira January-March 28 Cluster analysis aims to partition a data set into meaningful or useful groups, based on distances between data points. In some
More informationCollective Choice with Uncertain Domain Moldels Whitman Richards
Computer Science and Artificial Intelligence Laboratory Technical Report MIT-CSAIL-TR-2005-054 AIM-2005-024 August 16, 2005 Collective Choice with Uncertain Domain Moldels Whitman Richards massachusetts
More informationOn Error Correlation and Accuracy of Nearest Neighbor Ensemble Classifiers
On Error Correlation and Accuracy of Nearest Neighbor Ensemble Classifiers Carlotta Domeniconi and Bojun Yan Information and Software Engineering Department George Mason University carlotta@ise.gmu.edu
More informationLecture 3: Linear Classification
Lecture 3: Linear Classification Roger Grosse 1 Introduction Last week, we saw an example of a learning task called regression. There, the goal was to predict a scalar-valued target from a set of features.
More informationIterative Voting Rules
Noname manuscript No. (will be inserted by the editor) Iterative Voting Rules Meir Kalech 1, Sarit Kraus 2, Gal A. Kaminka 2, Claudia V. Goldman 3 1 Information Systems Engineering, Ben-Gurion University,
More informationVoting Methods and Colluding Voters
. Voting Methods and Colluding Voters Christopher Hanusa Binghamton University December 3, 2007 Outline Voting Methods Plurality/Majority and refinements Ranked Pairs Borda Count Let s vote! Mathematics
More informationCSEP 573: Artificial Intelligence
CSEP 573: Artificial Intelligence Machine Learning: Perceptron Ali Farhadi Many slides over the course adapted from Luke Zettlemoyer and Dan Klein. 1 Generative vs. Discriminative Generative classifiers:
More informationExploring Econometric Model Selection Using Sensitivity Analysis
Exploring Econometric Model Selection Using Sensitivity Analysis William Becker Paolo Paruolo Andrea Saltelli Nice, 2 nd July 2013 Outline What is the problem we are addressing? Past approaches Hoover
More informationExploring Similarity Measures for Biometric Databases
Exploring Similarity Measures for Biometric Databases Praveer Mansukhani, Venu Govindaraju Center for Unified Biometrics and Sensors (CUBS) University at Buffalo {pdm5, govind}@buffalo.edu Abstract. Currently
More informationOCR For Handwritten Marathi Script
International Journal of Scientific & Engineering Research Volume 3, Issue 8, August-2012 1 OCR For Handwritten Marathi Script Mrs.Vinaya. S. Tapkir 1, Mrs.Sushma.D.Shelke 2 1 Maharashtra Academy Of Engineering,
More informationSome Applications of Graph Bandwidth to Constraint Satisfaction Problems
Some Applications of Graph Bandwidth to Constraint Satisfaction Problems Ramin Zabih Computer Science Department Stanford University Stanford, California 94305 Abstract Bandwidth is a fundamental concept
More informationMore Learning. Ensembles Bayes Rule Neural Nets K-means Clustering EM Clustering WEKA
More Learning Ensembles Bayes Rule Neural Nets K-means Clustering EM Clustering WEKA 1 Ensembles An ensemble is a set of classifiers whose combined results give the final decision. test feature vector
More informationIndian Multi-Script Full Pin-code String Recognition for Postal Automation
2009 10th International Conference on Document Analysis and Recognition Indian Multi-Script Full Pin-code String Recognition for Postal Automation U. Pal 1, R. K. Roy 1, K. Roy 2 and F. Kimura 3 1 Computer
More informationPre-control and Some Simple Alternatives
Pre-control and Some Simple Alternatives Stefan H. Steiner Dept. of Statistics and Actuarial Sciences University of Waterloo Waterloo, N2L 3G1 Canada Pre-control, also called Stoplight control, is a quality
More informationEquation to LaTeX. Abhinav Rastogi, Sevy Harris. I. Introduction. Segmentation.
Equation to LaTeX Abhinav Rastogi, Sevy Harris {arastogi,sharris5}@stanford.edu I. Introduction Copying equations from a pdf file to a LaTeX document can be time consuming because there is no easy way
More informationFusion in Multibiometric Identification Systems: What about the Missing Data?
Fusion in Multibiometric Identification Systems: What about the Missing Data? Karthik Nandakumar 1, Anil K. Jain 2 and Arun Ross 3 1 Institute for Infocomm Research, A*STAR, Fusionopolis, Singapore, knandakumar@i2r.a-star.edu.sg
More informationVisual object classification by sparse convolutional neural networks
Visual object classification by sparse convolutional neural networks Alexander Gepperth 1 1- Ruhr-Universität Bochum - Institute for Neural Dynamics Universitätsstraße 150, 44801 Bochum - Germany Abstract.
More informationGesture Recognition using Neural Networks
Gesture Recognition using Neural Networks Jeremy Smith Department of Computer Science George Mason University Fairfax, VA Email: jsmitq@masonlive.gmu.edu ABSTRACT A gesture recognition method for body
More informationFace Detection. Gary Chern, Paul Gurney, and Jared Starman
Face Detection Gary Chern, Paul Gurney, and Jared Starman. Introduction Automatic face detection is a complex problem in image processing. Many methods exist to solve this problem such as template matching,
More informationStat 602X Exam 2 Spring 2011
Stat 60X Exam Spring 0 I have neither given nor received unauthorized assistance on this exam. Name Signed Date Name Printed . Below is a small p classification training set (for classes) displayed in
More informationCombining Biometric Scores in Identification Systems
1 Combining Biometric Scores in Identification Systems Sergey Tulyakov and Venu Govindaraju, Fellow, IEEE Both authors are with the State University of New York at Buffalo. 2 Abstract Combination approaches
More informationSlant normalization of handwritten numeral strings
Slant normalization of handwritten numeral strings Alceu de S. Britto Jr 1,4, Robert Sabourin 2, Edouard Lethelier 1, Flávio Bortolozzi 1, Ching Y. Suen 3 adesouza, sabourin@livia.etsmtl.ca suen@cenparmi.concordia.ca
More informationABJAD: AN OFF-LINE ARABIC HANDWRITTEN RECOGNITION SYSTEM
ABJAD: AN OFF-LINE ARABIC HANDWRITTEN RECOGNITION SYSTEM RAMZI AHMED HARATY and HICHAM EL-ZABADANI Lebanese American University P.O. Box 13-5053 Chouran Beirut, Lebanon 1102 2801 Phone: 961 1 867621 ext.
More information8. Clustering: Pattern Classification by Distance Functions
CEE 6: Digital Image Processing Topic : Clustering/Unsupervised Classification - W. Philpot, Cornell University, January 0. Clustering: Pattern Classification by Distance Functions The premise in clustering
More informationComputer Vision Group Prof. Daniel Cremers. 8. Boosting and Bagging
Prof. Daniel Cremers 8. Boosting and Bagging Repetition: Regression We start with a set of basis functions (x) =( 0 (x), 1(x),..., M 1(x)) x 2 í d The goal is to fit a model into the data y(x, w) =w T
More informationNeural Nets. CSCI 5582, Fall 2007
Neural Nets CSCI 5582, Fall 2007 Assignments For this week: Chapter 20, section 5 Problem Set 3 is due a week from today Neural Networks: Some First Concepts Each neural element is loosely based on the
More informationDETECTION OF SMOOTH TEXTURE IN FACIAL IMAGES FOR THE EVALUATION OF UNNATURAL CONTRAST ENHANCEMENT
DETECTION OF SMOOTH TEXTURE IN FACIAL IMAGES FOR THE EVALUATION OF UNNATURAL CONTRAST ENHANCEMENT 1 NUR HALILAH BINTI ISMAIL, 2 SOONG-DER CHEN 1, 2 Department of Graphics and Multimedia, College of Information
More informationCHAPTER 6 HYBRID AI BASED IMAGE CLASSIFICATION TECHNIQUES
CHAPTER 6 HYBRID AI BASED IMAGE CLASSIFICATION TECHNIQUES 6.1 INTRODUCTION The exploration of applications of ANN for image classification has yielded satisfactory results. But, the scope for improving
More informationA Model of Dynamic Visual Attention for Object Tracking in Natural Image Sequences
Published in Computational Methods in Neural Modeling. (In: Lecture Notes in Computer Science) 2686, vol. 1, 702-709, 2003 which should be used for any reference to this work 1 A Model of Dynamic Visual
More informationRadial Basis Function Neural Network Classifier
Recognition of Unconstrained Handwritten Numerals by a Radial Basis Function Neural Network Classifier Hwang, Young-Sup and Bang, Sung-Yang Department of Computer Science & Engineering Pohang University
More informationOptical Character Recognition (OCR) for Printed Devnagari Script Using Artificial Neural Network
International Journal of Computer Science & Communication Vol. 1, No. 1, January-June 2010, pp. 91-95 Optical Character Recognition (OCR) for Printed Devnagari Script Using Artificial Neural Network Raghuraj
More informationStatistical Good Practice Guidelines. 1. Introduction. Contents. SSC home Using Excel for Statistics - Tips and Warnings
Statistical Good Practice Guidelines SSC home Using Excel for Statistics - Tips and Warnings On-line version 2 - March 2001 This is one in a series of guides for research and support staff involved in
More informationDynamic Selection of Ensembles of Classifiers Using Contextual Information
Dynamic Selection of Ensembles of Classifiers Using Contextual Information Paulo R. Cavalin 1, Robert Sabourin 1, and Ching Y. Suen 2 1 École de Technologie Supérieure, 1100 Notre-dame ouest, Montreal(QC),
More informationAdaptive Feature Extraction with Haar-like Features for Visual Tracking
Adaptive Feature Extraction with Haar-like Features for Visual Tracking Seunghoon Park Adviser : Bohyung Han Pohang University of Science and Technology Department of Computer Science and Engineering pclove1@postech.ac.kr
More informationTopic 7 Machine learning
CSE 103: Probability and statistics Winter 2010 Topic 7 Machine learning 7.1 Nearest neighbor classification 7.1.1 Digit recognition Countless pieces of mail pass through the postal service daily. A key
More informationA novel firing rule for training Kohonen selforganising
A novel firing rule for training Kohonen selforganising maps D. T. Pham & A. B. Chan Manufacturing Engineering Centre, School of Engineering, University of Wales Cardiff, P.O. Box 688, Queen's Buildings,
More informationCPSC 532L Project Development and Axiomatization of a Ranking System
CPSC 532L Project Development and Axiomatization of a Ranking System Catherine Gamroth cgamroth@cs.ubc.ca Hammad Ali hammada@cs.ubc.ca April 22, 2009 Abstract Ranking systems are central to many internet
More informationTraining Digital Circuits with Hamming Clustering
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: FUNDAMENTAL THEORY AND APPLICATIONS, VOL. 47, NO. 4, APRIL 2000 513 Training Digital Circuits with Hamming Clustering Marco Muselli, Member, IEEE, and Diego
More informationChapter 8 Introduction to Statistical Quality Control, 6 th Edition by Douglas C. Montgomery. Copyright (c) 2009 John Wiley & Sons, Inc.
Chapter 8 Introduction to Statistical Quality Control, 6 th Edition by Douglas C. Montgomery. 1 Chapter 8 Introduction to Statistical Quality Control, 6 th Edition by Douglas C. Montgomery. 2 Learning
More informationAdaptive Metric Nearest Neighbor Classification
Adaptive Metric Nearest Neighbor Classification Carlotta Domeniconi Jing Peng Dimitrios Gunopulos Computer Science Department Computer Science Department Computer Science Department University of California
More informationInvariant Recognition of Hand-Drawn Pictograms Using HMMs with a Rotating Feature Extraction
Invariant Recognition of Hand-Drawn Pictograms Using HMMs with a Rotating Feature Extraction Stefan Müller, Gerhard Rigoll, Andreas Kosmala and Denis Mazurenok Department of Computer Science, Faculty of
More informationClassification of Printed Chinese Characters by Using Neural Network
Classification of Printed Chinese Characters by Using Neural Network ATTAULLAH KHAWAJA Ph.D. Student, Department of Electronics engineering, Beijing Institute of Technology, 100081 Beijing, P.R.CHINA ABDUL
More informationEfficient Tuning of SVM Hyperparameters Using Radius/Margin Bound and Iterative Algorithms
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 13, NO. 5, SEPTEMBER 2002 1225 Efficient Tuning of SVM Hyperparameters Using Radius/Margin Bound and Iterative Algorithms S. Sathiya Keerthi Abstract This paper
More informationUser Signature Identification and Image Pixel Pattern Verification
Global Journal of Pure and Applied Mathematics. ISSN 0973-1768 Volume 13, Number 7 (2017), pp. 3193-3202 Research India Publications http://www.ripublication.com User Signature Identification and Image
More informationIntroduction to Algorithms / Algorithms I Lecturer: Michael Dinitz Topic: Dynamic Programming I Date: 10/6/16
600.463 Introduction to Algorithms / Algorithms I Lecturer: Michael Dinitz Topic: Dynamic Programming I Date: 10/6/16 11.1 Introduction Dynamic programming can be very confusing until you ve used it a
More informationPractical Algorithms for Computing STV and Other Multi-Round Voting Rules
Practical Algorithms for Computing STV and Other Multi-Round Voting Rules Chunheng Jiang jiangc4@rpi.edu Lirong Xia xial@cs.rpi.edu Sujoy Sikdar sikdas@rpi.edu Zhibing Zhao zhaoz6@rpi.edu Hejun Wang wangj38@rpi.edu
More informationMouse Pointer Tracking with Eyes
Mouse Pointer Tracking with Eyes H. Mhamdi, N. Hamrouni, A. Temimi, and M. Bouhlel Abstract In this article, we expose our research work in Human-machine Interaction. The research consists in manipulating
More informationII. WORKING OF PROJECT
Handwritten character Recognition and detection using histogram technique Tanmay Bahadure, Pranay Wekhande, Manish Gaur, Shubham Raikwar, Yogendra Gupta ABSTRACT : Cursive handwriting recognition is a
More informationFrom CrossFire to Reaxys: 10 Reasons to switch. Elsevier Information Systems GmbH
From CrossFire to Reaxys: 10 Reasons to switch Elsevier Information Systems GmbH 10 reasons to switch from CrossFire to Reaxys CrossFire Reaxys 1 Software/Interface Client Server Application Intuitive,
More informationAlgorithms, Games, and Networks February 21, Lecture 12
Algorithms, Games, and Networks February, 03 Lecturer: Ariel Procaccia Lecture Scribe: Sercan Yıldız Overview In this lecture, we introduce the axiomatic approach to social choice theory. In particular,
More information