Pattern classification of cotton yarn neps


Indian Journal of Fibre & Textile Research
Vol. 41, September 2016, pp. 270-277

Pattern classification of cotton yarn neps

Abul Hasnat, Anindya Ghosh(a), Azizul Hoque(b) & Santanu Halder(c)

Government College of Engineering and Textile Technology, Berhampore 742 101, India

Received 1 October 2014; revised received and accepted 3 December 2014

(a) Corresponding author. E-mail: anindya.textile@gmail.com
(b) Present address: Kalyani University, Kalyani 741 245, India
(c) Present address: Government College of Engineering and Leather Technology, Kolkata 700 098, India

In this study, two types of cotton yarn neps, viz. seed coat and fibrous neps, have been classified by means of two standard classifiers, namely the support vector machine and the probabilistic neural network, using features extracted from images of the neps. At first, the region of interest is located in the captured images using the k-means clustering algorithm, and six features are extracted from it. These extracted features are used as the dataset (both training and testing) for the classifiers. A K-fold cross validation technique has been applied to assess the performance of the two classifiers. The results show that nep classification accomplished by means of image recognition through these classifiers achieves nearly 96-97% accuracy on the test dataset. Experimental results also show that the time required for training the probabilistic neural network is significantly less than that of the support vector machine.

Keywords: Cotton yarn, Fibrous nep, Pattern classification, Probabilistic neural network, Seed coat nep, Support vector machine

1 Introduction
Nowadays, fabric defect identification using image processing 1-3 is becoming an important area in the textile industry, where it is possible to automate the inspection of fabric defects during weaving 4-6. A nep is one type of yarn defect, defined as a knot (or cluster) of highly entangled cotton fibres consisting either entirely of fibres or of a seed-coat fragment with fibres attached. Neps are created when fibres become tangled during cotton harvesting, ginning and spinning. The generation of neps is a very serious problem because neps that remain in yarns often persist into the final fabrics and adversely affect their quality. For almost all types of yarn or fabric, neps are considered serious defects. A significant level of neps in the yarn often causes a fabric to be down-graded or rejected. By and large, cotton yarn neps may be classified into two types, namely seed coat and fibrous neps. So far, no hands-on state-of-the-art technology has been adopted for the identification of these two types of neps in cotton yarns. An online system for identifying seed coat and fibrous neps would help spinners track back the causes of nep generation from the downstream processes, which in turn increases the possibility of improving the process control system of a spinning industry.

The present study endeavours to recognize fibrous and seed-coat neps in cotton yarns by means of two standard pattern recognition systems or classifiers, namely the support vector machine (SVM) 7-9 and the probabilistic neural network (PNN) 10-11, using features extracted from grey level images of the neps. The proposed work may be divided into four steps, namely capturing the images of cotton yarn neps using a LEICA microscope, detection of the region of interest in the captured image, feature extraction from the region of interest, and finally classification of the features by SVM and PNN.

2 Materials and Methods

2.1 Capturing Images of Yarn Neps
20s Ne carded yarns spun from J-34 cotton fibres of 4 micronaire with 25 mm span length (2.5%) were collected from the spinning industry.
The images of yarn neps were captured using a LEICA camera (Model EZ-4D) with a magnification of 40. Samples were illuminated by three halogen lights positioned approximately 20 cm above, directly and to the right and left of the sample, to supply illumination in diagonal directions of 45°. Altogether 270 images were captured for the experimentation. All the captured images of size 2048 × 1536 × 3 pixels were

converted into digitized grey images with grey level intensities ranging from 0 to 255 and stored as two-dimensional grey scale matrices.

2.2 Extraction of Region of Interest
Each digitized grey image was enhanced by applying a median filter to eliminate undesired noise. From the filtered image I(a, b) of size p × q (where a = 1, 2, ..., p; b = 1, 2, ..., q), the region of interest was extracted by applying the k-means clustering algorithm. To locate the position of the nep in the image, the average pixel intensity of the i-th row, A_i, and the row h with minimum average pixel intensity were calculated using the following equations:

A_i = \frac{\sum_{b=1}^{q} I(i, b)}{q}    ... (1)

A_h = \min_{i} \{A_i\}    ... (2)

where i = 1, 2, ..., p. In this way, h was located, and as a preprocessing step the image was cropped up to 30% of its height on both sides of the located row h. It was observed from the experimental results that the region of interest lies within this 60% of the cropped image. In the next step, using pixel intensity as input, the k-means clustering algorithm was applied to remove the background pixels from the captured images. The k-means clustering algorithm 12,13 is one of the simplest unsupervised learning algorithms that solve the well-known clustering problem. The procedure follows a simple and easy way to classify a given set of n vectors into k (k is an integer) clusters. The main idea is to define k centroids G = {g_1, g_2, ..., g_i, ..., g_k}, one for each cluster; each vector of the given set is associated with its nearest centroid g_i and included in the corresponding cluster C_i. Thus the first step is completed and an early grouping into k clusters is done. At this point, the k centroids of the clusters resulting from the previous step are recalculated. With these k new centroids, a new grouping is done: each vector of the set is once again re-associated with its nearest new centroid and the corresponding cluster. This step is repeated, and as a result the k centroids change their locations step by step until no more changes occur. Thus the algorithm converges and gives the final clusters. In this study, clustering was done using the pixel intensity values, and all pixels belonging to the background cluster were set to a value of 255. In the next step, from the clustered image, the average pixel intensity of the j-th column, A_j, and the column v with minimum average pixel intensity were calculated using the following equations:

A_j = \frac{\sum_{a=1}^{p} I(a, j)}{p}    ... (3)

A_v = \min_{j} \{A_j\}    ... (4)

where j = 1, 2, ..., q. After locating the column v in the clustered image, 15% of the image width on both sides of this column was cropped. It was observed from the experimental results that the region of interest lies within this 30% of the clustered image. In this cropped image, only the pixels having intensities < 255 were considered.

2.3 Feature Extraction
From the extracted region of interest of each image, an array Arr(i), where i = 1, 2, ..., N, was constructed containing the pixel intensity values < 255. Six simple features, viz. minimum, maximum, average, standard deviation (SD), median and mode, were determined from each array using the following equations:

Minimum = \min_{i=1}^{N} \{Arr(i)\}    ... (5)

Maximum = \max_{i=1}^{N} \{Arr(i)\}    ... (6)

Average = \frac{\sum_{i=1}^{N} Arr(i)}{N}    ... (7)

SD = \sqrt{\frac{\sum_{i=1}^{N} (Arr(i) - Average)^2}{N}}    ... (8)

The median is the middle value of the sorted array (if the array size is even, the average of the middle two values is taken). The mode is the element having the maximum frequency in the array. Fig. 1 shows the region of interest of an image and the features extracted from it. These six features were extracted from each of the 270 captured images.
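To make Sections 2.2 and 2.3 concrete, the sketch below walks through the same pipeline on a single image: locate the nep row by minimum average intensity (Eqs 1-2), suppress the background with a two-cluster k-means on pixel intensity, crop around the darkest column (Eqs 3-4), and compute the six features (Eqs 5-8 plus median and mode). This is a minimal NumPy illustration, not the authors' code; the function name, the centroid initialisation and the iteration cap are our own assumptions.

```python
import numpy as np

def nep_features(img, k=2, max_iter=50):
    """Illustrative sketch of the paper's ROI + feature pipeline.
    `img` is a 2-D uint8 grayscale image (intensities 0-255)."""
    p, q = img.shape

    # Eqs (1)-(2): row h with minimum average intensity; crop 30% above/below
    h = img.mean(axis=1).argmin()
    half = int(0.3 * p)
    crop = img[max(0, h - half):min(p, h + half)].astype(float)

    # k-means (k = 2) on intensities: dark nep pixels vs bright background
    centroids = np.percentile(crop, [10, 90])          # assumed initialisation
    for _ in range(max_iter):
        labels = np.abs(crop[..., None] - centroids).argmin(axis=-1)
        new = np.array([crop[labels == j].mean() if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    crop[labels == centroids.argmax()] = 255           # background cluster -> 255

    # Eqs (3)-(4): column v with minimum average intensity; crop 15% both sides
    v = crop.mean(axis=0).argmin()
    half = int(0.15 * q)
    roi = crop[:, max(0, v - half):v + half]

    # Eqs (5)-(8) plus median and mode, over non-background pixels only
    arr = roi[roi < 255].astype(int)
    vals, counts = np.unique(arr, return_counts=True)
    return {"minimum": int(arr.min()), "maximum": int(arr.max()),
            "average": float(arr.mean()), "sd": float(arr.std()),
            "median": float(np.median(arr)), "mode": int(vals[counts.argmax()])}
```

The median filtering of Section 2.2 (e.g. scipy.ndimage.median_filter) would be applied to `img` before this step.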

2.4 Pattern Classifiers (SVM and PNN)
If we consider a set I = {x_i}, i ∈ {1, 2, 3, 4, ..., n}, of n input data, where each x_i = {f_1, f_2, ..., f_m} lies in an m-dimensional space, and a set of classes W = {w_1, w_2, ..., w_c} with c ≤ n, then a classification problem is defined as a mapping F: I → W in which each x_i in I is assigned to one of the classes of W. This mapping function is called the decision function. The basic problem in classification is to find c decision functions d_1(x), d_2(x), ..., d_c(x) with the property that, if a pattern x belongs to class i, then d_i(x) > d_j(x) for j = 1, 2, ..., c; j ≠ i, where d_i is some similarity measure between x and class i, such as a distance or a probability. Classification is supervised learning: the classifier is trained on a training dataset consisting of features as input and the expected class of each data point as output, and based on this training some internal parameters of the classifier are adjusted. For testing, a new set of data points is fed into the classifier, which assigns a class to each testing data point. There are many classifiers, such as k-Nearest Neighbour, SVM, artificial neural network (ANN) and PNN. In this work, the SVM and PNN classifiers are selected for pattern classification because the complexity of the SVM is indifferent to the dimension of the input data, while the training time of the PNN is insignificant since no weight-updating process is involved in its training.

SVM is a mathematical model derived from the statistical learning theory developed by Vapnik 7. SVM is a supervised learning model with associated learning algorithms that analyse multidimensional data and recognize patterns. As it is a supervised learning model, for a set of training data in which each vector is assigned to one of two categories, its training algorithm builds a model that assigns new examples to one category or the other. An SVM model is a representation of the examples as points mapped in the hyperspace, such that the examples of the separate categories are divided by a hyperplane with a clear gap that is as wide as possible. After training, new examples are mapped into that same space and classified into one of the categories based on which side of the gap they fall. In addition to performing linear classification, SVM can efficiently perform non-linear classification using a kernel function, implicitly mapping the inputs into high-dimensional feature spaces.

Consider the problem of separating the set of data points {(x_i, y_i)}, i = 1, 2, ..., n, where w · x_i + b > 0 for y_i = +1, and w · x_i + b < 0 for y_i = −1 (Fig. 2). With a scale transformation of both w and b, these two conditions are equivalent to w · x_i + b ≥ 1 when y_i = +1, and w · x_i + b ≤ −1 when y_i = −1. The two can be combined as y_i(w · x_i + b) ≥ 1. The corresponding margins are therefore defined by w · x + b = 1 and w · x + b = −1, and the margin width M is given by

M = \frac{2}{\|w\|}    ... (9)

Fig. 1 Region of interest of an image and the features extracted from it [minimum 6, maximum 171, average 99.33, standard deviation 49.75, median 11, mode 11]

Fig. 2 Support vector machine as classifier

In the case of the maximal margin classifier, the margin width M is maximized, i.e.

Maximize \frac{2}{\|w\|}, subject to y_i(w \cdot x_i + b) \ge 1.

Alternatively, the optimization problem becomes

Minimize f = \frac{1}{2}\|w\|^2, subject to y_i(w \cdot x_i + b) \ge 1.

The main problem with the maximal margin classifier is that it always produces a perfectly consistent hypothesis, that is, a hypothesis with no training error. In real data, where noise is always present, this can result in a brittle estimator. These problems can be overcome by using soft-margin optimization, in which slack variables ξ_i are introduced to allow the margin constraints to be violated:

y_i(w \cdot x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0.

Thus, the soft-margin optimization problem becomes

Minimize f = \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{l} \xi_i    ... (10)

subject to y_i(w \cdot x_i + b) \ge 1 - \xi_i, where C is a pre-specified value.
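Since the paper does not name an implementation, a hedged sketch using scikit-learn's SVC is shown below; SVC solves exactly the soft-margin problem of Eq. (10) with an RBF kernel. The parameter values mirror those reported in Section 3 (C = 20, kernel width 2, as reconstructed there); mapping a kernel width s to scikit-learn's gamma = 1/(2s²) is our assumption, and the random data merely stand in for the real feature matrix.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in for the normalised 270 x 6 feature matrix and nep labels
rng = np.random.default_rng(0)
X = rng.random((270, 6))                      # six features scaled to [0, 1]
y = np.where(rng.random(270) < 0.5, 1, 2)     # 1 = fibrous, 2 = seed coat

# Soft-margin SVM of Eq. (10): C penalises the slack variables; the RBF
# kernel K(x, x') = exp(-gamma * ||x - x'||^2) with width s = 2 gives
# gamma = 1 / (2 * s**2) = 0.125 under our convention.
clf = SVC(kernel="rbf", C=20, gamma=1 / (2 * 2**2))
clf.fit(X, y)
print(clf.predict(X[:5]))
```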

On the other hand, PNN is a feed-forward neural network derived from the Bayesian network and a statistical algorithm called kernel Fisher discriminant analysis. It was introduced by Specht 10 in the early 1990s. The PNN architecture is feed-forward in nature but differs from an ANN in the way learning occurs. PNN is a supervised learning algorithm but includes no weights in its hidden layer. Instead, each hidden node represents an example vector, with the example acting as the weights to that hidden node; these are not adjusted at all. PNN is mainly used in classification problems. A schematic architecture of PNN is illustrated in Fig. 3. When an input is presented, the hidden layer computes the distance from the input vector to each training input vector. This produces a vector whose elements indicate how close the input is to the training inputs. The summation layer sums the contributions for each class of inputs and produces its net output as a vector of probabilities. Finally, a competitive transfer function on the output of the summation layer picks the maximum of these probabilities and indicates that particular class as the output. The function of each layer of the PNN is discussed below.

Fig. 3 A schematic representation of the PNN architecture

(i) Input Layer - The input layer contains m nodes (in this work m = 6), one for each of the input features of the vector x = {f_1, f_2, ..., f_6}. These are fan-out nodes that branch at each feature input node to all nodes in the hidden layer, so that each hidden node receives the complete input feature vector.

(ii) Hidden Layer/Pattern Layer - This layer contains one neuron for each vector in the training dataset. It stores the values of the predictor variables for the vector along with the target value. A hidden neuron computes the Euclidean distance of the test case from the neuron's centre point [that is, the stored vector x^{(P)}] and then maps it to a radial basis function (RBF) as given in the following equation:

f(x) = \frac{1}{(2\pi)^{m/2} \sigma^m} \exp\left[-\frac{\|x - x^{(P)}\|^2}{2\sigma^2}\right]    ... (11)

The σ values can be taken as one-half the average distance between the feature vectors in the same group or, for each exemplar, one-half the distance from the exemplar to its nearest other exemplar vector.

(iii) Summation Layer - The summation layer neurons compute the maximum likelihood of pattern x being classified into class c_k, k = 1, 2, by summarizing and averaging the output of all neurons that belong to the same class. The actual target category of each training case is stored with each hidden neuron; all the weighted values coming out of the hidden neurons of a specific class are fed only to the summation neuron that corresponds to that class. The k-th summation node sums the values received from the k-th group of hidden nodes using the following equation:

p_k(x) = \frac{1}{(2\pi)^{m/2} \sigma^m} \frac{1}{P} \sum_{i=1}^{P} \exp\left[-\frac{\|x - x_i^{(P)}\|^2}{2\sigma^2}\right]    ... (12)

where P is the number of hidden nodes for a particular class.

(iv) Output Layer - The decision layer/output unit classifies the pattern x in accordance with Bayes's decision rule, based on the output of all the summation layer neurons, using the following equation:

\hat{c}(x) = \arg\max_{k=1,\dots,c} \{p_k(x)\}    ... (13)

Thus, the PNN recognizes the test vector as belonging to the k-th class.

2.5 Cross Validation
A K-fold cross validation technique 14 has been applied to assess the accuracy of both classifiers. In K-fold cross validation, the initial dataset is randomly partitioned into K mutually exclusive subsets or folds D_1, D_2, ..., D_K, each of approximately equal size. Training and testing are performed K times. In the i-th iteration, partition D_i is reserved as the test set and the remaining partitions are collectively used to train the model. In this method, each data point is used the same number of times for training and exactly once for testing. Therefore, the validation of the model becomes more accurate and unbiased. The K-fold cross validation method is schematically depicted in Fig. 4, where i = 1, 2, ..., K represents the fold corresponding to the testing dataset.

Fig. 4 Schematic representation of K-fold cross-validation
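Because the PNN's "training" is just the storage of exemplars (Eqs 11-13), it is short enough to write out in full. The sketch below is our own minimal rendering, not the authors' code, paired with the 10-fold cross-validation of Section 2.5 (scikit-learn's KFold is used purely for the fold bookkeeping); the synthetic data again stand in for the real features.

```python
import numpy as np
from sklearn.model_selection import KFold

class PNN:
    """Minimal probabilistic neural network (Eqs 11-13)."""
    def __init__(self, sigma=0.07):            # sigma as tuned in Section 3
        self.sigma = sigma

    def fit(self, X, y):                       # pattern layer: store exemplars
        self.classes = np.unique(y)
        self.groups = [X[y == c] for c in self.classes]
        return self

    def predict(self, X):
        m = X.shape[1]
        norm = (2 * np.pi) ** (m / 2) * self.sigma ** m
        s2 = 2 * self.sigma ** 2
        preds = []
        for x in X:
            # Eq (12): average Gaussian kernel per class (summation layer)
            p = [np.exp(-((g - x) ** 2).sum(axis=1) / s2).mean() / norm
                 for g in self.groups]
            preds.append(self.classes[int(np.argmax(p))])   # Eq (13)
        return np.array(preds)

# Stand-in data; in the paper X holds the six normalised features
rng = np.random.default_rng(1)
X = rng.random((270, 6))
y = np.where(rng.random(270) < 0.5, 1, 2)

# Section 2.5: 10-fold cross-validation
accs = []
for tr, te in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = PNN(sigma=0.07).fit(X[tr], y[tr])
    accs.append((model.predict(X[te]) == y[te]).mean())
print(f"mean test accuracy: {np.mean(accs):.3f} +/- {np.std(accs):.3f}")
```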
3 Results and Discussion
Altogether 270 captured images of cotton yarn neps have been considered for experimentation, and regions of interest have been extracted from all the images using the method described above. In Table 1, the second and third columns show some captured images and their corresponding regions of interest respectively, and the last column shows the features extracted from the region of interest of each captured image. Table 2 depicts only a subset of 60 data points, 30 chosen from each category of neps, in which f_i, i = 1, ..., 6 are the extracted features corresponding to minimum, maximum, average, standard deviation, median and mode respectively. The last column of Table 2 shows the associated type of the respective nep (1 for fibrous nep and 2 for seed coat nep). Each of the extracted features is normalized to the range [0, 1]. The final dataset consists of six normalized features representing each of the 270 images as inputs and their respective associated class as output. A 10-fold cross validation technique has been applied, for which both the PNN and SVM classifiers are trained on 9 of the folds and tested on the fold left out; training and testing are therefore done 10 times for both classifiers. The expected generalization accuracies for training as well as testing are estimated as the average accuracy ± standard deviation over the 10 cycles. In the case of the PNN, the value of σ is tuned to 0.07 by trial and error. For the SVM, the value of C is tuned to 20, and a radial basis function is used as the kernel function with a kernel width equal to 2. The expected generalization accuracies of training and testing for the PNN classifier are found to be 99.4 ± 0.39 and 95.9 ± 3.68 respectively, whereas those for the SVM are found to be 97.1 ± 0.47 and 97.03 ± 3.8 respectively. For both classifiers, the training accuracy is better than the testing accuracy because the latter is measured on unseen data. As far as accuracy on the testing dataset is concerned, the SVM gives higher classification accuracy than the PNN. This may be ascribed to the better generalization capacity of the SVM and its ability to handle noisy data. On the other hand, the PNN gives higher accuracy on the training dataset because it simply stores each training vector in the respective class by creating a new node in the hidden layer, with no weight update or convergence process involved. As a consequence, the PNN does not require a significant amount of training time and thus works considerably faster than the SVM classifier.
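The [0, 1] normalisation mentioned above is ordinary column-wise min-max scaling; a one-function sketch (our assumption of the exact scheme) is shown below.

```python
import numpy as np

def minmax_normalise(X):
    """Scale each feature column to [0, 1]; constant columns map to 0."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)
```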

Table 1 Extracted features from the region of interest of a few captured images
[The "Captured image" and "Region of interest" columns of the original are photographs; only the extracted features are reproduced here]

Image 1: Minimum = 56, Maximum = 18, Average = 98.89, SD = 14.96, Median = 99, Mode = 98
Image 2: Minimum = 35, Maximum = 113, Average = 84.63, SD = 14.4, Median = 83, Mode = 76
Image 3: Minimum = 43, Maximum = 13, Average = 95.93, SD = 0.67, Median = 95, Mode = 83
Image 4: Minimum = 8, Maximum = 11, Average = 77.37, SD = 33.73, Median = 88, Mode = 114
Image 5: Minimum = 7, Maximum = 1, Average = 69.77, SD = 35.9, Median = 76, Mode = 3
Image 6: Minimum = 4, Maximum = 119, Average = 70.14, SD = 35.58, Median = 84, Mode = 85
Image 7: Minimum = 3, Maximum = 108, Average = 48.08, SD = 3.1, Median = 44, Mode = 11

Table 2 A sample of extracted features from images

SL   f1   f2    f3      f4      f5    f6    Type
1    3    16    1.66    7.084   18    147   1
2    5    18    9.05    0.81    91    85    1
3    49   135   98.735  1.376   99    84    1
4    35   137   98.83   3.564   97    80    1
5    45   144   110.8   1.7     113   136   1
6    33   10    78.141  1.455   77    56    1
7    49   139   103.79  1.001   105   105   1
8    49   149   96.736  4.19    9     8     1
9    31   155   104.99  3.49    110   18    1
10   4    1     80.799  4.18    8     8     1
11   63   149   115.55  1.114   116   145   1
12   48   16    93.466  0.044   93    13    1
13   68   147   113.14  18.717  111   95    1
14   55   141   106.45  0.096   108   105   1
15   6    136   9.899   7.59    97    104   1
16   59   139   101.46  0.98    98    80    1
17   71   148   1.09    18.58   15    148   1
18   36   149   93.434  8.676   9     95    1
19   6    133   106.19  17.5    107   19    1
20   65   131   103.74  16.093  103   93    1
21   44   17    95.38   17.74   95    94    1
22   6    135   84.869  5.565   81    57    1
23   36   14    87.81   0.007   88    81    1
24   37   19    93.147  0.94    93    85    1
25   56   13    104.73  18.39   107   111   1
26   50   17    96.76   16.487  97    97    1
27   56   115   90.57   13.801  91    94    1
28   44   109   8.544   16.771  83    106   1
29   6    11    74.13   .31     77    87    1
30   37   131   91.148  .537    9     84    1
31   4    10    44.43   9.66    44    13    2
32   11   110   64.054  6.89    68    69    2
33   6    141   78.81   39.671  83    1     2
34   1    130   89.74   31.06   96    16    2
35   14   14    8.045   34.16   9     14    2
36   6    151   90.44   4.67    99    13    2
37   9    108   60.654  6.744   6     1     2
38   7    11    69.777  6.078   70    63    2
39   5    11    65.718  33.68   7     10    2
40   5    111   61.797  8.61    65    63    2
41   19   97    66.558  17.74   68    6     2
42   14   103   78.35   19.754  8     80    2
43   8    97    66.314  0.57    71    74    2
44   9    10    78.859  33.141  91    107   2
45   7    131   8.998   33.561  91    97    2
46   11   118   70.37   9.834   7     97    2
47   10   135   93.51   31.94   100   111   2
48   16   104   78.171  19.409  81    98    2
49   13   101   73.364  0.197   77    77    2
50   3    110   76.59   0.585   78    87    2
51   13   134   84.898  33.47   91    108   2
52   8    14    8.077   4.07    8     79    2
53   17   104   75.06   19.853  79    79    2
54   10   103   74.555  .067    81    93    2
55   7    11    64.983  5.947   67    61    2
56   6    115   85.578  18.376  87    89    2
57   1    13    86.737  6.951   9     1     2
58   5    119   88.46   0.87    89    86    2
59   6    110   7.184   4.005   7     64    2
60   11   115   8.147   1.908   84    84    2

f1 minimum, f2 maximum, f3 average, f4 standard deviation, f5 median, f6 mode; Type 1 fibrous nep, Type 2 seed coat nep.

4 Conclusion
The present study holds the key to effective classification of cotton yarn neps. It has been observed that both the PNN and SVM are able to classify fibrous and seed coat neps in cotton yarn with a reasonable degree of accuracy. The SVM gives higher classification accuracy on the testing dataset in comparison with the PNN. The time required for training the PNN is extremely small, as the training process does not need any weight updates but simply adds a node in the hidden layer for each training vector. The proposed method of classifying seed coat and fibrous neps is a guide to improving the process control system in a cotton spinning industry.

References
1 Pratt W K, Digital Image Processing, 2nd edn (Wiley Inter Science, New York), 1991.
2 Umbaugh S E, Computer Vision and Image Processing (Prentice-Hall, Upper Saddle River, New Jersey), 1998.
3 Gonzalez R C & Woods R E, Digital Image Processing (Addison Wesley, New York), 1993.
4 Tsai I S, Lin C H & Lin J J, Text Res J, 65 (1995) 123.
5 Kumar A & Pang G, IEEE Transact Industrial Appl, 38 (2002) 425.
6 Ghosh A, Guha T, Bhar R B & Das S, Int J Cloth Sci Technol, 23 (2011) 142.
7 Vapnik V N, Statistical Learning Theory (John Wiley & Sons, New York), 1998.

8 Cristianini N & Taylor J S, An Introduction to Support Vector Machines and other Kernel-based Learning Methods (Cambridge University Press, Cambridge), 2000.
9 Schölkopf B & Smola A J, Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond (MIT Press, Massachusetts), 2001.
10 Specht D F, IEEE Transact Neural Networks, 3 (1990) 109.
11 Ibrahiem M M, Emary E & Ramakrishnan S, World Appl Sci J, 4 (2008) 772.
12 Hartigan J A, Clustering Algorithms (Wiley, New York), 1975.
13 Gan G, Ma C & Wu J, Data Clustering: Theory, Algorithms and Applications (SIAM Press, Philadelphia), 2007.
14 Han J & Kamber M, Data Mining: Concepts and Techniques, 2nd edn (Morgan Kaufmann Publishers, San Francisco), 2006.