Research of Neural Network Classifier Based on FCM and PSO for Breast Cancer Classification

Le Zhang 1, Lin Wang 1, Xiujiewen Wang 2, Keke Lu 2, and Ajith Abraham 3

1 Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, 250022 Jinan, China
{Zhangle,se_wanglin}@ujn.edu.cn
2 University of Jinan, 250022 Jinan, China
kevinxw@foxmail.com, dp_lukk@ujn.edu.cn
3 Machine Intelligence Research Labs (MIR Labs), USA
ajith.abraham@ieee.org

Abstract. Breast cancer is one of the most common causes of tumor-related death among women in many countries. In this paper, a novel neural network classification model is developed. The proposed model uses the floating centroids method and a particle swarm optimization algorithm with inertia weight as the optimizer to improve the performance of the neural network classifier. The Wisconsin breast cancer dataset from the UCI Machine Learning Repository is used to test the proposed classifier. Experimental results show that the developed model improves search convergence and performance, and that the classification accuracy for benign and malignant tumors is higher than that of other classification techniques.

Keywords: Floating Centroids Method, PSO, Neural Network, Breast Cancer Classification.

E. Corchado et al. (Eds.): HAIS 2012, Part I, LNCS 7208, pp. 647-654, 2012. Springer-Verlag Berlin Heidelberg 2012

1 Introduction

Breast cancer is one of the most common malignant tumors among women. Various artificial intelligence techniques [1-6] have been used to improve the accuracy and efficiency of cancer diagnosis. In recent years, neural networks have been widely applied to feature extraction, data compression, data classification, clustering, and related tasks [7]. The learning algorithm of a neural network classifier is a supervised learning method: the classifier is constructed by analyzing the characteristics of a given dataset and produces a model for each category of data. In recent research, many classification techniques have been proposed, such as genetic algorithms [8], neural networks [9-12], support vector machines [13-16], and decision trees [17,18]. Among these techniques, neural network models have been used successfully in many classification problems. When constructing a neural network classifier, the numbers of input, hidden, and output neurons are decided by the properties, categories, and characteristics of the dataset, and each sample is mapped by the neural network into a partition space.

A centroid is a point in the partition space that denotes the center of a class. In a conventional neural network classifier, the positions of the centroids and the relationship between centroids and classes are set manually. In addition, the number of centroids is fixed with reference to the number of classes. This fixed-centroid constraint decreases the chance of finding the optimal neural network. Therefore, a novel neural network classifier based on the floating centroids method (FCM), which removes the fixed-centroid constraint and spreads the centroids throughout the partition space, was proposed in our previous research [19]. However, despite the superior classification results, the training efficiency is not acceptable, which limits the scope of application in practice. Therefore, this paper presents an improved method that adopts a particle swarm optimization algorithm with inertia weight as the optimizer to improve the search convergence and performance of FCM.

The rest of the paper is organized as follows. Section 2 describes the particle swarm optimization algorithm with inertia weight. Section 3 presents the floating centroids method and the learning algorithm in detail. Experimental results are given in Section 4, together with comparisons with other classification techniques. Finally, Section 5 concludes with a brief summary of our study.

2 Particle Swarm Optimization Algorithm with Inertia Weight

The Particle Swarm Optimization (PSO) algorithm, put forward by Kennedy and Eberhart, is an evolutionary algorithm [21]. It is an adaptive method that can be used to solve optimization problems. The search is conducted with a population of particles, which correspond to individuals. A population of particles is generated randomly at initialization. Each particle's position represents a possible solution point in the problem space, and each particle moves through the problem space by updating its position vector x_i and velocity vector v_i. At each step, a fitness function evaluating the quality of each particle is computed from its position vector x_i. The vector p_i denotes the best position found so far by particle i, and p_g denotes the best position obtained so far by the whole population. At iteration t, following the individual best position p_i and the global best position p_g, the velocity vector of particle i is updated by

    v_i(t+1) = v_i(t) + c_1 φ_1 (p_i(t) − x_i(t)) + c_2 φ_2 (p_g(t) − x_i(t))    (1)

where c_1 and c_2 are positive constants and φ_1 and φ_2 are uniformly distributed random numbers in [0, 1]. The velocity vector v_i is kept within the range [−V_max, V_max]. Updating the velocity in this way enables the particle to search around its individual best position and the global best position.

Based on the updated velocity, each particle changes its position according to

    x_i(t+1) = x_i(t) + v_i(t+1)    (2)

With this scheme the population of particles tends to cluster together. To avoid premature convergence, the velocity update of each particle can be improved by adding an inertia weight factor ω [21]:

    v_i(t+1) = ω v_i(t) + c_1 φ_1 (p_i(t) − x_i(t)) + c_2 φ_2 (p_g(t) − x_i(t))    (3)

where the inertia weight factor ω is a real number. The inertia weight controls the magnitude of the velocity vector v_i: it scales the previous velocity and therefore decides how much the previous velocity influences the current one. A large inertia weight favors global search, while a small inertia weight favors local search [22]. The following schedule is adopted to decrease the inertia weight from ω_max to ω_min:

    ω = (ω_max − ω_min) · (maxIter − iter) / maxIter + ω_min    (4)

where maxIter is the total number of iterations and iter is the current iteration. Linearly decreasing the inertia weight from ω_max to ω_min first widens the search area and quickly locates a promising region; as ω gradually decreases, the particles slow down and perform a finer local search. Adding the inertia weight therefore speeds up convergence.
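To make the update concrete, the following is a minimal sketch of one PSO iteration with the linearly decreasing inertia weight of Eqs. (1)-(4). The function name, the fitness callback, and the default parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pso_step(x, v, pbest, pbest_fit, gbest, fitness, it, max_iter,
             c1=2.0, c2=2.0, w_max=0.8, w_min=0.4, v_max=3.0):
    """One PSO iteration with inertia weight (Eqs. 2-4); sketch only.
    x, v, pbest: arrays of shape (n_particles, dim); fitness: callable on one position.
    Assumes a minimization problem; for maximizing Eq. (5) the comparisons would flip."""
    n, dim = x.shape
    # Eq. (4): linearly decreasing inertia weight
    w = (w_max - w_min) * (max_iter - it) / max_iter + w_min
    phi1, phi2 = np.random.rand(n, dim), np.random.rand(n, dim)
    # Eq. (3): velocity update with inertia weight
    v = w * v + c1 * phi1 * (pbest - x) + c2 * phi2 * (gbest - x)
    v = np.clip(v, -v_max, v_max)      # keep velocities in [-V_max, V_max]
    # Eq. (2): position update
    x = x + v
    # update personal and global bests
    fit = np.apply_along_axis(fitness, 1, x)
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
    gbest = pbest[np.argmin(pbest_fit)]
    return x, v, pbest, pbest_fit, gbest
```

In the learning algorithm of Section 3, each particle would encode the weights of a neural network and the fitness callback would evaluate the target function of Section 3.5 on the colored partition space produced by that network.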

3 The Procedure of the Learning Algorithm for FCM and PSO

3.1 Floating Centroids Method

FCM [19] is an approach in which a number of floating centroids with class labels are spread over the partition space. A sample is assigned to a class when the centroid closest to its mapped point carries the label of that class. The partition space is thus divided into irregular regions controlled by the centroids. To obtain these floating centroids, the training samples are mapped into the partition space by the neural network, and the corresponding mapped points are then partitioned into a number of disjoint subsets using the K-means clustering algorithm [20]. The floating centroids are the cluster centers of these subsets, so the number of centroids may exceed the number of classes. Finally, each centroid is labeled with one class: if the training points of one class form the majority among all points belonging to a centroid, the centroid is labeled with that class. More than one centroid can be labeled with the same class.

The floating centroids method can achieve the optimization objective of the neural network classifier because there is no constraint on the number of centroids and the class labels of the centroids are computed from the mapped points in the partition space; the method has no fixed centroids. Its function is to place points of the same class as close together as possible and to separate points of different classes in the partition space.

3.2 Model of the Neural Network Classifier

A neural network classifier is a pair (F, P), where the centroids in P have been colored (labeled). The model of the classifier is illustrated in Fig. 1. When a new sample needs to be categorized, it is mapped by F to a point in P, and its category is predicted from the Euclidean distances to the labeled centroids. This process is illustrated in Fig. 2, where the dimension of the partition space is two and the number of classes is three.

Fig. 1. Model of the classifier
Fig. 2. Categorizing a new sample

3.3 Centroids Generation

First, the training dataset is mapped into the partition space by the neural network; the mapped point corresponding to a training sample is called a colored point. The colored points in the partition space are divided into k disjoint clusters by the K-means algorithm. Each centroid is then given a category tag according to the following principle: if the colored points of one class form the majority of all colored points belonging to a centroid, that class colors the centroid. One class can color one or more centroids. If the numbers of samples differ between classes, weights or other measures should be used to keep the labeling equitable, since a class with more samples otherwise colors a centroid with higher probability.
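As a rough illustration of Sections 3.1-3.3, the sketch below generates labeled centroids from the mapped training points with K-means and classifies a new sample by its nearest centroid. The helper names and the use of scikit-learn's KMeans are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def generate_centroids(mapped_points, labels, k):
    """Cluster the colored points and label each centroid by majority vote (Sec. 3.3).
    mapped_points: (n_samples, dim) network outputs; labels: integer class ids."""
    km = KMeans(n_clusters=k, n_init=10).fit(mapped_points)
    centroids, centroid_labels = km.cluster_centers_, []
    for c in range(k):
        members = labels[km.labels_ == c]
        # the majority class among the points assigned to this centroid colors it
        centroid_labels.append(np.bincount(members).argmax() if members.size else -1)
    return centroids, np.array(centroid_labels)

def predict(mapped_point, centroids, centroid_labels):
    """Assign the class of the nearest centroid by Euclidean distance (Sec. 3.2)."""
    d = np.linalg.norm(centroids - mapped_point, axis=1)
    return centroid_labels[np.argmin(d)]
```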

3.4 The Process of the Learning Algorithm

The goal of the learning algorithm is to obtain the best neural network and its corresponding colored partition space. The process is described in Algorithm 1.

Algorithm 1: The process of the learning algorithm
Input: user-defined parameters and the training dataset;
Output: the best neural network and its corresponding colored partition space;
Step 1: encode each neural network as an individual according to the particle swarm optimization with inertia weight described in Section 2;
Step 2: while the maximum generation has not been reached do
Step 3:   for i = 1 to the number of individuals do
Step 4:     decode individual i to form a neural network;
Step 5:     perform centroid generation with this neural network to obtain the colored partition space;
Step 6:     use this neural network and its colored partition space to compute the value of the optimization target function as the fitness of individual i;
Step 7:   end for
Step 8:   perform one iteration of the improved particle swarm optimization algorithm, updating the individuals by their fitness;
Step 9: end while
Step 10: return the best neural network and its corresponding colored partition space.

3.5 Target Function

The optimization target function is defined as

    F = Σ_{i=1}^{S} 1 / (1 + e^(−a (Z_i − 1/2)))    (5)

where Z_i is the value computed for sample i from its mapped point in the partition space, S is the total number of samples in the training dataset, and a is a real constant that controls the steepness (tortuosity) of the function. The target function drives points of the same class to lie as close together as possible and points of different classes to lie as far apart as possible.
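The sketch below shows one way the fitness of a decoded network could be evaluated following Eq. (5) and Steps 4-6 of Algorithm 1. The per-sample scores Z_i, the callback names (decode, map_points, score), and the reuse of generate_centroids from the earlier sketch are assumptions about details the paper leaves to [19].

```python
import numpy as np

def target_function(z, a=1.0):
    """Eq. (5): F = sum_i 1 / (1 + exp(-a * (z_i - 1/2))).
    z: array of per-sample scores Z_i derived from the mapped points (their exact
    definition follows the FCM formulation in [19]); a controls the steepness."""
    return np.sum(1.0 / (1.0 + np.exp(-a * (z - 0.5))))

def individual_fitness(particle, decode, map_points, score, X_train, y_train, k, a=1.0):
    """Steps 4-6 of Algorithm 1 for one individual; all callbacks are assumed helpers:
    decode(particle) -> network, map_points(net, X) -> points in partition space,
    score(points, centroids, centroid_labels, y) -> per-sample Z_i values."""
    net = decode(particle)
    points = map_points(net, X_train)
    centroids, centroid_labels = generate_centroids(points, y_train, k)
    z = score(points, centroids, centroid_labels, y_train)
    return target_function(z, a)
```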

4 Experiments and Results

Testing accuracy is used as the criterion to evaluate the performance of the method. Testing accuracy (TA), which reflects the generalization capability obtained on the testing data, is defined as

    TA = (TP_test / P_test) × 100%    (6)

where TP_test is the number of correctly classified samples in the testing dataset and P_test is the total number of samples in the testing dataset.

To test the effectiveness of the proposed method, we select the Wisconsin breast cancer dataset from the UCI machine learning repository, generated by Dr. William Wolberg at the University of Wisconsin, Madison [24]. Breast cancer can be diagnosed as benign or malignant from puncture (fine-needle aspiration) samples. The Wisconsin breast cancer dataset consists of 569 records and 2 classes, of which 357 records are benign and 212 are malignant. Each record has 30 attributes derived from ten quantified features, such as diameter, perimeter, and symmetry. The dataset is divided randomly into two subsets, one used for training and the other for testing: 285 records form the training dataset and 284 the testing dataset, covering both benign and malignant types. Experiments are carried out for both types; benign samples form one class and malignant samples the other, with normal data labeled 1 and abnormal data labeled 0.

In our experiments, a three-layer feed-forward neural network with 15 hidden neurons is adopted, and the particle swarm optimization algorithm with inertia weight is selected as the optimizer. The proposed method is compared with other neural network classifiers, including the Flexible Neural Tree (FNT) [23], a Neural Network (NN), and a Wavelet Neural Network (WNN) [25]. The parameters of the PSO algorithm are important factors in finding the optimal solution. Based on trials on these datasets, the PSO parameters are set as shown in Table 1.

Table 1. Parameters for the experiments

Parameter        Value
Population size  20
Generations      3000
ω_max            0.8
ω_min            0.4
c_1              2
c_2              2
V_max            3
V_min            -3
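For reference, the following is a brief sketch of the data split and the accuracy measure of Eq. (6). Loading the Wisconsin Diagnostic Breast Cancer data through scikit-learn and the fixed random seed are convenience assumptions, not the authors' experimental setup.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# The WDBC data described in Section 4 (569 records, 30 attributes, 2 classes).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=285, test_size=284, random_state=0)

def testing_accuracy(y_true, y_pred):
    """Eq. (6): TA = TP_test / P_test * 100%."""
    return 100.0 * np.mean(np.array(y_true) == np.array(y_pred))
```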

Table 2 reports the testing accuracy of our method in comparison with the other methods. The results indicate that the proposed method is capable of improving the generalization ability of the neural network classifier.

Table 2. Testing accuracy (%)

Cancer type  FNT    NN     WNN    Our method
Benign       93.31  94.01  94.37  96.47
Malignant    93.45  95.42  92.96  95.70

5 Conclusions

In this paper, a novel neural network classifier based on the floating centroids method and a particle swarm optimization algorithm is proposed. In this approach, samples are mapped by the neural network into a partition space that is used to categorize them, and the fixed-centroid constraint is removed. An improved particle swarm optimization with inertia weight is adopted as the optimizer; it increases the diversity of the particles and improves the performance and convergence of the standard particle swarm optimization algorithm. To evaluate the proposed approach, the Wisconsin breast cancer dataset from the UCI machine learning repository is used to compare its accuracy with FNT, NN, and WNN. Experimental results show that our method achieves higher classification accuracy on the breast cancer dataset than the other methods.

Acknowledgements. This work was supported by the Natural Science Foundation of Shandong Province No. ZR2011FL021, the Natural Science Foundation of China No. 61173078, and the Science and Technology Development Program of Shandong Province No. 2011GGX10116.

References

1. Hulka, B.S., Moorman, P.G.: Breast Cancer: Hormones and Other Risk Factors. Maturitas 38(1), 103-113 (2001)
2. Corchado, E., Graña, M., Wozniak, M.: New trends and applications on hybrid artificial intelligence systems. Neurocomputing 75(1), 61-63 (2012)
3. García, S., Fernández, A., Luengo, J., Herrera, F.: Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Information Sciences 180(10), 2044-2064 (2010)
4. Corchado, E., Abraham, A., Carvalho, A.: Hybrid intelligent algorithms and applications. Information Sciences 180(14), 2633-2634 (2010)
5. Pedrycz, W., Aliev, R.: Logic-oriented neural networks for fuzzy neurocomputing. Neurocomputing 73(1-3), 10-23 (2009)
6. Abraham, A., Corchado, E., Corchado, J.M.: Hybrid learning machines. Neurocomputing 72(13-15), 2729-2730 (2009)

7. Goldberg, D.E.: Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley (1989)
8. Platt, J., Cristianini, N., Shawe-Taylor, J.: Large Margin DAGs for Multiclass Classification. In: Advances in Neural Information Processing Systems, pp. 547-553 (2000)
9. Lu, H., Rudy, S., Huan, L.: Effective data mining using neural networks. IEEE Transactions on Knowledge and Data Engineering 8(6), 957-961 (1996)
10. Misra, B.B., Dehuri, S., Dash, P.K., Panda, G.: A reduced and comprehensible polynomial neural network for classification. Pattern Recognition 29(12), 1705-1712 (2008)
11. Daqi, G., Yan, J.: Classification methodologies of multilayer perceptrons with sigmoid activation functions. Pattern Recognition 38(10), 1469-1482 (2005)
12. Chen, Y., Abraham, A., Yang, B.: Feature selection and classification using flexible neural tree. Neurocomputing 70(1-3), 305-313 (2006)
13. Vapnik, V.N.: The Nature of Statistical Learning Theory. Springer, New York (1995)
14. Chua, Y.S.: Efficient computations for large least square support vector machine classifiers. Pattern Recognition 24(1-3), 75-80 (2003)
15. Koknar-Tezel, S., Latecki, L.J.: Improving SVM classification on imbalanced time series data sets with ghost points. Knowledge and Information Systems 28(1), 1-23 (2011)
16. Benjamin, X.W., Japkowicz, N.: Boosting support vector machines for imbalanced data sets. Knowledge and Information Systems 25(1), 1-20 (2011)
17. Quinlan, J.R.: Induction of decision trees. Machine Learning 1(1), 81-106 (1986)
18. Wang, L., Yang, B., Chen, Y., Abraham, A., Sun, H., Chen, Z., Wang, H.: Improvement of Neural Network Classifier using Floating Centroids. Knowledge and Information Systems (2011), doi: 10.1007/s10115-011-0410-8
19. Freund, Y.: Boosting a weak learning algorithm by majority. Information and Computation 121(2), 256-285 (1995)
20. Hartigan, J.A., Wong, M.A.: A K-means clustering algorithm. Applied Statistics 28(1), 100-108 (1979)
21. Eberhart, R.C., Shi, Y.: Particle Swarm Optimization: Developments, Applications and Resources. In: Proc. of the Congress on Evolutionary Computation, pp. 81-86 (2001)
22. Huang, C.-P., Xiong, W.-L., Xu, B.-G.: Influence of Inertia Weight on Astringency of Particle Swarm Algorithm and Its Improvement. Computer Engineering 34(12), 31-33 (2008)
23. Yang, B., Wang, L., Chen, Z., Chen, Y., Sun, R.: A novel classification method using the combination of FDPS and flexible neural tree. Neurocomputing 73(4-6), 690-699 (2010)
24. Blake, C.L., Merz, C.J., Newman, D.J., et al.: UCI Repository of Machine Learning Databases (2002), http://www.ics.uci.edu/mlearn/mlrepository.html
25. Chen, Y., Abraham, A.: Hybrid Neurocomputing for Detection of Breast Cancer. In: The Fourth IEEE International Workshop on Soft Computing as Transdisciplinary Science and Technology (WSTST 2005), pp. 884-892 (2005)