In: Proc. of the 1998 IEEE Signal Processing Society Workshop on Neural Networks for Signal Processing (NNSP98), pp. 5-17, Cambridge, UK. URL: http://www.dcs.shef.ac.uk/~miguel/papers/nnsp98.html

Experimental Evaluation of Latent Variable Models for Dimensionality Reduction

Miguel Á. Carreira-Perpiñán and Steve Renals
Dept. of Computer Science, University of Sheffield, Sheffield S1 4DP, UK
{M.Carreira,S.Renals}@dcs.shef.ac.uk

Abstract

We use electropalatographic (EPG) data as a test bed for dimensionality reduction methods based on latent variable modelling, in which an underlying lower-dimensional representation is inferred directly from the data. Several models (and mixtures of them) are investigated, including factor analysis and the generative topographic mapping (GTM). Experiments indicate that nonlinear latent variable modelling reveals a low-dimensional structure in the data that is inaccessible to the investigated linear models.

1 Introduction

In latent variable modelling, a low-dimensional generative model is estimated from a data sample. A smooth mapping links the low-dimensional representation and the high-dimensional data, and dimensionality reduction is achieved via Bayes' theorem. Latent variable models include factor analysis, principal component analysis and the generative topographic mapping (GTM). In this paper, we apply the latent variable framework to electropalatographic data.

The technique of electropalatography (EPG) is well established as a relatively noninvasive, conceptually simple and easy-to-use tool for the investigation of lingual activity in both normal and pathological speech. Qualitative and quantitative data about patterns of lingual contact with the hard palate during continuous speech may be obtained using EPG, and the technique has been used in studies of descriptive phonetics, coarticulation, and the diagnosis and treatment of disordered speech (Hardcastle et al., 1991a). Typically, the subject wears an artificial palate moulded to fit

the upper palate, with a number of electrodes mounted on the surface to detect lingual contact (62 in the Reading EPG system). The EPG signal is sampled at a frequency of 100 to 200 Hz and, for a given utterance, the sequence of raw EPG data consists of a stream of binary vectors with both spatial and temporal structure due to the constraints of the articulatory system. Note that the EPG signal alone is an incomplete articulatory description, omitting such details as nasalisation and vocalisation. Hence, the mapping from phonemes to EPG patterns is not one-to-one, since certain distinct phonemes can produce the same EPG patterns (fig. 1).

Figure 1: Representative EPGs for the typical stable phase of different phonemes. The 62-dimensional binary EPG vector is pictured row-wise from top to bottom, resembling the human palate (top: alveoli; bottom: velum).

A number of studies suggest that tongue movements in speech may be appropriately modelled using a few elementary articulatory parameters (e.g. Nguyen et al., 1996). In this paper, we consider dimensionality reduction at the spatial level only. We compare the ability of several well-known latent variable models and mixture models to extract relevant structure from an EPG data set in an adaptive fashion. This contrasts with a number of widespread EPG data reduction techniques (Hardcastle et al., 1991b), which are based on a priori knowledge about electropalatography, typically in the form of fixed linear combinations of the EPG vector components. Such ad hoc methods are not robust and will not perform well in situations where the speech deviates from the standard (impaired speakers, different speech styles or unusual accents). We also show that nonlinearity is very advantageous for this real-world problem.

2 Generative modelling using latent variables

In latent variable modelling the assumption is that the observed high-dimensional data t are generated from an underlying low-dimensional process defined by a small number L of latent variables x (Bartholomew, 1987). The latent variables are mapped by a fixed transformation into a D-dimensional data space (the measurement procedure) and noise is added there (stochastic variation). The aim is to learn the low-dimensional generating process along with a noise model, rather than directly learning a dimensionality-reducing mapping.

A latent variable model is specified by a prior in latent space p(x), a smooth mapping f from latent space to data space, and a noise model in data space p(t|x) (fig. 2). These three elements are equipped with parameters which we collectively call Θ. Integrating the joint probability density function p(t, x) over latent space gives the marginal distribution in data space, p(t).
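The three ingredients just listed (a prior p(x), a smooth mapping f and a noise model p(t|x)) can be made concrete with a minimal generative sketch; the uniform prior, sine mapping and noise level below are arbitrary illustrative choices, not any of the models evaluated in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent variable model: L = 1 latent dimension, D = 2 data dimensions.
L, D, N = 1, 2, 500

def f(x):
    """Smooth mapping from latent space to data space (here a curve in R^2)."""
    return np.column_stack([x, np.sin(3.0 * x)])

sigma = 0.05                                  # std. dev. of the isotropic Gaussian noise model
x = rng.uniform(-1.0, 1.0, N)                 # prior p(x): uniform on [-1, 1]
t = f(x) + sigma * rng.normal(size=(N, D))    # noise model p(t|x): isotropic Gaussian around f(x)

print(t.shape)   # (500, 2): a sample concentrated near a 1-D manifold embedded in R^2
```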

Given an observed sample in data space $\{\mathbf{t}_n\}_{n=1}^{N}$ of N D-dimensional real vectors that has been generated by an unknown distribution, a parameter estimate can be found by maximising the log-likelihood of the parameters, $\mathcal{L}(\Theta) = \sum_{n=1}^{N} \log p(\mathbf{t}_n \mid \Theta)$, typically using an EM algorithm. Once the parameters Θ are fixed, Bayes' theorem gives the posterior distribution in latent space given a data vector t, i.e. the distribution of the probability that a point x in latent space was responsible for generating t:

$$p(\mathbf{x} \mid \mathbf{t}) \;=\; \frac{p(\mathbf{t} \mid \mathbf{x})\, p(\mathbf{x})}{p(\mathbf{t})} \;=\; \frac{p(\mathbf{t} \mid \mathbf{x})\, p(\mathbf{x})}{\int p(\mathbf{t} \mid \mathbf{x})\, p(\mathbf{x})\, d\mathbf{x}}. \qquad (1)$$

Summarising this distribution by a single latent space point x* (typically the mean or the mode) results in a reduced-dimension representative of t. This defines a corresponding dimensionality-reducing mapping F from data space onto latent space, x* = F(t), which will be most successful when the posterior distribution p(x|t) is unimodal and sharply peaked. Applying the mapping f to the reduced-dimension representative, we obtain the reconstructed data vector t* = f(x*). In the usual way, we define the squared reconstruction error for the sample as $E = \frac{1}{N} \sum_{n=1}^{N} \|\mathbf{t}_n - \mathbf{t}_n^*\|^2$, using the Euclidean norm.

Figure 2: Schematic of a latent variable model with a 3-D data space and a 2-D latent space, showing the prior p(x), the mapping f(x; Θ) onto a manifold M in data space, and the induced density p(t|Θ).
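For a linear-Gaussian model of the factor-analysis/PCA family, the posterior (1) is Gaussian and its mean is available in closed form, so the mapping F and the reconstruction error E can be computed directly. The following is a minimal sketch assuming a probabilistic-PCA-style model t = Λx + μ + ε, with unit Gaussian prior and isotropic noise of variance σ²; the parameter values in the usage example are random placeholders, not estimates from the EPG data.

```python
import numpy as np

def reduce_and_reconstruct(T, Lambda, mu, sigma2):
    """Posterior-mean dimensionality reduction for t = Lambda x + mu + noise,
    with x ~ N(0, I) and noise ~ N(0, sigma2 * I).  T is (N, D), Lambda is (D, L)."""
    D, L = Lambda.shape
    M = Lambda.T @ Lambda + sigma2 * np.eye(L)        # L x L matrix appearing in the posterior
    # Posterior mean E[x|t] = M^{-1} Lambda^T (t - mu): the summary x* of p(x|t)
    X = np.linalg.solve(M, Lambda.T @ (T - mu).T).T   # (N, L) reduced representatives
    T_rec = X @ Lambda.T + mu                         # reconstructions t* = f(x*)
    E = np.mean(np.sum((T - T_rec) ** 2, axis=1))     # squared reconstruction error
    return X, T_rec, E

# Tiny usage example with random parameters (illustration only):
rng = np.random.default_rng(1)
Lambda = rng.normal(size=(62, 2))   # D = 62 (one EPG frame), L = 2 latent dimensions
mu = rng.normal(size=62)
T = rng.normal(size=(100, 62))
X, T_rec, E = reduce_and_reconstruct(T, Lambda, mu, 0.1)
print(X.shape, E)
```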

We consider the following latent variable models, for which EM algorithms are available.

Factor analysis (Bartholomew, 1987; Rubin and Thayer, 1982), in which the mapping is linear, the prior in latent space is a unit Gaussian and the noise model is a diagonal Gaussian. The marginal in data space is then Gaussian with a constrained covariance matrix.

Principal component analysis (PCA), which can be considered a particular case of factor analysis with an isotropic Gaussian noise model (Tipping and Bishop, 1997).

The generative topographic mapping (GTM) (Bishop et al., 1998), a nonlinear latent variable model in which the mapping is a generalised linear model, the prior in latent space is discrete uniform and the noise model is isotropic Gaussian. The marginal in data space is then a constrained mixture of Gaussians.

Finite mixtures of latent variable models may also be constructed, and the EM algorithm used to obtain a maximum likelihood parameter estimate (Everitt and Hand, 1981). A reduced-dimension representative can be obtained from the mixture component with the highest responsibility, or as the average of the per-component representatives weighted by the responsibilities. We also consider mixtures of multivariate Bernoulli distributions, where each component is parameterised by a D-dimensional probability vector; note that this does not define a latent space, but only a discrete collection of components.

3 Experiments

We used a subset of the EUR-ACCOR database (Marchal and Hardcastle, 1993), consisting of N = 118 62-bit EPG frames sampled at 200 Hz from 14 different utterances by an English speaker. The data set was split into a training set (75%) and a test set (25%). All the data used were unlabelled. Using the training set, we found maximum likelihood estimates for the following probability models: factor analysis, PCA, GTM, mixtures of factor analysers and mixtures of multivariate Bernoulli distributions. Figure 3 shows the prototypes found by several of these methods. Figure 4 gives the log-likelihood and reconstruction error curves for each method on both the training and test sets. Performing the same experiments on data sets from which all repeated EPG frames had been removed did not significantly change the shape of these curves.

Figure 3: Prototypes and factors found by several methods. Each 62-dimensional vector is pictured as the EPG frames of fig. 1, but now each pixel is represented by a rectangle whose area is proportional to the magnitude of the pixel value and whose colour is black for positive values and white for negative ones. Row 1: factors after varimax rotation. Row 2: principal components after varimax rotation. Row 3: means μ_m and factor loadings Λ_m for a mixture of factor analysers. Row 4: prototypes for a mixture of multivariate Bernoulli distributions.

Factor analysis and principal component analysis were performed on the EPG data, followed by varimax rotation to improve factor interpretability without altering the model. Each 62-dimensional factor loadings vector or principal component vector may be considered as a dimensionality reduction index, in the sense that projecting an EPG frame on the vector is equivalent to a linear combination of its components. The resultant basis vectors are shown in the first two rows of fig. 3 for 9th-order models. Several of these factors can be associated with well-known EPG data reduction indices or with linear combinations of them; e.g. λ1 with a velar index, or another factor with an alveolar one. But note that the derived factors indicate adaptive structure (e.g. asymmetry) which a priori derived indices cannot capture. Several of the principal components are similar to some of the factor loading vectors, because for this data set the uniquenesses matrix was relatively isotropic (i.e. a multiple of the identity), in which case factor analysis is equivalent to PCA.

We used the probabilistic model of Tipping and Bishop (1997) to compute the log-likelihood curves for PCA in fig. 4. PCA outperforms factor analysis in reconstruction error, and factor analysis outperforms PCA in log-likelihood. Thus, in terms of generative modelling factor analysis is superior to PCA, but in terms of reconstruction error PCA is the better linear method.
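A comparison of this kind can be sketched with off-the-shelf maximum-likelihood implementations. The snippet below uses scikit-learn's FactorAnalysis and PCA (whose score method evaluates the probabilistic PCA likelihood of Tipping and Bishop) on synthetic stand-in data, so it only illustrates how held-out log-likelihood and reconstruction error curves such as those of fig. 4 can be produced, not the paper's actual numbers.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(0)
# Synthetic stand-in for the EPG training/test matrices (the real data are binary 62-d frames).
T_train = rng.normal(size=(900, 62))
T_test = rng.normal(size=(300, 62))

for L in (2, 5, 9):
    fa = FactorAnalysis(n_components=L).fit(T_train)
    pca = PCA(n_components=L).fit(T_train)
    # Average held-out log-likelihood per frame under each linear latent variable model:
    ll_fa, ll_pca = fa.score(T_test), pca.score(T_test)
    # Squared reconstruction error of PCA on the test set:
    T_rec = pca.inverse_transform(pca.transform(T_test))
    err_pca = np.mean(np.sum((T_test - T_rec) ** 2, axis=1))
    print(f"L={L}: logL FA={ll_fa:.2f}, PCA={ll_pca:.2f}, PCA rec. error={err_pca:.2f}")
```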

Generative topographic mapping (GTM)

Both factor analysis and PCA can only extract linear structure from the data. For factor analysis, the null hypothesis that the data sample can be explained with L factors was rejected for all values of L < 45 at a significance level of 5%. Using an EM algorithm (Bishop et al., 1998), we trained a two-dimensional GTM model with the following parameters: a regular grid of K points in the two-dimensional latent space, scaled to the [-1, 1] x [-1, 1] square, and an F x F grid in the same square of F² Gaussian basis functions, of width equal to the separation between the basis function centres; F was varied from 2 to 14. For each data point we took as reduced-dimension representative the mode of its posterior distribution (which was unimodal and sharply peaked for most of the data points). The log-likelihood curve for the test set as a function of the number of basis functions used shows a maximum for F = 4, indicating that overfitting occurs for F > 4 (but observe that the reconstruction error keeps decreasing steadily past that limit). Comparison with the other methods shows that GTM, using a latent space of only L = 2 dimensions, outperforms all the other methods in log-likelihood and reconstruction error over a wide range of latent space dimensions. PCA needs substantially more principal components to attain the same reconstruction error as GTM with F = 4 basis functions, and all L = 62 components to surpass its log-likelihood.
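To make the GTM dimensionality-reduction step explicit, the sketch below computes, for a GTM with a latent grid, F x F Gaussian basis functions, weight matrix W and inverse noise variance β, the posterior responsibilities of the grid points for each data vector and returns the posterior mode as the two-dimensional representative. The weights here are random placeholders standing in for trained parameters, and the grid and basis sizes are illustrative rather than those used in the experiments.

```python
import numpy as np

def gtm_reduce(T, K_side=10, F_side=4, beta=10.0, seed=0):
    """Posterior-mode dimensionality reduction with a (randomly parameterised) GTM.
    T: (N, D) data.  Latent grid: K_side x K_side points in [-1, 1]^2.
    Basis: F_side x F_side Gaussian RBFs, width = spacing between centres."""
    rng = np.random.default_rng(seed)
    N, D = T.shape
    g = np.linspace(-1, 1, K_side)
    X = np.array([(a, b) for a in g for b in g])              # (K, 2) latent grid points
    c = np.linspace(-1, 1, F_side)
    C = np.array([(a, b) for a in c for b in c])              # (F^2, 2) basis centres
    width = c[1] - c[0]
    Phi = np.exp(-((X[:, None, :] - C[None, :, :]) ** 2).sum(-1) / (2 * width ** 2))  # (K, F^2)
    W = rng.normal(size=(Phi.shape[1], D))                    # placeholder for trained weights
    Y = Phi @ W                                               # (K, D) images f(x_k) in data space
    # Responsibilities r_nk proportional to exp(-beta/2 ||t_n - y_k||^2), uniform grid prior
    d2 = ((T[:, None, :] - Y[None, :, :]) ** 2).sum(-1)       # (N, K) squared distances
    logR = -0.5 * beta * d2
    R = np.exp(logR - logR.max(axis=1, keepdims=True))
    R /= R.sum(axis=1, keepdims=True)
    modes = X[R.argmax(axis=1)]                               # posterior mode for each t_n
    return modes, R

modes, R = gtm_reduce(np.random.default_rng(1).normal(size=(5, 62)))
print(modes)   # 2-D representatives of the 5 data vectors
```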

Figure 4: Comparison between methods in terms of log-likelihood (top) and reconstruction error (bottom); left: training set, right: test set. Methods: factor analysis (FA), principal component analysis (PCA), generative topographic mapping (GTM), mixtures of factor analysers (MFA) and mixtures of multivariate Bernoulli distributions (MB). Note that the x axis refers to the order of the factor analysis or principal component analysis, the number of mixture components in the case of mixture models, and the square root of the number of basis functions in the case of GTM.

Mixtures of factor analysers with L = 1 factor per factor analyser¹ were estimated using an EM algorithm (Ghahramani and Hinton, 1996) with random starting points. Fig. 3 (row 3) shows the loading vectors and means for a mixture of M = 4 components. The effect of the model is to place factors (Λ_m) in different regions of data space (centred at the means μ_m); the factors found coincide with some of the factors found in factor analysis, or with linear combinations of them, and the means coincide with some of the typical EPG patterns of fig. 1. In the training set, the log-likelihood and reconstruction error of the mixture are always better than those of factor analysis, but there is not much improvement in the test set. The log-likelihood surface has a number of local maxima of different log-likelihood value, and that explains the ragged appearance of the curves (where each point corresponds to a single estimate, not to an average of several estimates).

¹ This kind of mixture has a number of log-likelihood singularities in parameter space (Heywood cases), and for L > 1 EM failed to converge to a proper solution.

Mixtures of multivariate Bernoulli distributions were estimated using an EM algorithm (Everitt and Hand, 1981) with random starting points (a minimal EM sketch for such a mixture is given at the end of this section). Fig. 3 (row 4) shows the prototypes p_m for a 9-component mixture. Note that each p_md value lies in [0, 1] and thus is readily interpretable as an EPG vector, unlike loading vectors, which can have both positive and negative values. Again, most of the prototypes coincide with some of the typical EPG patterns of fig. 1. However, the log-likelihood value is lower than that of any of the other methods and the reconstruction error is also greater. The reason is that each component of the mixture lacks a latent space and can only reconstruct a data vector as its prototype p_m.

Two-dimensional visualisation of EPGs

We can represent graphically a sequence of EPG frames (a sample from an utterance) by plotting their projections in a two-dimensional latent space (joining consecutive points with a line). Figure 5 shows such a representation for factor analysis, using two of the factors (left), and for GTM (right). For linear projections (such as the ones provided by factor analysis and PCA), finding the best projection (e.g. in the sense of being as non-Gaussian as possible) is called projection pursuit. However, this is not the criterion optimised by factor analysis, which, in general, is therefore not a good method for this purpose. The two-dimensional latent space produced by GTM gives a qualitatively better visualisation of the articulatory space than that of two-dimensional linear projections.

Figure 5: Two-dimensional plot of the trajectory of the highlighted utterance fragment "I prefer Kant to Hobbes for a good bedtime book", with its phonemic transcription. Left: using factor analysis (latent space spanned by two of the factors). Right: using GTM with F = 4 basis functions and a latent space grid (points are numbered consecutively). The start and end points of the trajectory are marked. The phonemes are those of figure 1.
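As referenced above, maximum-likelihood estimation of a mixture of multivariate Bernoulli distributions takes only a few lines of EM. The sketch below is a generic implementation run on synthetic binary frames, with the component count and data sizes chosen for illustration only, not the paper's configuration.

```python
import numpy as np

def bernoulli_mixture_em(T, M=9, n_iter=100, seed=0, eps=1e-6):
    """EM for a mixture of M multivariate Bernoulli distributions.
    T: (N, D) binary data.  Returns mixing proportions pi (M,) and prototypes P (M, D)."""
    rng = np.random.default_rng(seed)
    N, D = T.shape
    pi = np.full(M, 1.0 / M)
    P = rng.uniform(0.25, 0.75, size=(M, D))          # random initial prototypes
    for _ in range(n_iter):
        # E-step: responsibilities r_nm ~ pi_m * prod_d P_md^t_nd (1 - P_md)^(1 - t_nd)
        logp = T @ np.log(P + eps).T + (1 - T) @ np.log(1 - P + eps).T + np.log(pi + eps)
        logp -= logp.max(axis=1, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(axis=1, keepdims=True)
        # M-step: update mixing proportions and prototypes
        Nm = R.sum(axis=0)
        pi = Nm / N
        P = (R.T @ T) / (Nm[:, None] + eps)
    return pi, P

# Usage on synthetic binary "EPG-like" frames:
rng = np.random.default_rng(1)
T = (rng.uniform(size=(500, 62)) < 0.3).astype(float)
pi, P = bernoulli_mixture_em(T)
print(pi.round(3))
```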

4 Discussion

Latent variable models can be useful to present the relatively high-dimensional information contained in the EPG sequence in a way which is easier to understand and handle. This is a significant advantage over general probability models and mixtures of them, which are suitable for classification or clustering but not for dimensionality reduction. Finite mixtures offer the possibility of fitting, in a soft way, different models in different data space regions and thus have the potential to model complex data. However, training is slow and often difficult due to the log-likelihood surface being plagued with singularities.

For the cases studied, overfitting in the log-likelihood can appear if the number of parameters of the model is large enough, but the reconstruction error decreases steadily in both the training and test sets for any number of parameters. Unidentified models (in which different, nontrivial combinations of parameter values produce exactly the same distribution) can produce different estimates from the same data set. This is not the case for factor analysis and PCA, and it seems not to matter much for mixtures of multivariate Bernoulli distributions (Carreira-Perpiñán and Renals, 1998), but it may pose problems of interpretation for the other models.

GTM has proven to be the best method both in log-likelihood and reconstruction error, despite being limited to a two-dimensional latent space due to its computational complexity. This suggests that the intrinsic dimensionality of the EPG data may be substantially smaller than that suggested (5 to 10) by other studies (e.g. Nguyen et al., 1996). All the methods we have studied require knowledge of the latent space dimension or the number of mixture components; a possible way to determine the optimal ones is by model selection.

Acknowledgments

This work was supported by ESPRIT Long Term Research Project SPRACH (20077), by a scholarship from the Spanish Ministry of Education and Science and by an award from the Nuffield Foundation. The authors acknowledge Markus Svensén and Zoubin Ghahramani for the Matlab implementations of GTM and the mixtures of factor analysers, respectively, and Alan Wrench for providing them with the ACCOR data. Matlab implementations of principal component analysis, factor analysis (and varimax rotation) and mixtures of multivariate Bernoulli distributions are available from the first author at URL http://www.dcs.shef.ac.uk/~miguel/.

References

Bartholomew, D. J. (1987). Latent Variable Models and Factor Analysis. Charles Griffin & Company Ltd., London.

Bishop, C. M., Svensén, M., and Williams, C. K. I. (1998). GTM: The generative topographic mapping. Neural Computation, 10(1):215-234.

Carreira-Perpiñán, M. Á. and Renals, S. J. (1998). On finite mixtures of multivariate Bernoulli distributions. Submitted to Neural Computation.

Everitt, B. S. and Hand, D. J. (1981). Finite Mixture Distributions. Monographs on Statistics and Applied Probability. Chapman & Hall, London, New York.

Ghahramani, Z. and Hinton, G. E. (1996). The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, University of Toronto.

Hardcastle, W. J., Gibbon, F. E., and Jones, W. (1991a). Visual display of tongue-palate contact: Electropalatography in the assessment and remediation of speech disorders. Brit. J. of Disorders of Communication, 26:41-74.

Hardcastle, W. J., Gibbon, F. E., and Nicolaidis, K. (1991b). EPG data reduction methods and their implications for studies of lingual coarticulation. J. of Phonetics, 19:251-266.

Marchal, A. and Hardcastle, W. J. (1993). ACCOR: Instrumentation and database for the cross-language study of coarticulation. Language and Speech, 36(2-3):137-153.

Nguyen, N., Marchal, A., and Content, A. (1996). Modeling tongue-palate contact patterns in the production of speech. J. of Phonetics, 24:77-97.

Rubin, D. B. and Thayer, D. T. (1982). EM algorithms for ML factor analysis. Psychometrika, 47(1):69-76.

Tipping, M. E. and Bishop, C. M. (1997). Mixtures of principal component analysers. In Proceedings of the IEE Fifth International Conference on Artificial Neural Networks. London: IEE.