BINAURAL SOUND LOCALIZATION FOR UNTRAINED DIRECTIONS BASED ON A GAUSSIAN MIXTURE MODEL


Takanori Nishino
Center for Information Media Studies, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8603, Japan
phone: + (81) 52-789-421, fax: + (81) 52-789-3172, email: nishino@media.nagoya-u.ac.jp
web: http://www.sp.m.is.nagoya-u.ac.jp/%7enishino/

Kazuya Takeda
Graduate School of Information Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8603, Japan
phone: + (81) 52-789-3629, fax: + (81) 52-789-3172, email: kazuya.takeda@nagoya-u.jp

ABSTRACT

We propose and evaluate methods for estimating any sound source direction on the horizontal plane from binaural signals, based on a Gaussian mixture model (GMM). A GMM-based method can estimate a sound source direction on which a GMM has already been trained; however, it cannot estimate a direction for which no trained model exists. We investigate three methods that use interpolation techniques. Two generate GMMs for all directions by interpolating either an acoustic transfer function or the statistical values of the GMMs, and the third calculates the posterior probability for all directions from a limited number of GMMs. In our experiments, we investigated six interval conditions. The interpolation of the acoustic transfer function and of the GMM statistics achieved the better performance. Although only 12 trained GMMs were available at 30° intervals, interpolating the GMM statistics achieved 62.5% accuracy (45/72) with 2.8° estimation error. These results indicate that the proposed methods can estimate all sound source directions from a small amount of known information.

1. INTRODUCTION

The detection of sound source direction is a crucial technique widely used in such fields as speech enhancement, sound recording, and security systems. Studies based on microphone arrays are abundant and employ many microphones to obtain high detection performance; reducing the number of microphones would lower costs and facilitate maintenance. Humans can localize sound with only two ears. A binaural signal can be represented as the convolution of a sound source signal and a binaural room impulse response (BRIR). A BRIR is composed of a head-related transfer function (HRTF) and a room impulse response that represents the acoustic environment. We perceive the sound source direction through the interaural time difference and the interaural level difference (ILD) contained in binaural signals. Previous research examined the probability distributions of these interaural differences [1], and a sound localization model using such distributions has been proposed [2]. Estimation methods using HRTFs have also been examined for robot hearing [3, 4, 5, 6]. The detection of sound source direction is performed by comparing the acquired memory of sound localization with the information obtained from the current sound [7]. A method based on a Gaussian mixture model (GMM) was investigated as one training-based scheme [8]. In that study, experiments for estimating the sound source direction were conducted under eight reverberation conditions, and the results showed that the method could estimate the sound source direction in rooms with various reverberation times. Moreover, we conducted experiments in an in-car environment [9], and the results showed that the evaluated method is robust to sound source situations and to the influence of noise.
These training-based methods can estimate trained directions; however, a sound source in an untrained direction basically cannot be estimated. GMMs would have to be prepared for all directions, which is difficult because measuring binaural signals for every sound source direction is costly and demands substantial computing resources. In this paper, we propose a novel estimation method for all sound source directions using a binaural signal based on GMM. Three estimation methods for all sound source directions were investigated using interpolation techniques. Our method applies interpolation to the BRIRs, to the statistical values of the GMM, and to the posterior probability. Estimation performance was evaluated by correct rate and estimation error.

2. SOUND LOCALIZATION BASED ON GMM

2.1 Feature parameter

In our method, the ILD is represented by a cepstrum-like parameter called the ILD cepstrum, which captures its rough tendencies and separates the components due to the HRTFs from those due to the room environment. The ILD cepstrum is obtained with the following procedure (a code sketch follows the procedure):

1. The signals arriving at the left and right ears are denoted s_L[t] and s_R[t], respectively. Both are truncated with a Hamming window whose frame length and frame shift are l = 128 and l_s = 32, respectively, based on the results of our preliminary experiment.

2. The ILD is calculated using Eq. (1). However, when one of the signals has an absolute amplitude below the threshold, the ILD calculation is not conducted:

$$S_{LR}(f_k) = \frac{S_L(f_k)}{S_R(f_k)}, \tag{1}$$

where S_L(f_k) is the magnitude response of the left-ear signal, S_R(f_k) is that of the right ear, and f_k denotes the frequency. We used a threshold of 0.5 based on the results of our preliminary experiment.

3. The Fourier transform is applied to the logarithm of the ILD, and feature parameter c[n] is obtained using Eq. (2). In our study, the ILD cepstrum is denoted by c[n]; the ILD envelope is thus given by the lower-order ILD cepstrum coefficients:

$$c[n] = \frac{1}{N} \sum_{k=0}^{N-1} \log_{10} S_{LR}(f_k)\, e^{j 2 \pi k n / N}, \quad (n = 0, 1, \ldots, N). \tag{2}$$

The condition of N = 15 was also investigated. Since a BRIR is the convolution of the room impulse response and the HRTF, the ILD cepstrum can be represented as the sum of components resulting from the room impulse response and the HRTF.
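As a concrete illustration, here is a minimal Python/NumPy sketch of steps 1-3 under our own reading of the procedure. The frame length, shift, and threshold follow the values above; the interpretation of the threshold as a peak-amplitude test and the truncation to 15 low-order coefficients (our reading of the N = 15 condition) are assumptions, and all function and variable names are ours, not the authors'.

```python
import numpy as np

def ild_cepstrum_frames(s_l, s_r, l=128, l_s=32, thresh=0.5, n_keep=15):
    """ILD cepstra per frame, following steps 1-3 (Eqs. (1)-(2)).
    n_keep truncates to the low-order coefficients that carry the
    ILD envelope (our reading of the N = 15 condition)."""
    win = np.hamming(l)
    feats = []
    for start in range(0, min(len(s_l), len(s_r)) - l + 1, l_s):
        fl = s_l[start:start + l] * win
        fr = s_r[start:start + l] * win
        # Step 2: skip frames whose peak amplitude is below the threshold.
        if np.max(np.abs(fl)) < thresh or np.max(np.abs(fr)) < thresh:
            continue
        eps = 1e-12  # guard against log10(0) in silent bins
        ild = (np.abs(np.fft.rfft(fl)) + eps) / (np.abs(np.fft.rfft(fr)) + eps)
        # Eq. (2): inverse-DFT-style transform (with the 1/N factor) of the
        # log10 ILD; np.fft.irfft carries the 1/N normalization.
        c = np.fft.irfft(np.log10(ild))
        feats.append(c[:n_keep])
    return np.array(feats)  # shape: (valid frames, n_keep)
```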

2.2 GMM training and sound source direction estimation with GMM

A GMM, a statistical model representing a linear combination of Gaussian distributions, is often used for speech recognition, speaker identification, and so on. In our method, a Gaussian model for every direction is prepared and trained using binaural signals. The training procedure is as follows:

1. The distribution of the ILD cepstrum c[n] given by Eq. (2) is approximated with a Gaussian distribution, which serves as the statistical model for estimating the sound source direction. The statistical model for each sound source direction can be represented by

$$\lambda_\theta = \{\, w_{\theta,m}, \mu_{\theta,m}, \Sigma_{\theta,m} \mid m = 1, 2, \ldots, M \,\}.$$

2. The expectation-maximization (EM) algorithm gives the weight w_m, mean μ_m, and covariance matrix Σ_m of each component, and estimation model λ_θ is trained for every sound source direction. Our method uses a diagonal covariance matrix. In our experiments, a single-mixture model (M = 1) was used because the distributions of the ILD cepstrum were unimodal.

The procedure for estimating the sound source direction with the Gaussian model is as follows:

1. The ILD cepstrum c[n] of the input signals is calculated with the same procedure as in Section 2.1.

2. The posterior probability of every trained Gaussian model given the ILD cepstra of the input signals is calculated. The direction of the model that gives the maximum posterior probability is taken as the sound source direction.

Figure 1 shows a posterior probability map when GMMs for all directions were prepared and the input (test) signal is identical to the training signal for every azimuth. In this figure, the horizontal axis is the target azimuth, which corresponds to the input signal, and the vertical axis is the evaluated azimuth, which corresponds to the trained GMMs. Colors represent the posterior probability. Posterior probabilities were normalized at every target azimuth because the posterior probability is computed independently for each target azimuth. When sound localization is performed correctly, as in Fig. 1, dark red appears on the diagonal. Note also that higher probabilities appear at directions that cause front-back confusion.

Figure 1: Posterior probability map when training and test signals are identical.

3. METHOD FOR UNTRAINED SOUND SOURCE DIRECTION ESTIMATION

Three methods to estimate every sound source direction were investigated using interpolation techniques. The first and second methods prepare GMMs for all directions with an interpolation method. In our study, since binaural signals were obtained by convolving a BRIR with an evaluation signal, the interpolation was applied to the BRIRs. If the interpolation is precise, good estimation can be achieved; however, neither method decreases the number of posterior probability calculations. The third method, which interpolates the posterior probability itself, does decrease the number of posterior probability calculations.
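Before detailing the three methods, the baseline scheme of Section 2.2 that all of them build on can be sketched as follows. This is a minimal sketch assuming a single diagonal Gaussian per trained azimuth (M = 1, as in our experiments) and equal priors over directions, so maximizing the posterior reduces to maximizing the likelihood; all names are illustrative.

```python
import numpy as np

def train_models(feats_by_azimuth):
    """Fit one Gaussian per trained azimuth. With M = 1 and a diagonal
    covariance, the EM fit reduces to the sample mean and per-dimension
    variance of the ILD cepstra for that direction."""
    return {az: (X.mean(axis=0), X.var(axis=0) + 1e-9)   # X: (frames, dims)
            for az, X in feats_by_azimuth.items()}

def log_likelihood(X, mean, var):
    """Summed diagonal-Gaussian log density over all frames in X."""
    return np.sum(-0.5 * (np.log(2.0 * np.pi * var) + (X - mean) ** 2 / var))

def estimate_direction(X, models):
    """Pick the trained azimuth that maximizes the posterior; with equal
    priors this is simply the azimuth with the highest likelihood."""
    return max(models, key=lambda az: log_likelihood(X, *models[az]))
```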
Method 1: The BRIR is interpolated by a simple linear interpolation method [10]. Interpolated BRIR ĥ[t] is given by

$$\hat{h}[t] = r\, h_{\theta_1}[t] + (1 - r)\, h_{\theta_2}[t - \tau], \quad 0 \le r \le 1, \tag{3}$$

where h_{θ1}[t] and h_{θ2}[t] are measured BRIRs, r is a dividing ratio, and τ is determined from the cross-correlation coefficient between h_{θ1}[t] and h_{θ2}[t]. To obtain higher interpolation performance, the BRIRs were first upsampled by a factor of 10 [11]; after interpolation, they were downsampled again. Except for the measured directions, the GMM of every direction was trained with the interpolated BRIRs.

Method 2: A GMM has means, variances, and weight coefficients. GMMs were trained using the measured BRIRs to determine these statistical values. A statistical vector was defined from the calculated values, for example $\boldsymbol{\mu} = \{\mu_{0}, \ldots, \mu_{\theta}, \ldots, \mu_{360}\}$. The statistical values of an arbitrary direction were obtained by interpolating the vectors with the cubic spline method.

Method 3: GMMs were trained using the measured BRIRs, and the posterior probabilities P_θ between the evaluated signals and the trained models were then calculated. The posterior probability vector was defined for interpolation as $P = \{P_{0}, \ldots, P_{\theta}, \ldots, P_{360}\}$. The posterior probability of a desired direction was obtained by interpolating P with the cubic spline method. (Code sketches of the three interpolation schemes follow.)
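The three interpolation schemes can be sketched as follows with SciPy. The factor-10 resampling, the cross-correlation delay, and the cubic spline over azimuth follow the descriptions above; the zero-padded shift, the FFT-based correlation, and the periodic boundary condition on the azimuth grid are our assumptions, and the function names are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import correlate, resample_poly

def _shift(x, tau):
    """Integer delay by tau samples with zero padding (no wrap-around)."""
    y = np.zeros_like(x)
    if tau >= 0:
        y[tau:] = x[:len(x) - tau]
    else:
        y[:tau] = x[-tau:]
    return y

def interp_brir(h1, h2, r, up=10):
    """Method 1 (Eq. (3)): h_hat = r*h1 + (1 - r)*h2[t - tau], computed on
    10x-upsampled responses and downsampled back. tau is the
    cross-correlation lag that best aligns h2 with h1."""
    u1 = resample_poly(h1, up, 1)
    u2 = resample_poly(h2, up, 1)
    lags = correlate(u1, u2, mode="full", method="fft")
    tau = int(np.argmax(lags)) - (len(u2) - 1)
    u_hat = r * u1 + (1.0 - r) * _shift(u2, tau)
    return resample_poly(u_hat, 1, up)

def interp_over_azimuth(azimuths_deg, values, query_deg):
    """Methods 2 and 3: cubic-spline interpolation over the azimuth grid of
    either GMM statistics (Method 2) or posterior probabilities (Method 3).
    The paper's vectors include both endpoints (0 deg and 360 deg, which are
    the same direction), so a periodic boundary condition is natural."""
    spline = CubicSpline(azimuths_deg, values, axis=0, bc_type="periodic")
    return spline(query_deg)
```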

4. EXPERIMENTS

4.1 BRIR measurement

BRIRs were measured with a head-and-torso simulator (HATS, B&K 4128) in an echoic measurement room using a swept-sine signal [12] with a duration of 1.365 s, reproduced by a loudspeaker (BOSE Acoustimass). The HATS was positioned on a turntable, and the loudspeaker was placed on an arched traverse. Both the turntable and the arched traverse could be moved in 1° steps with 0.3° accuracy. The distance between the loudspeaker and the center of the bitragion was 1.2 m. BRIRs were measured for 72 azimuths on the horizontal plane at a sampling frequency of 48 kHz. The reverberation time of the measurement room was 151 ms, and its background noise level was 19.1 dB(A). The azimuth angles of the sound source (loudspeaker) were defined as follows: the front was 0°, the left was 90°, the right was 270°, and the direction directly behind the HATS was 180°.

4.2 Experimental conditions

In the experiment, a binaural signal was obtained by convolving the BRIR with babble noise generated by superposing many speech signals. We used babble noise with 24 superpositions and a signal duration of 2 s. Signal durations for training and evaluation were identical. The evaluated intervals were 10°, 15°, 30°, 45°, and 60°. For example, in the case of 10° intervals, GMMs were trained for 0°, 10°, ..., 360° (the same as 0°), and 72 azimuths (0°, 5°, ..., 355°) were estimated.

For Method 1, the BRIRs must be interpolated at arbitrary azimuths from the BRIRs measured at the evaluated intervals. Interpolation performance was evaluated by the signal-to-deviation ratio (SDR) and the spectral distortion (SD):

$$\mathrm{SDR} = 10 \log_{10} \frac{\sum_{n=1}^{N} h^2[n]}{\sum_{n=1}^{N} \{h[n] - \hat{h}[n]\}^2} \;\; [\mathrm{dB}], \tag{4}$$

where h[n] is the measured BRIR and ĥ[n] is the interpolated BRIR. N is the duration of the BRIR, and N = 65,536 was used. The interpolated BRIR is more similar to the measured BRIR when a larger SDR is obtained.

$$\mathrm{SD} = \sqrt{\frac{1}{K} \sum_{k=1}^{K} \left( 20 \log_{10} \frac{|H(f_k)|}{|\hat{H}(f_k)|} \right)^2} \;\; [\mathrm{dB}], \tag{5}$$

where H(f_k) is the frequency magnitude response of the measured HRTF, Ĥ(f_k) is that of the interpolated HRTF, and f_k is the frequency. K is the number of frequency bins, and K = 513 was used. The interpolated HRTF is more similar to the measured HRTF when a smaller SD score is obtained.

Figure 2: Average signal-to-deviation ratio of interpolated BRIRs. Solid and dashed lines represent the performance of left- and right-ear BRIRs, respectively.

Figure 3: Average spectral distortion of interpolated BRIRs. Solid and dashed lines represent the performance of left- and right-ear BRIRs, respectively.

Figures 2 and 3 show the performances. As the interval became larger, the interpolation performance became worse. The interpolation performance for BRIRs was worse than for HRTF interpolation [10] because a BRIR is long and includes a reverberation component.

4.3 Results

Figures 4 and 5 show the correct rate and the estimation error, respectively. An answer was counted as correct when the target and estimated azimuths were identical. The estimation error was calculated by

$$e = \frac{1}{72} \sum_{i=1}^{72} \left| \theta_i - \hat{\theta}_i \right| \;\; [\mathrm{deg}], \tag{6}$$

where θ_i is the i-th target azimuth and θ̂_i is the i-th estimated azimuth. (A sketch of these evaluation measures follows.)
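The evaluation measures transcribe directly into code. The sketch below follows Eqs. (4)-(6) as printed; note the caveat in the comments that Eq. (6), as printed, does not wrap azimuth differences around 360°, which is our observation rather than the paper's.

```python
import numpy as np

def sdr_db(h, h_hat):
    """Eq. (4): signal-to-deviation ratio of an interpolated BRIR [dB];
    larger values mean the interpolated BRIR is closer to the measured one."""
    return 10.0 * np.log10(np.sum(h ** 2) / np.sum((h - h_hat) ** 2))

def sd_db(H, H_hat):
    """Eq. (5): spectral distortion between measured and interpolated HRTF
    magnitude responses over K frequency bins [dB]; smaller is better."""
    d = 20.0 * np.log10(np.abs(H) / np.abs(H_hat))
    return np.sqrt(np.mean(d ** 2))

def correct_rate(targets, estimates):
    """Fraction of test azimuths estimated exactly (the paper's correct rate)."""
    return np.mean(np.asarray(targets) == np.asarray(estimates))

def estimation_error_deg(targets, estimates):
    """Eq. (6): mean absolute azimuth error in degrees. The printed formula
    does not wrap around 360 deg; circular errors (e.g., 355 deg vs 0 deg)
    would need |d| -> min(|d|, 360 - |d|)."""
    d = np.abs(np.asarray(targets, float) - np.asarray(estimates, float))
    return np.mean(d)
```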
Trained directions were included in the calculation of the correct rate and the error, to reflect the possibility of mistakes at the trained directions. The results show that performance degrades as the interval grows. The difference between Methods 1 and 2 is small, and both methods estimated with greater than 60% accuracy and within 3° estimation error at intervals of up to 30°.

Figure 4: Correct rate.

Figure 5: Estimation error.

For Method 1, the estimation performance correlates strongly with the SDR rather than with the SD score; the correlation coefficient between the results of Method 1 and the SDR was 0.96. Therefore, the interpolation method must be improved to obtain good estimation performance. Figures 6 to 8 show the posterior probability maps and estimated directions for 30° intervals, and Figures 9 to 11 show those for 60° intervals. In these figures, the horizontal axis is the target azimuth, which corresponds to the input signal, and the vertical axis is the evaluated azimuth, which corresponds to the trained GMMs. Colors represent the posterior probability, and dots mark the estimated directions. Except for Figure 11, all of these maps resemble the original map (Fig. 1). Therefore, the sound source direction can be estimated using a small amount of known information. However, since estimation performance was worse for directions around both ears, improvement is still necessary. For Method 3 with large intervals, variable intervals should be examined instead of uniform intervals.

5. CONCLUSION

Three GMM-based methods for estimating all sound source directions on the horizontal plane were investigated. The results showed that good performance was obtained by preparing GMMs for all directions through interpolation: both interpolation-based methods estimated with greater than 60% accuracy and within 3° estimation error at intervals of up to 30°. Even though the method that interpolates the posterior probability did not perform as well, it is still considered useful because its posterior probability maps are similar to those of the other two methods and it significantly decreases the number of calculations. Future work includes conducting experiments under several reverberation conditions, using the interaural time difference, and applying the method to robot hearing.

Acknowledgment

This work was supported by a Grant-in-Aid for Scientific Research (No. 177183 and No. 18364).

REFERENCES

[1] P. M. Zurek, "Probability distributions of interaural phase and level differences in binaural detection stimuli," J. Acoust. Soc. Am., vol. 90, pp. 1927-1932, 1991.

[2] D. Heavens and T. Pearce, "Binaural sound localization by level distributions," Proc. ICA 2007, PPA-5-12, 2007.

[3] F. Keyrouz, Y. Naous, and K. Diepold, "A new method for binaural 3-D localization based on HRTFs," Proc. ICASSP 2006, vol. 5, pp. 341-344, 2006.

[4] J. Hörnstein, M. Lopes, J. Santos-Victor, and F. Lacerda, "Sound localization for humanoid robots - building audio-motor maps based on the HRTF," Proc. IROS 2006, 2006.

[5] Z. Zhang, K. Itoh, S. Horihata, T. Miyake, and T. Imamura, "Estimation of sound source direction using a binaural model," Proc. SICE-ICASE 2006 International Joint Conference, pp. 451-456, 2006.

[6] L. Calmes, G. Lakemeyer, and H. Wagner, "Azimuthal sound localization using coincidence of timing across frequency on a robotic platform," J. Acoust. Soc. Am., vol. 121, pp. 2034-2048, 2007.

[7] G. Plenge, "Über das Problem der Im-Kopf-Lokalisation [On the problem of in-head localization]," Acustica, vol. 26, pp. 241-252, 1972.

[8] T. Nishino, N. Inoue, K. Itou, and K. Takeda, "Estimation of sound source direction based on Gaussian model using envelope of interaural level difference," J. Acoust. Soc. Jpn., vol. 63, pp. 3-12, 2007.
[9] M. Takimoto, T. Nishino, H. Hoshino, and K. Takeda, "Estimation of speaker and listener positions in a car using binaural signals," Acoust. Sci. & Tech., vol. 29, pp. 110-112, 2008.

[10] T. Nishino, S. Mase, S. Kajita, K. Takeda, and F. Itakura, "Interpolating HRTF for auditory virtual reality," J. Acoust. Soc. Am., vol. 100, p. 2602, 1996.

[11] K. Watanabe, S. Takane, and Y. Suzuki, "Interpolation of head-related transfer functions based on the common-acoustical-pole and residue model," Acoust. Sci. & Tech., vol. 24, pp. 335-337, 2003.

[12] N. Aoshima, "Computer-generated pulse signal applied for sound measurement," J. Acoust. Soc. Am., vol. 69, pp. 1484-1488, 1981.

Figure 6: Posterior probability map for Method 1 at 30° intervals. The correct rate is 63.9%, and the estimation error is 2.4°.

Figure 7: Posterior probability map for Method 2 at 30° intervals. The correct rate is 62.5%, and the estimation error is 2.8°.

Figure 8: Posterior probability map for Method 3 at 30° intervals. The correct rate is 38.9%, and the estimation error is 5.2°.

Figure 9: Posterior probability map for Method 1 at 60° intervals. The correct rate is 22.2%, and the estimation error is 7.7°.

Figure 10: Posterior probability map for Method 2 at 60° intervals. The correct rate is 22.2%, and the estimation error is 6.8°.

Figure 11: Posterior probability map for Method 3 at 60° intervals. The correct rate is 12.5%, and the estimation error is 16.3°.