SPATIAL FREQUENCY RESPONSE SURFACES (SFRSs): AN ALTERNATIVE VISUALIZATION AND INTERPOLATION TECHNIQUE FOR HEAD-RELATED TRANSFER FUNCTIONS (HRTFs)


COREY I. CHENG 1 AND GREGORY H. WAKEFIELD 2

1 University of Michigan, Department of Electrical Engineering and Computer Science, Ann Arbor, Michigan, U.S.A. coreyc@eecs.umich.edu
2 University of Michigan, Department of Electrical Engineering and Computer Science, Ann Arbor, Michigan, U.S.A. ghw@eecs.umich.edu

Pre-print: AES16, Rovaniemi, Finland

This paper introduces Spatial Frequency Response Surfaces (SFRSs), an alternative, spatial-domain visualization tool for head-related transfer functions (HRTFs). Qualitative analysis of SFRSs is easier than direct comparison of HRTF magnitude responses, and it reveals many cross-subject similarities in HRTF data that correspond to well-known HRTF-related psychophysical phenomena. We also present an SFRS-based HRTF interpolation algorithm, along with the results of a psychophysical experiment comparing the perceptual quality of interpolated and non-interpolated HRTFs.

1. INTRODUCTION

Previous psychophysical studies have concluded that the acoustic filtering properties of the pinna, head, and torso are important in human localization of sounds in space. The combined effects of this filtering are summarized by a single set of spatially dependent filters known as Head-Related Transfer Functions (HRTFs). HRTFs can be measured empirically and are commonly approximated as FIR filters. The directional component of the HRTF is called the Directional Transfer Function (DTF) and is the quantity most often used in synthesizing virtual auditory space. DTFs can be computed from HRTFs and the transfer function of the HRTF measurement apparatus [10].

This paper presents an alternative visualization of HRTF data by processing existing HRTF data sets into Spatial Frequency Response Surfaces (SFRSs). The purpose of this paper is to describe this spatial-domain visualization method and to show that this representation of HRTF data is both qualitatively and quantitatively useful. Specifically, we show that a simple visual inspection of SFRSs provides insight into many features of HRTF data which are difficult to see in a traditional inspection of raw HRTF magnitude frequency responses. We also show that SFRSs can be used to detect gross calibration errors in the HRTF measurement apparatus. Finally, we propose a spatial, SFRS-based HRTF interpolation algorithm and report the results of a perceptual evaluation of the algorithm.

The present work focuses on several subjects' HRTF data sets, where each data set consists of left- and right-ear magnitude responses measured at 400 different azimuth-elevation locations. Although irregularly spaced, these locations are spaced roughly evenly in the azimuth and elevation directions. The sampling rate was 50 kHz, the measurement resolution was 16 bits, and a 512-point FFT was used to compute the frequency response at each location. The data for this paper were collected at John C. Middlebrooks' laboratory at the Kresge Hearing Research Institute of the University of Michigan.
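The DTF computation itself is only cited here (see [10]), but a rough sketch may help fix ideas. The following Python fragment is a minimal, hedged illustration under assumed conventions, not the authors' exact procedure: `hrirs` and `apparatus_ir` are hypothetical arrays holding 512-tap measured impulse responses for one ear and for the measurement chain, and the directional part is isolated by dividing out the log-mean response across locations.

```python
import numpy as np

def dtf_magnitudes(hrirs, apparatus_ir, n_fft=512):
    """Hedged sketch of one way to estimate DTF magnitude responses.

    hrirs:        array of shape (n_locations, n_taps), measured head-related
                  impulse responses for one ear (hypothetical layout).
    apparatus_ir: impulse response of the measurement chain (speaker,
                  microphone, electronics) to be divided out.
    Returns an (n_locations, n_fft // 2 + 1) array of magnitude responses.
    """
    hrtf = np.abs(np.fft.rfft(hrirs, n=n_fft, axis=1))
    equip = np.abs(np.fft.rfft(apparatus_ir, n=n_fft))
    hrtf = hrtf / np.maximum(equip, 1e-12)          # remove the apparatus response
    common = np.exp(np.mean(np.log(np.maximum(hrtf, 1e-12)), axis=0))
    return hrtf / common                            # keep only the directional part

# Example with synthetic data only (to show the shapes): 400 locations,
# 512-tap responses sampled at 50 kHz.
rng = np.random.default_rng(0)
dtfs = dtf_magnitudes(rng.standard_normal((400, 512)), rng.standard_normal(512))
freqs = np.fft.rfftfreq(512, d=1 / 50_000.0)        # frequency axis in Hz
```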
2. SFRSs AND HRTFs

2.1 Traditional HRTF visualization techniques

Many previous studies have attempted to visualize HRTF data by examining how certain macroscopic properties of HRTF sets, such as peaks and notches at particular locations in the magnitude frequency responses, associate and/or systematically vary with the perception of azimuth (left-right discrimination) and/or elevation (up-down discrimination) [1][4][5][9]. Typically, one attempts to see how these features change when azimuth changes and elevation is held constant, or vice versa. For example, Figure 1 and Figure 2 present one subject's HRTFs in this style: Figure 1 presents HRTFs in the horizontal plane, and Figure 2 presents HRTFs in the median plane.

Figure 1. Traditional-style visualization of several HRTFs in the horizontal plane (elevation 0). Although some patterns are evident in the raw HRTF magnitude frequency responses as azimuth changes, cross-comparison of HRTF data is difficult.
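As an illustration only (not the authors' plotting code), a traditional-style figure like Figures 1 and 2 can be produced roughly as follows, assuming a hypothetical dictionary `hrtf_mag` mapping (azimuth, elevation) pairs to magnitude responses in dB and a matching `freqs` axis.

```python
import matplotlib.pyplot as plt

def plot_traditional(hrtf_mag, freqs, azimuths, elevation=0, offset_db=20.0):
    """Stack magnitude responses for several azimuths at a fixed elevation,
    offsetting each curve vertically so the family can be compared by eye."""
    fig, ax = plt.subplots(figsize=(6, 8))
    for i, az in enumerate(azimuths):
        ax.plot(freqs / 1000.0, hrtf_mag[(az, elevation)] + i * offset_db,
                label=f"azimuth {az}")
    ax.set_xlabel("Frequency (kHz)")
    ax.set_ylabel("Magnitude (dB, curves offset for clarity)")
    ax.legend(fontsize="small")
    return fig
```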

Figure 2. Traditional-style visualization of several HRTFs in the median plane (azimuth 0). Although some patterns are evident in the raw HRTF magnitude frequency responses as elevation changes, cross-comparison of HRTF data is difficult.

Visualization using this technique does highlight some macroscopic features of HRTFs, as there are noticeable patterns in the way HRTFs change as elevation and azimuth change continuously. For example, in Figure 1 one can see that the HRTFs on the contralateral side appear less smooth than those on the ipsilateral side, suggesting that the SNR for ipsilateral HRTFs is generally higher than that for contralateral HRTFs. This agrees with the basic physics of head shadowing. In Figure 2, one can see a notch at 7 kHz that migrates upwards in frequency as elevation increases. A shallow peak can be seen at 12 kHz for lower elevations in the median plane, and this peak flattens out at higher elevations. However, interpretation of these graphs is fairly difficult, as many graphs must be compared in order to generalize findings across azimuths and elevations. It is also difficult to draw cross-subject conclusions from these graphs, since the shapes of individualized HRTFs do not always match well [4].

2.2 Constructing SFRSs from existing HRTF data

SFRSs present the same HRTF magnitude response information, but in a different coordinate system. Specifically, one SFRS is constructed for every frequency bin of the left- or right-ear HRTF magnitude response, with magnitude plotted against azimuth and elevation. In this manner, every HRTF is used in the computation of a single SFRS. Since the spatial sampling pattern is irregular, linear interpolation is used to construct a surface that approximates the continuous spatial frequency response of the ear at the specified frequency. Thus, the value of the SFRS for frequency f at the coordinate (θ, φ) represents how much power at frequency f the right or left ear receives at (θ, φ) compared with other locations. Figure 3 describes the procedure for computing an SFRS for a given ear and frequency bin, while Figure 4 and Figure 5 show representative SFRSs.
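The following is a minimal sketch of the construction described above, not the authors' implementation. It assumes a hypothetical array `mags_db` of HRTF magnitudes (in dB) for one ear at each measured location, and an assumed regular output grid extent; scipy's `griddata` performs the linear (plane-over-triangle) interpolation of the irregular samples.

```python
import numpy as np
from scipy.interpolate import griddata

def build_sfrs(az_deg, el_deg, mags_db, bin_index, grid_step=2.0):
    """Build one SFRS: interpolate the magnitude of a single frequency bin,
    measured at irregularly spaced (azimuth, elevation) locations, onto a
    regular spatial grid.

    az_deg, el_deg: 1-D arrays of measurement locations (degrees).
    mags_db:        array of shape (n_locations, n_bins) of magnitudes in dB.
    bin_index:      which frequency bin's surface to build.
    """
    # Grid extents are an assumption; adjust to the coverage of the data set.
    az_grid, el_grid = np.meshgrid(np.arange(-180.0, 180.0 + grid_step, grid_step),
                                   np.arange(-60.0, 90.0 + grid_step, grid_step))
    surface = griddata(points=np.column_stack([az_deg, el_deg]),
                       values=mags_db[:, bin_index],
                       xi=(az_grid, el_grid),
                       method="linear")            # plane over each triangle
    return az_grid, el_grid, surface

# Example: bin 72 of a 512-point FFT at 50 kHz is roughly 7 kHz.
# az_grid, el_grid, sfrs_7k = build_sfrs(az_deg, el_deg, mags_db, bin_index=72)
```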

[Figure 3. Construction of SFRSs from HRTFs. The value of a single frequency bin (here bin 72, approximately 7 kHz) is taken from the individual HRTFs at all available spatial locations (azimuth 0, elevation 20; azimuth 20, elevation 50; azimuth -60, elevation -30; azimuth 120, elevation 20; ...) and plotted against azimuth and elevation to form the spatial frequency response graph for that bin.]

3. QUALITATIVE ANALYSIS OF SFRSs

3.1 Cross-subject similarity of SFRSs

The spatial frequency graphs are an immediate, visual, and very useful presentation of many qualitative properties of directional hearing. One can readily see general trends in these graphs across subjects which are harder to identify by analyzing raw magnitude frequency responses alone. Initial observations reveal that, for a given frequency, different subjects' SFRSs are very similar to each other. The SFRSs do differ in their absolute magnitudes, but the general cross-subject features of the surfaces are the same. Table 1 summarizes these general, cross-subject observations for several frequencies and highlights the important changes in the features as frequency increases from 1-13 kHz. Figure 4 shows a single subject's SFRSs for several frequencies corresponding to each entry in Table 1. Figure 5 illustrates the cross-subject similarity of SFRSs among three subjects for two different frequencies.

The most striking quality of these graphs is the location, and apparent motion, of the hotspots, or well-defined peaks, of the SFRSs as a function of frequency. The hotspots' well-defined existence starting from 1-2 kHz coincides with the well-known frequency of 1.5 kHz, at which IID (interaural intensity difference) cues become more important for spatial perception than ITD (interaural time difference) cues in traditional duplex theory [1]. Figures 4e-t show that torso effects, predicted by physical models to be pronounced between 1-2 kHz [9], actually occur over a larger frequency range, though in a very limited area around the head. This can be seen as a shallow null on the contralateral side at lower elevations at many frequencies below 13 kHz.

Note also the symmetry of many of the hotspots about the horizontal plane. The hotspots in most graphs containing more than one hotspot are located at the same azimuth and are separated by roughly equal elevations, as can be seen in Figures 4e-j. The only asymmetries in this respect occur between 6-8 kHz and 8-10 kHz in Figures 4o-r, where the magnitude of one of two salient peaks is clearly greater than that of the other.

Frequencies | Figure | Description of corresponding SFRSs
Below 0.6 kHz | 5a,b | Roughly equal power is received from all directions. No salient features are present.
0.6-1 kHz | 5c,d | Head shadowing can be seen, as the ipsilateral ear receives more energy than the contralateral ear. Diffraction effects due to the head can be seen on the contralateral side of the head, near ±100 degrees.
1-2 kHz | 5e,f | Head shadowing becomes more prominent; diffraction effects are clearly seen on the contralateral side of the head. Two to three distinct peaks at ipsilateral azimuth 100, elevations +30 and -30, are starting to form. Attenuating effects of the torso can be seen on the contralateral side at lower elevations.
2-3 kHz | 5g,h | Three peaks in the surface can be seen at ipsilateral azimuth 70-80, elevations 30, 10, and 50. Diffraction effects can still be seen on the contralateral side near contralateral azimuth 100. Torso effects can be seen.
3-4 kHz | 5i,j | The three peaks on the ipsilateral side have moved closer to the median plane and slightly higher in elevation. A fourth peak is starting to form beneath the other three, at ipsilateral azimuth 40, elevation -50. Diffraction effects are starting to lessen, as the contralateral peak at contralateral azimuth 100 is beginning to fade. There are some nulls on the ipsilateral side, and torso effects can still be seen.
4-5 kHz | 5k,l | The three peaks on the ipsilateral side have become one large peak centered near ipsilateral azimuth 50, elevation 0. Diffraction effects are nearly gone, but torso effects can still be seen at lower elevations.
5-6 kHz | 5m,n | The large ipsilateral hotspot has moved farther from the median plane and upwards in elevation. The spot is now at ipsilateral azimuth 75, elevation 20. Torso effects can still be seen.
6-8 kHz | 5o,p | The single 5-6 kHz peak has become two smaller peaks at ipsilateral azimuth 75, elevations -40 and +40. Torso effects can still be seen.
8-10 kHz | 5q,r | The two ipsilateral hotspots are still present, but the lower-elevation peak is more prominent than the higher-elevation peak. A third hotspot is beginning to form on the median plane at azimuth 0, elevation 30. Torso effects can still be seen.
10-13 kHz | 5s,t | Four hotspots are now apparent: one on the median plane at azimuth 0, elevation 20, and the other three at ipsilateral azimuth 100, at elevations 40, 0, and … . Torso effects can still be seen.

Table 1. Summary of cross-subject similarity in SFRSs from 1-13 kHz.
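Much of Table 1 concerns the locations of SFRS hotspots. As a rough, hypothetical illustration (not part of the paper's analysis), local maxima of a gridded SFRS, such as the one returned by the `build_sfrs` sketch above, can be picked out with a maximum filter:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_hotspots(az_grid, el_grid, surface, neighborhood=7, min_prominence_db=3.0):
    """Return (azimuth, elevation, level) triples for well-defined peaks of an
    SFRS: grid points that are local maxima over a neighborhood and stand at
    least min_prominence_db above the surface median."""
    filled = np.where(np.isnan(surface), -np.inf, surface)
    local_max = filled == maximum_filter(filled, size=neighborhood)
    prominent = filled > (np.nanmedian(surface) + min_prominence_db)
    idx = np.argwhere(local_max & prominent)
    return [(az_grid[i, j], el_grid[i, j], surface[i, j]) for i, j in idx]
```

Applied, for example, to the 6 kHz and 8.5 kHz surfaces, such a peak picker would locate the elevated and lowered dominant hotspots discussed in Section 3.3.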

Figure 4. Subject CC's SFRSs for several frequencies. Each contour line represents a 1 dB change in magnitude; the grayscale is in dB. See Table 1 for details.

Figure 4 (continued). Subject CC's SFRSs for several frequencies. Each contour line represents a 1 dB change in magnitude; the grayscale is in dB. See Table 1 for details.

Figure 5. Cross-subject comparison of SFRSs for three subjects at two different frequencies. Each contour line represents a 1 dB change in magnitude; the grayscale is in dB. Although there are some differences in absolute magnitude among subjects, the general features of the SFRSs remain the same.

3.2 Visualizing diffraction effects with SFRSs

Given a rigid sphere and an incident single-frequency plane wave, it is well known that for some frequencies and incidence angles the sphere has an amplifying effect on the wave at certain points near its surface [7]. Thus, when a rigid spherical model of the head is used to calculate HRTFs analytically, there are some locations on the contralateral side of the head where this effect occurs, even though these locations lie in the direct shadow of the head. Shaw refers to this spot as a "bright spot" [9] in his analyses, and an analytical derivation of the effect can be found in [1]. Diffraction effects are clearly visible in SFRSs from 1-5 kHz and agree well with theoretical predictions. Shaw's bright spot can easily be interpreted in these SFRSs as hotspots, or peaks in the surfaces, directly opposite the contralateral ear (azimuth -90 for right-ear SFRSs; azimuth +90 for left-ear SFRSs). Figures 4c-h show this effect.

3.3 Using SFRSs to investigate the theory of directional bands

The locations of an SFRS's hotspots may also support some aspects of the theory of directional bands, which hypothesizes that certain narrowband sounds are correlated with perceptually preferred spatial directions. More specifically, an SFRS with one dominant hotspot may suggest that the auditory system favors the location of that hotspot perceptually when presented with narrowband noise centered at that frequency. For example, Figure 7 shows that the dominant hotspot at 6 kHz is clearly positive in elevation, while the dominant hotspot at 8.5 kHz is clearly negative in elevation. A psychophysical study designed to test the theory of directional bands at several frequencies found that subjects tended to localize narrowband sounds centered at 6 and 8 kHz as originating from positive and negative elevations, respectively, regardless of the actual free-field location of the source [6]. Therefore, comparison of the SFRS hotspot locations to Middlebrooks' psychophysical data at these frequencies shows that, in this limited case, a dominant hotspot's spatial location is correlated with that frequency's perceptually preferred spatial directions.

4. APPLICATIONS OF SFRSs

4.1 Using SFRSs to detect HRTF measurement calibration errors

SFRSs were also useful in detecting calibration problems in the HRTF measurement process. As shown in Figure 6, the HRTF measurements used for this paper were taken at two locations for each movement of the test apparatus. Since two different speakers were used to present the stimuli, calibration of the two signal paths used in stimulus presentation was essential to obtaining accurate results. In another set of HRTF data recorded in Middlebrooks' lab, subjects reported excessive trouble perceiving spatial location correctly when listening to sounds processed with their own personalized HRTFs (Wakefield, personal communication). Examination of these subjects' data sets using SFRSs shows immediately that there was a calibration problem in the two different paths used for stimulus presentation. In Figure 8, simple visual inspection reveals that, at a given frequency, the magnitude of one half of the data differs from the magnitude of the other half by a constant value.
Because of the geometry of the test apparatus, we can conclude that the gain of one signal path was greater than that of the other and, therefore, that the signal paths' gains were not correctly calibrated.

Figure 6. Geometry of the HRTF data acquisition apparatus. Two measurements were taken for each movement of the apparatus by using two speakers for stimulus presentation; the subject sits between them with probe microphones inserted in the right and left ear canals. The locations of the speakers are polar opposites on a fictional sphere, and the bar connecting the two speakers has two degrees of freedom, corresponding to changes in azimuth and elevation. Each speaker recorded data from roughly one hemisphere of the domain space.
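A hedged sketch of how such a gain mismatch could be flagged automatically is given below. It assumes a hypothetical boolean array `from_speaker_a` recording which of the two speakers measured each location and the same hypothetical `mags_db` array as before; if the two halves of the data differ by a large, nearly constant dB offset at every frequency, the signal paths are suspect.

```python
import numpy as np

def estimate_path_offset(mags_db, from_speaker_a):
    """Estimate the per-frequency level difference (dB) between the two
    measurement signal paths from HRTF magnitudes.

    mags_db:        (n_locations, n_bins) array of magnitudes in dB.
    from_speaker_a: boolean array of length n_locations, True where the
                    location was measured through signal path A.
    Returns the median offset per frequency bin; a large, nearly constant
    value across bins suggests a calibration error rather than acoustics.
    """
    offset = (np.median(mags_db[from_speaker_a], axis=0)
              - np.median(mags_db[~from_speaker_a], axis=0))
    return offset

# Example check (thresholds are arbitrary illustrations):
# offsets = estimate_path_offset(mags_db, from_speaker_a)
# suspect = abs(np.median(offsets)) > 3.0 and np.std(offsets) < 1.0
```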

Figure 7. Using SFRSs to support the theory of directional bands. Each contour line represents a 1 dB change in magnitude; the grayscale is in dB. In this figure, the dominant hotspots' positive and negative elevations correspond well to the preferred perceptual directions for 6 and 8 kHz, respectively.

Figure 8. SFRSs from two subjects' HRTF sets taken with a miscalibrated measurement apparatus. Each contour line represents a 2.3 dB change in magnitude; the grayscale is in dB. One half of the data clearly differs from the other half by roughly a constant offset.

4.2 SFRS-based interpolation of HRTFs and perceptual evaluation

The goal of synthesizing virtual auditory space is to perceptually place a sound at any location in free space. However, due to the complexity of HRTF measurement and practical signal processing limitations, only a finite number of spatial locations are measured in practice. Therefore, the process of computing perceptually acceptable HRTFs for arbitrary spatial locations from existing data is of central importance in creating virtual auditory spaces.

SFRSs suggest an alternative interpolation method that alleviates some of the limitations of other proposed methods. The present algorithm constructs a DTF magnitude response for a desired spatial location one frequency at a time, according to a weighted average of values taken directly from each SFRS. This method therefore allows different frequency components of different DTFs to contribute to the interpolated DTF to varying degrees, unlike simpler spatial methods [6]. No internal parameters, such as model order, need be predetermined, as is the case in pole-zero interpolation [2].

Specifically, the algorithm first performs a triangulation of the azimuth-elevation coordinate system in order to create a grid over the available, irregularly spaced data. The vertices of the triangulation are the locations at which DTFs are known. In order to minimize the effect of the irregularity in spatial sampling, interpolated locations are taken only from where the triangulation is most uniform. For each SFRS, a plane is constructed from the three magnitude response values associated with the three vertices of the triangle enclosing the desired spatial location. The interpolated value for that surface is the value of the plane evaluated at the desired spatial location. This process is repeated for each SFRS, and each interpolated value is placed into the appropriate frequency bin of the interpolated magnitude DTF. The interpolation algorithm is shown in Figure 9.

In order to test the perceptual validity of this interpolation algorithm, an informal psychophysical experiment was conducted involving two subjects whose HRTFs had been measured. In this experiment, the subjects were presented with three sounds which either (1) shared the same approximate azimuth but differed in elevation by 10 degrees (same-azimuth testing), or (2) shared the same approximate elevation but differed in azimuth by 10 degrees (same-elevation testing). The subjects were then asked to select which of the three sounds appeared to lie spatially between the other two. In all trials, the leftmost, rightmost, highest-elevation, and/or lowest-elevation sounds were derived from DTFs at known locations. However, in one half of the trials the middle sound was derived from an SFRS-interpolated DTF, while in the other half it was derived from a non-interpolated DTF. The test shows whether spatial perceptions differ between interpolated and non-interpolated DTFs corresponding to the same spatial location. Furthermore, it reveals the extent to which perceptual accuracy differs when determining spatial location in the azimuth and elevation directions.

In the listening task, subjects heard all three sounds once and were then allowed to hear each sound individually as many times as needed to make a judgment. Each sound was approximately 750 ms in duration and was separated from the other sounds by a brief silence. The source sound for all trials was a 32k pseudo-random binary sequence. Sounds from interpolated DTFs were tested at 104 spatial locations. Of these 104 locations, 48 were taken from three different elevations (-30, 0, and 30 degrees) for same-azimuth testing, and 56 were taken from eight different azimuths (0, 45, 90, 135, 180, 225, 270, and 315 degrees) for same-elevation testing. Some locations were repeated between the same-elevation and same-azimuth tests. Ten trials were performed for each interpolation test condition, using both interpolated and non-interpolated DTF-derived middle sounds. In all, this experiment required hours of observation time, and each subject was allowed to proceed at their own pace. Sounds were presented over headphones (Sennheiser 265) in a double-walled soundproof booth.

Test results for both subjects are shown in Figures 10-11 and are given as the percentage of correct responses, where a correct response refers to correctly identifying the middle sound. In general, same-elevation testing was more successful than same-azimuth testing.
For same-elevation conditions, both subjects performed better with interpolated than with non-interpolated DTFs: CC's performance was 94% (interpolated) versus 87% (non-interpolated), and MM's performance was 93% (interpolated) versus 83% (non-interpolated). For same-azimuth conditions, interpolated DTFs also appeared to produce better performance than non-interpolated DTFs: CC's performance was 79% (interpolated) versus 66% (non-interpolated), and MM's performance was 77% (interpolated) versus 69% (non-interpolated).

[Figure 9. Using SFRSs to compute interpolated HRTFs. For each frequency bin's SFRS (bins 0 through 256), a plane is constructed through the three values of the spatial frequency response at the vertices of the triangle enclosing the desired location; the vertices are locations at which the spatial frequency response is known exactly from measurements, and the interpolated value is the plane evaluated at the desired (azimuth, elevation) location.]
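The per-bin plane construction in Figure 9 amounts to barycentric (linear) interpolation over a triangulation, which is exactly what scipy's LinearNDInterpolator computes. The following is a hedged sketch of that idea, not the authors' implementation; `mags_db` is the same hypothetical (n_locations x n_bins) magnitude array used in the earlier sketches.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def interpolate_dtf(az_deg, el_deg, mags_db, target_az, target_el):
    """Interpolate a DTF magnitude response at (target_az, target_el), one
    frequency bin at a time, by evaluating the plane through the three known
    values at the vertices of the enclosing Delaunay triangle."""
    points = np.column_stack([az_deg, el_deg])
    tri = Delaunay(points)                       # triangulate the measured grid
    # LinearNDInterpolator fits a plane over each triangle; passing the whole
    # magnitude matrix interpolates every frequency bin in one call.
    interp = LinearNDInterpolator(tri, mags_db)
    return interp(target_az, target_el)          # shape: (n_bins,); NaN outside the hull

# Usage: dtf_mag = interpolate_dtf(az_deg, el_deg, mags_db, target_az=55.0, target_el=5.0)
```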

Figure 10. Perceptual evaluation of interpolated and non-interpolated HRTFs for subject CC: same-elevation and same-azimuth testing for interpolated and non-interpolated HRTFs. Each shaded box represents a perceptual test location.

Figure 11. Perceptual evaluation of interpolated and non-interpolated HRTFs for subject MM: same-elevation and same-azimuth testing for interpolated and non-interpolated HRTFs. Each shaded box represents a perceptual test location.

The difference in overall performance between same-elevation and same-azimuth testing is consistent with measures of the just-noticeable difference (JND) in the elevation or azimuth of a given source [1]. For any given spatial location, listeners generally show greater acuity for changes in azimuth than for changes in elevation. For same-elevation testing, there were a larger number of errors at azimuths of -90 and +90 degrees. These results are consistent with the fact that, in general, the JND for azimuth is smaller than the JND for elevation [1]. In contrast, there is less of a clear pattern of errors for same-azimuth testing. In general, errors using interpolated DTFs occur in roughly the same spatial locations as those using non-interpolated DTFs, but there appears to be little clustering in their locations.

It is surprising that listeners perform better with interpolated than with non-interpolated DTFs. The source of this effect is not known exactly; one possible explanation is that subjects develop a memory for the spectral coloration associated with a certain region of space and learn to respond to this cue alone. For example, both subjects reported that pitch cues could be used for elevation discrimination in the same-azimuth tests. Another important result is that localization with exact DTFs is far from 100% correct, even in the absence of interpolation. Such results are often reported in the literature and may reflect errors in HRTF measurement and DTF computation [10].

5. CONCLUSIONS AND FUTURE DIRECTIONS

This paper presented spatial frequency response surfaces (SFRSs) as an alternative, spatial-domain visualization tool for existing head-related transfer function (HRTF) data. The cross-subject similarity of SFRS representations allowed several cross-subject patterns in HRTF data to be seen quickly, such as diffraction and attenuation effects due to head shadowing and the torso, respectively. SFRSs were useful in detecting gross calibration errors in the HRTF measurement process. The structure of SFRSs suggested a spatial, SFRS-based HRTF interpolation algorithm, and a psychophysical experiment was performed to compare the perceptual quality of SFRS-interpolated and non-interpolated HRTFs. This experiment showed that the interpolation algorithm produced interpolated HRTFs with perceptual quality comparable to exact HRTFs. Future directions include more formal psychophysical testing of the interpolation algorithm for smaller angles of separation; principal components analyses of SFRSs for parameter reduction; and parametrization of the hotspots in selected SFRSs with a beamformer model.

6. ACKNOWLEDGMENTS

The authors thank Dr. John C. Middlebrooks at the Kresge Hearing Research Institute of the University of Michigan for providing the data used in this research. We also thank Dr. Michael A. Blommer for his early investigations of the interpolation problem. This research was supported by a grant from the Office of Naval Research in collaboration with the Naval Submarine Medical Research Laboratory, Groton, Connecticut.

7. REFERENCES

[1] Blauert, Jens. Spatial Hearing. The MIT Press, Cambridge, MA.
[2] Blommer, A. and Wakefield, G. "A comparison of head related transfer function interpolation methods." IEEE ASSP Workshop on Applications of Signal Processing to Audio and Acoustics (IEEE catalog number 95TH8144).
[3] Cheng, Corey I. and Wakefield, Gregory H. "Spatial Frequency Response Surfaces: An Alternative Visualization Tool for Head-Related Transfer Functions (HRTFs)." IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-99), Phoenix, Arizona, 1999.
[4] Kendall, Gary S. "A 3-D sound primer: Directional hearing and stereo reproduction." Computer Music Journal, 19(4), Winter 1995.
[5] Kistler, Doris J. and Wightman, Frederic L. "A model of head-related transfer functions based on principal components analysis and minimum-phase reconstruction." Journal of the Acoustical Society of America, 91(3), March 1992.
[6] Middlebrooks, John C. "Narrow-band sound localization related to external ear acoustics." Journal of the Acoustical Society of America, 92(5), November 1992.
[7] Morse, Philip M. Vibration and Sound. McGraw-Hill Book Company, Inc., New York.
[8] Rao, K. Raghunath and Ben-Arie, Jezekiel. "Optimal head related transfer functions for hearing and monaural localization in elevation: A signal processing design perspective." IEEE Transactions on Biomedical Engineering, 43(11), November 1996.
[9] Shaw, E. A. G. "The External Ear." In Handbook of Sensory Physiology V/1: Auditory System, Anatomy, Physiology (Ear). Springer-Verlag, New York.
[10] Wightman, Frederic L. and Kistler, Doris J. "Headphone simulation of free-field listening. I: Stimulus synthesis." Journal of the Acoustical Society of America, 85(2), February 1989.


More information

SCAN & PAINT 3D. Product Leaflet NEW PRODUCT. icroflown Technologies Charting sound fields MICROFLOWN // CHARTING SOUND FIELDS

SCAN & PAINT 3D. Product Leaflet NEW PRODUCT. icroflown Technologies Charting sound fields MICROFLOWN // CHARTING SOUND FIELDS icroflown Technologies Charting sound fields Product Leaflet SCAN & NEW PRODUCT PAINT 3D Microflown Technologies Tivolilaan 205 6824 BV Arnhem The Netherlands Phone : +31 088 0010800 Fax : +31 088 0010810

More information

3 Graphical Displays of Data

3 Graphical Displays of Data 3 Graphical Displays of Data Reading: SW Chapter 2, Sections 1-6 Summarizing and Displaying Qualitative Data The data below are from a study of thyroid cancer, using NMTR data. The investigators looked

More information

Sound Lab by Gold Line. Noise Level Analysis

Sound Lab by Gold Line. Noise Level Analysis Sound Lab by Gold Line Noise Level Analysis About NLA The NLA module of Sound Lab is used to analyze noise over a specified period of time. NLA can be set to run for as short as one minute or as long as

More information

Mpeg 1 layer 3 (mp3) general overview

Mpeg 1 layer 3 (mp3) general overview Mpeg 1 layer 3 (mp3) general overview 1 Digital Audio! CD Audio:! 16 bit encoding! 2 Channels (Stereo)! 44.1 khz sampling rate 2 * 44.1 khz * 16 bits = 1.41 Mb/s + Overhead (synchronization, error correction,

More information

Evaluation of a new Ambisonic decoder for irregular loudspeaker arrays using interaural cues

Evaluation of a new Ambisonic decoder for irregular loudspeaker arrays using interaural cues 3rd International Symposium on Ambisonics & Spherical Acoustics@Lexington, Kentucky, USA, 2nd June 2011 Evaluation of a new Ambisonic decoder for irregular loudspeaker arrays using interaural cues J. Treviño

More information

Stray light calculation methods with optical ray trace software

Stray light calculation methods with optical ray trace software Stray light calculation methods with optical ray trace software Gary L. Peterson Breault Research Organization 6400 East Grant Road, Suite 350, Tucson, Arizona 85715 Copyright 1999, Society of Photo-Optical

More information

Detecting Salient Contours Using Orientation Energy Distribution. Part I: Thresholding Based on. Response Distribution

Detecting Salient Contours Using Orientation Energy Distribution. Part I: Thresholding Based on. Response Distribution Detecting Salient Contours Using Orientation Energy Distribution The Problem: How Does the Visual System Detect Salient Contours? CPSC 636 Slide12, Spring 212 Yoonsuck Choe Co-work with S. Sarma and H.-C.

More information

ESTIMATION OF MULTIPATH PROPAGATION DELAYS AND INTERAURAL TIME DIFFERENCES FROM 3-D HEAD SCANS. Hannes Gamper, Mark R. P. Thomas, Ivan J.

ESTIMATION OF MULTIPATH PROPAGATION DELAYS AND INTERAURAL TIME DIFFERENCES FROM 3-D HEAD SCANS. Hannes Gamper, Mark R. P. Thomas, Ivan J. ESTIMATION OF MULTIPATH PROPAGATION DELAYS AND INTERAURAL TIME DIFFERENCES FROM 3-D HEAD SCANS Hannes Gamper, Mark R. P. Thomas, Ivan J. Tashev Microsoft Research, Redmond, WA 98052, USA ABSTRACT The estimation

More information

17. SEISMIC ANALYSIS MODELING TO SATISFY BUILDING CODES

17. SEISMIC ANALYSIS MODELING TO SATISFY BUILDING CODES 17. SEISMIC ANALYSIS MODELING TO SATISFY BUILDING CODES The Current Building Codes Use the Terminology: Principal Direction without a Unique Definition 17.1 INTRODUCTION { XE "Building Codes" }Currently

More information

Complex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors

Complex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors Complex Sensors: Cameras, Visual Sensing The Robotics Primer (Ch. 9) Bring your laptop and robot everyday DO NOT unplug the network cables from the desktop computers or the walls Tuesday s Quiz is on Visual

More information

CS770/870 Spring 2017 Color and Shading

CS770/870 Spring 2017 Color and Shading Preview CS770/870 Spring 2017 Color and Shading Related material Cunningham: Ch 5 Hill and Kelley: Ch. 8 Angel 5e: 6.1-6.8 Angel 6e: 5.1-5.5 Making the scene more realistic Color models representing the

More information

Spectral Images and the Retinex Model

Spectral Images and the Retinex Model Spectral Images and the Retine Model Anahit Pogosova 1, Tuija Jetsu 1, Ville Heikkinen 2, Markku Hauta-Kasari 1, Timo Jääskeläinen 2 and Jussi Parkkinen 1 1 Department of Computer Science and Statistics,

More information

From Shapes to Sounds: A perceptual mapping

From Shapes to Sounds: A perceptual mapping From Shapes to Sounds: A perceptual mapping Vikas Chandrakant Raykar vikas@umiacs.umd.edu Abstract In this report we present a perceptually inspired mapping to convert a simple two dimensional image consisting

More information

An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant

An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant Sivalogeswaran Ratnasingam and Steve Collins Department of Engineering Science, University of Oxford, OX1 3PJ, Oxford, United Kingdom

More information

A Verification Study of ABAQUS AC3D8R Elements for Acoustic Wave Propagation

A Verification Study of ABAQUS AC3D8R Elements for Acoustic Wave Propagation A Verification Study of ABAQUS AC3D8R Elements for Acoustic Wave Propagation by Michael Robert Hubenthal A Project Submitted to the Graduate Faculty of Rensselaer Polytechnic Institute in Partial Fulfillment

More information