
Institutionen för datavetenskap
Department of Computer and Information Science

Master thesis

Advanced Real Time sound techniques for Virtual Reality headsets

by Johan Yngman and Emil Westergren

Linköping
Civilingenjör Datateknik
ISRN: LIU-IDA/LITH-EX-A--14/014--SE
Advisor: Anders Fröberg

Abstract

Virtual reality headsets, like the Oculus Rift, use visual tricks to enhance the feeling of immersion in virtual reality applications. This thesis explores the possibilities of creating an immersive sound experience to go along with the visuals. A sound propagation model designed for real time simulations is presented along with techniques to implement binaural synthesis. The thesis includes an implementation of a 3D cube world written in C# using OpenGL for graphics and FMOD for sound. The implementation is evaluated in the context of realism, and possible improvements and optimizations are suggested.

Table of contents

Abstract
Table of contents
Background
Purpose and problem statement
Theory
  What is a Virtual Reality (VR) headset?
  Binaural sound
    What is Binaural sound?
    Head related impulse response (HRIR)
    Head related transfer functions (HRTF)
    Virtual binaural sound effect
  Model for sound propagation
    General effects
      Sound speed
      Geometric spreading
      Air absorption
    Direct path
    Early reflections
    Reverberation
    Distinct echo
    Transmission
    Other sound phenomena
      Diffraction
      Refraction
      Interference
Implementation
  Synthesizing binaural sounds
    Lomont's Fast Fourier Transform (FFT)
    HRTF database
    HRTF interpolation
    Delay filter
  Sound propagation
    Ray casting algorithm
    Direct path transmission
    Reflections
    Ray casting with GPU
      First order reflections
      First order reflections algorithm
      Second order reflections
    Directional reflections
    Reverberation
      Blending listener and source environment
      Directional reverb
Result
  Direct path transmission
  Reflections
  Reverberation
  Binaural filter
  Delay filter
  Optimizations
Conclusion
Discussion
References

Background

Virtual reality (VR) is today expected to seriously establish itself in the gaming market. While the realism and quality of visual computer graphics have quickly evolved over the years to be sufficient for VR, sound techniques have remained primitive, without any major evolution towards increased realism.

Purpose and problem statement

The purpose of this thesis is to explore the possibilities of implementing real time dynamic sound propagation techniques in combination with binaural sound synthesis for VR headsets.

- Which techniques are suitable for real time dynamic sound propagation?
- How can binaural sound synthesis be used in combination with dynamic sound propagation?
- How can the above techniques be further optimized?

Theory

What is a Virtual Reality (VR) headset?

At the time of writing, the most anticipated VR headset is the Oculus Rift. The company, Oculus, is yet to release a consumer version but has created a development kit that developers can use to create VR content before the consumer release. This thesis work is primarily targeted towards this development kit.

An Oculus Rift, Development kit

The development kit is a pair of goggles that presents a stereoscopic 3D image with a 110 degree field of view to the user. It also tracks the user's head rotation with low latency and high precision. This creates a strong feeling of immersion in the virtual world and a sense of actually being in another place. 1

1 The Oculus Rift and Immersion through Fear, Adam Halley-Prinable (accessed )

Binaural sound

This section describes an algorithm for synthesizing the binaural effect in real time and introduces the terms used later in the implementation section.

What is Binaural sound?

Binaural sound can be created by placing microphones in the ears of a real person or on a dummy head. Whenever a sound is recorded with this setup, the left ear will record the left part of a stereo channel and the right ear will record the right part. If one listens to this recording using stereo headphones, it will sound close to what it would sound like in reality. 2

A binaural recording setup

The effect works best if the recording is done with a perfect body model of the listener. This is because every person has a unique body and ear shape, which affects how the sound is picked up by the microphones. This means that no one will perceive a sound in exactly the same way as another person.

Head-related impulse response (HRIR)

The head-related impulse response (HRIR) is the recorded ear impulse response from a specific angle.

Head-related transfer functions (HRTF)

The way that a sound is manipulated for a specific person, and angle, is called the head-related transfer function (HRTF). The HRTF is the Fourier transform of the head-related impulse response.

2 The Inventor of Stereo: The Life and Works of Alan Dower Blumlein, Robert Charles Alexander (Focal Press, 1999)

Fast Fourier Transform (FFT)

The Fast Fourier Transform (FFT) is an algorithm to calculate the Discrete Fourier Transform (DFT). It transforms a signal from the time domain to the frequency domain. 3

Virtual binaural sound effect

To apply the binaural effect to a sound in a virtual world, an HRTF/HRIR is needed for each angle that the sound can be heard from. Or rather, as many as it takes to make the difference between two neighbouring HRTFs/HRIRs unnoticeable. The sound also has to be in mono. To simulate that a sound is heard from a certain angle, one can convolve the HRIR with the sound. This is slow, since the time complexity of convolution is O(n²) where n is the number of samples. It is possible to reduce the time complexity to O(n log n) by applying a Fast Fourier Transform (FFT) to the sound and doing pointwise multiplication in the frequency domain with the corresponding HRTF.

Pseudo code for creating the binaural effect in a simulation 4

These steps are repeated every time the simulation updates:

- Calculate the angle between the listener and the sound source
- Select the HRIR for the closest matching listening angle
- Convolve the sound with the HRIR
- Play the convolved sound

Interpolating between HRTFs

Interpolation is needed for at least two reasons. Whenever the angle changes too drastically, there might be a stutter in the sound. A simulation which runs at 60 frames per second has a ~16.6 millisecond delay between each frame. That timespan is enough to turn your head far enough for this sound artifact to appear. The other reason is the denseness of the available HRIRs. There might be a noticeable difference between two neighbouring angles if there aren't enough HRIRs. This could be solved by approximating new HRIRs for non existing angles. 5

3 Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein. Introduction to Algorithms. Third edition. The MIT Press.
4 HRTF Measurements of a KEMAR Dummy Head Microphone, Bill Gardner and Keith Martin (MIT Media Lab 1994) (accessed )
5 Two approaches for HRTF interpolation, Gustavo H. M. de Sousa, Marcelo Queiroz (accessed )
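As a concrete illustration of the O(n log n) approach above, the following Python sketch (not thesis code; the function and variable names are our own) convolves one mono block with a left and right HRIR by multiplying in the frequency domain:

```python
# Illustrative sketch: binaural synthesis of one block via FFT multiplication
# instead of direct O(n^2) convolution. All names here are hypothetical.
import numpy as np

def apply_hrir(mono_block, hrir_left, hrir_right):
    """Convolve a mono block with left/right HRIRs in the frequency domain."""
    n = len(mono_block) + len(hrir_left) - 1      # full linear convolution length
    size = 1 << (n - 1).bit_length()              # next power of two for the FFT
    spectrum = np.fft.rfft(mono_block, size)
    left = np.fft.irfft(spectrum * np.fft.rfft(hrir_left, size), size)[:n]
    right = np.fft.irfft(spectrum * np.fft.rfft(hrir_right, size), size)[:n]
    return left, right
```

Zero-padding to at least n samples before the FFT makes the circular convolution equal to the linear convolution, so the result matches direct convolution of the block with each impulse response.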

Model for sound propagation

When a sound source emits sound waves, the waves are affected in various ways by the environment before reaching the listener. These effects play an important role in how the sound is perceived by the human brain.

General effects

Sound speed

The speed of sound is highly dependent on the medium it travels through. Even in air the speed is not constant, since the condition of the air will vary. This thesis will however neglect this and assume a constant sound speed of 343 m/s (the approximate speed in air) regardless of the medium.

Geometric spreading

The intensity of a sound will decrease over distance due to geometric spreading. The loss is independent of frequency and, assuming a point source, the reduction will be relative to the inverse square of the distance. In ideal conditions (a point sound source with nothing but air surrounding it) the following formula can be used:

L2 = 10 lg(r1² / r2²) + L1

where L2 is the intensity (in dB) at distance r2 and L1 is a known intensity at distance r1. This results in a 6 dB reduction for each doubling of the distance from the source. 6

Air absorption

All sound waves are also affected by energy absorption when travelling through air. This will cause the intensity of the sound (the amplitude of the wave) to decrease. The energy loss depends on the frequency of the wave: a higher frequency will suffer a greater energy reduction over distance. The reduction is also dependent on the temperature and humidity of the air. Air absorption will however only be noticeable when dealing with long distances or very high frequencies, and can in many cases be neglected.

Direct path

The direct path is the straight path between the listener and the sound source. This is often where the first sound waves to reach the listener come from, since it is the shortest possible path.

6 Sound propagation, Simon Fraser University studio/handbook/sound_propagation.html (accessed )
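The geometric spreading formula above can be checked numerically; this small Python sketch (our own illustration, not thesis code) computes the level at a new distance:

```python
import math

def spread_level(L1, r1, r2):
    """Level (dB) at distance r2, given a known level L1 (dB) at distance r1,
    assuming pure inverse-square geometric spreading from a point source."""
    return 10.0 * math.log10(r1 ** 2 / r2 ** 2) + L1
```

For example, spread_level(80, 1, 2) gives roughly 74 dB, the expected ~6 dB drop per doubling of distance.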

Early reflections

Sound waves that reflect once or twice off a surface and reach the listener within roughly the first tens of milliseconds after the first perceived wave are called early reflections. Even though these waves do not reach the listener simultaneously, they will not be perceived as separate but instead tend to reinforce the sound. They will however give the listener information about the environment, so called spatial impression. The reflections also usually originate from other directions than the sound source, which affects how the listener perceives the sound. 7 8

Reverberation

Reflected sound waves reaching the listener after the early reflections are called reverberation (or late reflections). Consisting of a large number of echoes reaching the listener in very quick succession, these reflections cannot be distinguished from each other but are instead perceived as a persistence of the original sound rather than a new sound. Over time these waves will arrive with gradually lower intensity, due to the energy loss from a higher number of reflections. This produces a fade in the intensity of the reverb.

The characteristics of the reverb are highly dependent on the environment, specifically the size of the room and the materials used in the room. In a small room the waves will quickly bounce off the walls a number of times and therefore quickly lose enough energy to become inaudible. In a large room the energy reducing reflections will be less frequent in time, which produces a longer lasting reverb. The material of a reflection point determines how much energy is lost. Soft materials will likely cause a high energy reduction, while the opposite is the case for hard materials. This property is defined by an absorption coefficient, a value between 0 and 1 where 0 will absorb 0% of the energy and 1 will absorb 100% of the energy.

The time for a sound to become inaudible is called reverberation time. This is defined as the time for a sound to drop 60 dB from its original intensity. The most common approximation used to calculate this time is Sabine's equation:

T = 0.161 · V / (S·α)

where T is the reverberation time (in seconds), V the volume of the room, S the surface area in the room and α the average absorption coefficient. 9

7 Physical audio signal processing, Julius O. Smith III (accessed )
8 Room acoustics, McGill University (accessed )
9 Calculation of reverberation time, Davidson physics (accessed )

Distinct echo

Reflections reaching the listener with a distinct delay (at least 100 ms) after the first perceived wave, and with sufficient intensity to distinguish themselves from the reverberation, will be perceived as a

separate sound by the human brain. This produces a distinct echo which, unlike reverberation, will not mix with the original sound. 10

Transmission

Sound transmission is the basic principle of sound waves travelling through a medium. Part of the sound the listener hears will usually (unless totally isolated) travel directly from the sound source to the listener. Even when the direct path is blocked by an obstacle between a source and a listener, sound waves may still be able to travel through the obstacle. This ability is highly dependent on the material and thickness of the obstacle. Lower frequencies have a better ability to pass through obstacles, which is very noticeable when opening and closing a door to a room with a sound source (e.g. a music player). 11

Other sound phenomena

Diffraction

Diffraction is a known wave phenomenon and therefore also applies to sound waves. It gives sound waves the ability to bend around obstacles and pass through small openings. The result is that even though a direct path does not exist between the sound source and the listener, sound waves can still reach the listener without the need to reflect off a surface. 12 This phenomenon will be neglected in this thesis.

Refraction

When a sound wave travels from one medium to another, refraction causes it to change its direction. This phenomenon will be neglected in this thesis.

10 Echo vs. Reverberation, The Physics Classroom (accessed )
11 Sound transmission, Gerald S. Wilkinson (accessed )
12 Reflection, Refraction, and Diffraction, The Physics Classroom (accessed )

Interference

Interference is the phenomenon caused by multiple waves affecting each other when travelling in the same medium. Depending on the difference in phase of the waves, they can either cause constructive interference (the amplitudes are added) or destructive interference (the amplitudes are subtracted). This phenomenon will be neglected in this thesis.

Implementation

The environment for the implementation consists of a cube based 3D world written in C# and OpenGL. The FMOD audio library has been used for playing and modifying audio. FMOD includes DSP effects for low pass filtering and reverb, and also supports custom DSP effects.

Screenshot from the cube world.

Synthesizing binaural sounds

This chapter describes how the binaural effect is implemented.

Lomont's Fast Fourier Transform (FFT)

An existing C# implementation of the Fast Fourier Transform (Chris Lomont, 2010) is used. This particular FFT implementation was designed for sound synthesis and uses a faster, one dimensional, more memory efficient version of the transform. 13

13 The Fast Fourier Transform, Chris Lomont (accessed )

HRTF database

The HRIRs used were recorded by Bill Gardner and Keith Martin (MIT Media Lab 1994). They recorded 368 impulse responses on a half sphere around the left ear, using a KEMAR dummy head at a fixed distance of 1.4 meters. This data can be used for both the left and right ear, since the impulse responses are symmetrical. 14

Each impulse response is stored in a WAV file and there are 128 samples per ear in each file. The angle relative to the listener is embedded in the file name using polar coordinates. The vertical angle is called elevation and the horizontal is called azimuth. 15

When the simulation starts, every HRIR is loaded into memory and transformed using FFT to get an HRTF for every angle. For every HRTF, a direction vector is calculated which matches the cube world coordinate system. This vector is later used to select the closest matching HRTF for the different listening angles.

// Calculate direction vector to be used when selecting HRTF
Vector3 d = new Vector3(0, 0, 1);
float radAzim = (azimuth / 180.0f) * (float)Math.PI;
float radElev = 2.0f * (float)Math.PI - (elevation / 180.0f) * (float)Math.PI;

d = d * Matrix4.CreateRotationX(radElev);
d = d * Matrix4.CreateRotationY(radAzim);

directionVector = d;
directionVector.Normalize();

Code snippet showing how a direction vector is calculated from the azimuth and elevation of an HRIR. The values for elevation and azimuth are extracted from the file name.

To then select the correct HRTF, all the loaded transfer functions are iterated and the one with the closest matching direction vector is selected.

14 HRTF Measurements of a KEMAR Dummy Head Microphone, Bill Gardner and Keith Martin (MIT Media Lab 1994) (accessed )
15 HRTF Measurements of a KEMAR Dummy Head Microphone, Bill Gardner and Keith Martin (MIT Media Lab 1994) (accessed )
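The nearest-HRTF lookup described above can be sketched as follows (a Python illustration with hypothetical names; the thesis iterates C# objects instead). The stored direction vector closest to the listening direction maximizes the dot product:

```python
def closest_hrtf(direction, hrtf_directions):
    """Return the key of the HRTF whose stored unit direction vector points
    closest to `direction`, using the dot product as the similarity measure.
    `hrtf_directions` maps (azimuth, elevation) -> unit vector (hypothetical)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(hrtf_directions, key=lambda k: dot(hrtf_directions[k], direction))
```

With unit vectors, maximizing the dot product is the same as minimizing the angle between the stored direction and the query direction.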

HRTF interpolation

This implementation focuses on interpolation over time. It is needed because sound artifacts can be heard when the angle between the listener and the sound source differs too much between frames. Instead of instantly switching to a new HRTF every frame, all the samples in the previous HRTF are iterated and pulled towards the new ones. This makes the transition to the new HRTF last for several frames.

public void InterpolateTowards(HRTF other)
{
    for (int i = 0; i < samples.Length * 2; i++)
    {
        double diffLeft = other.LeftFourier[i] - LeftFourier[i];
        double diffRight = other.RightFourier[i] - RightFourier[i];

        LeftFourier[i] += diffLeft / transition;
        RightFourier[i] += diffRight / transition;
    }
}

This code snippet shows how the varying HRTF is pulled towards the new one (called other). The code is called once per frame. The speed of the transition can be altered by changing the transition variable.

Interpolation for non existing HRTFs is ignored in this thesis.
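The same per-frame pull-towards update can be written compactly in Python (an illustrative sketch, not the thesis code; real-valued lists are used for brevity):

```python
def interpolate_towards(current, target, transition):
    """Move each spectrum bin a fraction 1/transition of the way towards the
    target, mirroring the per-frame InterpolateTowards update above."""
    return [c + (t - c) / transition for c, t in zip(current, target)]
```

Applied once per frame, this is an exponential approach to the target spectrum: each call closes 1/transition of the remaining gap, so larger transition values give slower, smoother changes.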

Binaural filter implementation

The sounds are played with a sample rate of 44.1 kHz. This means that when the simulation runs at 60 frames per second, at least 44100 / 60 = 735 samples need to be processed every frame for each playing sound. The filter function works in chunks of 128 samples, which is convenient since it is the size of the HRTFs. It is therefore necessary to call the filter function at least 6 times per frame for each sound. More than 6 calls are recommended to avoid sound artifacts when the frame rate drops. It is a tradeoff between responsiveness and robustness.

A simplified figure of the filter function.

The filter function works like this: 128 samples from a playing sound are sent as an argument to the filter function. FFT is then applied to the samples, one time for each ear. The sound is multiplied with the interpolated HRTF and the result is inverse transformed. The result is the same as convolving the sound with the impulse response. The result of this convolution is 255 samples long. The first 128 of those samples are sent to the out buffer (and out through the headphones) and the rest is saved in a temporary buffer. The content of this temporary buffer is added to the output the next time the filter function is called. The reason for this is that the output buffer size of a filter function in FMOD has to be the same as the size of the input buffer.
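The tail-buffer scheme described above is essentially overlap-add block convolution. A minimal Python sketch (our own illustration; it uses direct convolution for clarity rather than the FFT path):

```python
import numpy as np

class OverlapAddFilter:
    """Sketch of the overlap-add scheme described above: each block is
    convolved with the impulse response, the first len(block) output samples
    are emitted, and the remaining tail is added to the next block's output."""
    def __init__(self, impulse):
        self.impulse = np.asarray(impulse, dtype=float)
        self.tail = np.zeros(len(self.impulse) - 1)

    def process(self, block):
        block = np.asarray(block, dtype=float)
        full = np.convolve(block, self.impulse)   # len(block) + len(ir) - 1 samples
        out = full[:len(block)].copy()
        out[:len(self.tail)] += self.tail         # add the previous block's tail
        self.tail = full[len(block):]             # save the new tail
        return out
```

Processing a stream block by block this way produces exactly the same samples as convolving the whole stream at once, which is why the saved tail must be added back on the next call.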

Delay filter

A custom delay filter is used to delay the sound based on the distance travelled before it reaches the listener. It uses a rotating buffer with index pointers which are updated when the delay is changed.

soundSpeed = 343 m/s
sampleRate = 44100 Hz
delaySamples = sampleRate · distance / soundSpeed

uint saveIndex = (currentIndex + delaySamples) % MAX_DELAY;

// Save to delay buffer and set out buffers
for (int i = 0; i < length; i++)
{
    delayBufferLeft[(i + saveIndex) % MAX_DELAY] = inBuffer[i * 2];
    delayBufferRight[(i + saveIndex) % MAX_DELAY] = inBuffer[i * 2 + 1];

    outBuffer[i * 2] = delayBufferLeft[(i + currentIndex) % MAX_DELAY];
    outBuffer[i * 2 + 1] = delayBufferRight[(i + currentIndex) % MAX_DELAY];
}

currentIndex = (currentIndex + length) % MAX_DELAY;

Code snippet from the delay filter. saveIndex keeps track of where to save the outgoing samples in the delay buffers. delaySamples is the number of samples to delay the sound. MAX_DELAY is the longest possible delay; in this implementation it is set to a buffer length corresponding to roughly 0.37 seconds.
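A mono Python version of the same rotating-buffer idea (an illustrative sketch with our own names, not the thesis code):

```python
class DelayLine:
    """Ring-buffer delay mirroring the C# filter above, reduced to mono.
    Samples are written delay_samples ahead of the read position."""
    def __init__(self, max_delay):
        self.buf = [0.0] * max_delay
        self.max_delay = max_delay
        self.current = 0

    def process(self, samples, delay_samples):
        save = (self.current + delay_samples) % self.max_delay
        out = []
        for i, s in enumerate(samples):
            self.buf[(i + save) % self.max_delay] = s       # write ahead
            out.append(self.buf[(i + self.current) % self.max_delay])  # read now
        self.current = (self.current + len(samples)) % self.max_delay
        return out
```

Because writes land delay_samples positions ahead of the read pointer, each input sample re-emerges exactly delay_samples samples later, and changing the delay only moves the write offset.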

Sound propagation

To deal with the multiple directions and effects of a single sound, the sound source is divided into multiple internal sound sources. These internal sources each require an individual sound channel and DSP effects.

Overview of the sound source structure

Ray casting algorithm

Only one ray per sound source needs to be cast for the direct path calculations, hence the ray casting is not a performance issue. The implemented algorithm takes advantage of the simple geometrical nature of the cube world and uses Bresenham's line algorithm for three dimensions to trace the intersected cubes along a line. 16

16 The Bresenham Line Drawing Algorithm, Colin Flanagan (accessed )
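A three-dimensional Bresenham traversal of the kind referenced above can be sketched in Python as follows (our own illustration; the thesis version operates on the C# cube grid):

```python
def bresenham_3d(p0, p1):
    """Integer 3-D Bresenham line: returns every grid cell on the segment p0 -> p1,
    stepping along the axis with the largest extent (the driving axis)."""
    x, y, z = p0
    x1, y1, z1 = p1
    dx, dy, dz = abs(x1 - x), abs(y1 - y), abs(z1 - z)
    sx = 1 if x1 > x else -1
    sy = 1 if y1 > y else -1
    sz = 1 if z1 > z else -1
    cells = [(x, y, z)]
    if dx >= dy and dx >= dz:          # x is the driving axis
        err_y, err_z = 2 * dy - dx, 2 * dz - dx
        for _ in range(dx):
            x += sx
            if err_y >= 0:
                y += sy
                err_y -= 2 * dx
            if err_z >= 0:
                z += sz
                err_z -= 2 * dx
            err_y += 2 * dy
            err_z += 2 * dz
            cells.append((x, y, z))
    elif dy >= dz:                     # y is the driving axis
        err_x, err_z = 2 * dx - dy, 2 * dz - dy
        for _ in range(dy):
            y += sy
            if err_x >= 0:
                x += sx
                err_x -= 2 * dy
            if err_z >= 0:
                z += sz
                err_z -= 2 * dy
            err_x += 2 * dx
            err_z += 2 * dz
            cells.append((x, y, z))
    else:                              # z is the driving axis
        err_x, err_y = 2 * dx - dz, 2 * dy - dz
        for _ in range(dz):
            z += sz
            if err_x >= 0:
                x += sx
                err_x -= 2 * dz
            if err_y >= 0:
                y += sy
                err_y -= 2 * dz
            err_x += 2 * dx
            err_y += 2 * dy
            cells.append((x, y, z))
    return cells
```

Each returned cell would be tested against the cube world to detect obstacles along the direct path.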

Direct path transmission

In most cases when a sound source is heard, part of the sound is transmitted through the direct path. Without any obstacle blocking the direct path, this results in the sound reaching the listener relatively unaffected. More important are the cases where the direct path is blocked, particularly when the direct path is the only part of the sound reaching the listener, i.e. no reflections or diffractions arrive.

The most important effect when the sound transmits through obstacles is the possible loss of high frequencies. One can therefore treat the direct path as a low pass filtered source, where the cutoff frequency depends on the properties of the obstacles in the direct path. For example, a perfectly sound isolated wall would block all frequencies while a thin layer of fabric may have almost no effect at all.

To determine if the direct path should be low pass filtered, and with what cutoff frequency, the direct path needs to be traced. This is achieved by using the implemented ray casting algorithm, which detects obstacles and takes into account a transmission coefficient for each specific obstacle. The coefficient is a value between 0 and 1, where 0 means no frequencies can be transmitted and 1 means all frequencies can be transmitted. For multiple obstacles the coefficients are multiplied. The maximum human hearing frequency is approximately 20 kHz, and using the calculated coefficient the cutoff frequency can be calculated with the following formula:

f_c = 20000 · K_total

where f_c is the cutoff frequency (in Hz) and K_total is the total obstacle coefficient.

The direct path is treated as an individual sound source in combination with a delay, a low pass filter and a binaural DSP.

Reflections

Dynamic reflection calculation is by nature a ray casting problem, and by using the implemented ray casting algorithm all first and second order reflection points can be found with ease. This however quickly becomes computationally expensive, with n rays cast for the first order reflections resulting in a total of n² rays for each point source. For decent results at least n = 100 is needed, resulting in a total of 10 000 rays for each sound source. Even though this can easily be parallelized using multiple CPU cores, it would still require a very efficient ray casting algorithm and would be highly dependent on the geometrical complexity of the environment.
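The occlusion low-pass computation from the direct path subsection above reduces to a product of coefficients (a Python sketch; the function name and default are our own):

```python
def direct_path_cutoff(transmission_coeffs, max_hearing_hz=20000.0):
    """Cutoff frequency for the direct-path low-pass filter: the transmission
    coefficients of all obstacles hit by the ray are multiplied together and
    scale the approximate upper limit of human hearing."""
    k_total = 1.0
    for k in transmission_coeffs:
        k_total *= k
    return max_hearing_hz * k_total
```

With no obstacles the product is 1 and the full 20 kHz range passes; each additional obstacle can only lower the cutoff further, matching the multiplicative rule stated above.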

Ray casting with GPU

This thesis explores the possibility of calculating the reflections on the GPU instead, taking advantage of the GPU's efficient parallelization. Complex geometry will still affect the performance, but with a much lower factor than with the CPU based ray casting algorithm.

First order reflections

The problem of calculating first order reflections is similar to calculating point light shadows in a 3D application. This is because both problems need to find the points in the environment which can be seen from both the position of the source and the position of the viewer (or listener in this case). Note that the normal vector of the point is neglected here, meaning the reflection points will not necessarily be perfect reflections but rather points where a reflection is highly probable. 17

The solution for shadow calculations is called shadow mapping, and a similar technique is used in this implementation. One advantage in this case is that very low resolutions are sufficient for the sound reflections (compared to shadow calculations) and these can easily be scaled for increased precision or better performance. This implementation uses 32×32 textures for the listener cube map and 16×16 textures for the sound sources. This is equivalent to 32·32·6 = 6144 cast rays for the listener and 16·16·6 = 1536 rays for each sound source.

First order reflections algorithm

- Render a distance cube map for the listener
- For each sound source, render a distance cube map with the listener distance cube map in VRAM. In the pixel shader:
  - Output the distance to the current pixel in the alpha channel (needed later for reverb estimation).
  - Retrieve the distance value from the listener cube map in the direction from the listener to the current pixel position.
  - Compare this distance with the length of the vector from the listener position to the position of the current pixel.
  - If the distance is shorter from the listener's point of view, the point is not a reflection point: output a zero vector.
  - Otherwise the point is a first order reflection point: output the world position of the reflection as pixel data.

The resulting textures are then downloaded from the GPU to the CPU and the reflection points are extracted.

17 Shadow Mapping, OpenGL-Tutorial, opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/ (accessed )

#version 330

out vec4 fragColor;

uniform vec3 soundPosition;
uniform vec3 listenerPosition;
uniform float radius;
uniform samplerCube listenerDistances;
uniform float listenerRadius;
uniform float bias;

smooth in vec3 worldPos;
smooth in vec3 worldNormal;

void main(void)
{
    // Vector from the sound source to the pixel in world coordinates
    vec3 distanceVec = worldPos - soundPosition;

    // Output the scalar distance for reverb estimation
    float distToSound = length(distanceVec);
    fragColor.a = distToSound / radius;

    // Vector from pixel to listener
    vec3 listenerVec = worldPos - listenerPosition;

    // Vector from the listener to the sound source
    vec3 listenerToSound = soundPosition - listenerPosition;

    // Invert z and y to make cube map sampling correct
    listenerVec.z = -listenerVec.z;
    listenerVec.y = -listenerVec.y;

    // Sample the normalized distance to the pixel from the listener's view and scale with radius
    float listenerDist = texture(listenerDistances, listenerVec).r * listenerRadius;

    // Perform the distance comparison to detect a reflection point
    if (listenerDist < length(listenerVec) - bias)
    {
        // No reflection
        fragColor.r = 0.0;
        fragColor.g = 0.0;
        fragColor.b = 0.0;
    }
    else
    {
        // Reflection, output world position
        fragColor.r = worldPos.x;
        fragColor.g = worldPos.y;
        fragColor.b = worldPos.z;
    }
}

Code showing the pixel shader for the first order reflection pass.

Second order reflections

The computational complexity of second order reflections is very high even with a GPU based algorithm, and it would not be feasible to repeat the above algorithm for each ray from the source. Second order reflections are therefore neglected in this implementation.

Directional reflections

Treating each reflection point as an individual sound source would produce a good result, but this is not an option when each sound source needs heavy computations for its binaural synthesis. The number of sound sources each sound is divided into should be kept at a minimum to maintain decent performance. However, since reflections can reach the listener from any direction, it is important to have some form of directional presence from the reflections.

To achieve this, the directions from the listener's point of view are divided into four areas according to the world axes: positive x, negative x, positive z and negative z. The y axis (the vertical axis in the world) is neglected for optimization reasons. It also does not affect the listener as

much as the other axes, since the human brain is much less sensitive to changes in the vertical position of a sound source. 18

An average position for all reflection points contained in each of the areas is then calculated. This position is used as a sound source representing the average reflections from one of the mentioned world axes. The number of reflection points dictates the intensity of the sound source.

The image illustrates how the reflection points are divided into areas around the listener.

The sound from the reflection sources is independently delayed according to the total distance travelled. This can easily be calculated by adding the distance from the reflection source to the sound source to the distance from the reflection source to the listener, and dividing by the speed of sound. This individual delay results in a very evident effect of directional reflections, which is most noticeable when one of the average reflection points is far away from the listener.

18 Spatial Sound Localization in an Augmented Reality Environment, Sodnik, Tomazic, Grasset, Duenser, Billinghurst, OZCHI, SpatialSoundInAR.pdf (accessed )

Reverberation

The extreme complexity of calculating reverberation ray by ray makes it much more feasible to use an approximation. The most important factors deciding the reverb in a room are the size of the environment and the materials in the room.

For the reflection calculations, the distances to the surrounding environment were calculated for both the listener and each sound source. Both of these can be used for the reverb estimation. With the average distance, the environment can be approximated as a sphere with a radius equal to

the average distance μ. The volume of the sphere is therefore 4πμ³/3 and the surface area 4πμ². Inserting this in Sabine's equation results in:

T = 0.017 · μ / α

where T is the reverberation time (in seconds), μ the average distance in the room and α the average absorption coefficient. This is the formula used in the implementation.

Blending listener and source environment

The reverb is not only dependent on the environment of the sound source but also on the environment of the listener. How much each affects the resulting sound depends on the ability of the sound source environment to absorb reflections. To blend these, the following formula can be used:

μ = β · μ_l + (1 − β) · μ_s

where μ is the blended distance average, β is the blending weight, μ_l the average distance for the listener and μ_s the average distance for the sound source. 19

β depends on the sound absorption in the environment around the sound source and can be calculated by

β = (1/6) · log(1 − α) + 1

where β is the blending weight and α is the average surface absorption. This results in the listener environment being more important if the environment surrounding the sound source has a high surface absorption, and vice versa.

public void UpdateReverb(float soundDistance, float listenerDistance, ref FMOD.DSP reverb)
{
    // Constant air absorption in the implementation
    float b = 0.7f;

    // Blend the average distance of the sound environment and the listener environment
    float u = b * soundDistance + (1 - b) * listenerDistance;

    // Calculate reverberation time according to Sabine's equation
    float T = (0.017f * u) / 0.1f;

    // Set the decay time (reverberation time)
    reverb.setParameter((int)FMOD.DSP_SFXREVERB.DECAYTIME, T);

    // Calculate and set reverb delay
    float reverbDelay = u / 343.0f;
    reverb.setParameter((int)FMOD.DSP_SFXREVERB.REVERBDELAY, reverbDelay);
}

This code snippet shows how the reverb is updated.

Directional reverb

Using the already implemented sound sources for directional reflections, directional reverb can be achieved by simply adding individual reverb DSP effects to each reflection source. The individual reverb time for each reflection source is calculated as above, but using only distance values from the specific world axis for the calculation of the listener environment. This results in a directionally dependent reverb.

19 Aural proxies and directionally varying reverberation for interactive sound propagation in virtual environments, Antani L, Manocha D (accessed )
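The four-way directional grouping of reflection points described earlier in this chapter can be sketched like this (a Python re-statement of the idea with our own names, not the thesis code):

```python
def group_reflections(points, listener):
    """Group first-order reflection points into four directional buckets
    (+x, -x, +z, -z) around the listener, ignoring the y axis, and return
    the average position and point count (intensity weight) per bucket."""
    buckets = {"+x": [], "-x": [], "+z": [], "-z": []}
    for x, y, z in points:
        dx, dz = x - listener[0], z - listener[2]
        if abs(dx) >= abs(dz):
            buckets["+x" if dx >= 0 else "-x"].append((x, y, z))
        else:
            buckets["+z" if dz >= 0 else "-z"].append((x, y, z))
    sources = {}
    for name, pts in buckets.items():
        if pts:
            avg = tuple(sum(c) / len(pts) for c in zip(*pts))
            sources[name] = (avg, len(pts))   # average position, intensity weight
    return sources
```

Each non-empty bucket would then drive one of the four directional reflection sources, with the point count scaling its intensity as described above.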

24 Result Direct path transmission The direct path sound source gives the listener a steady directional sense of the sound position and the low pass filter creates a realistic muffled effect for occluded sounds. There are some issues however; the transition from unfiltered to filtered and vice versa can feel abrupt and unnatural and additional work on interpolation is needed for a smooth experience. The technique is dependent on a ray casting algorithm for the environment but since only one ray per sound source needs to be casted, performance should not be an issue. Reflections The approximation of using four reflection channels with individual reverb and delay is sufficient to give the listener a directional sense of reflections, echoes and reverb. For example, the reflections in combination with a muffled direct path can change the primary sense of the sound direction for the listener. This is expected when a sound source is around a corner. The direct path will be blocked so the full range of frequencies will come from reflections and diffractions which alters the listener s perception of the sound direction. The sound direction will be perceived as to the corner instead of to the sound source. The individual delay and reverb for the reflection sources make sure the directions of significant differences in delay and reverb can be perceived by the listener. The absence of second order reflections naturally causes too little sound to reach the listener, which is most apparent when no first order reflections exist. The real time calculations for real second order reflections is however an expensive task and is preferably avoided. The problem can be slightly disguised by using a low volume non directional ambient channel with only a delay effect applied. The volume can be relative to the distance to the source but should have a low maximum to not intervene too much with the physically correct sound. 
The reflections also suffer from too abrupt changes in some cases and need to be smoothed by interpolation.

Reverberation

The dynamic reverb estimation produces a noticeable difference in reverb when moving a sound source to a different environment. The transitions are smooth, and the blending of the listener and sound environments ensures that the technique can handle more complex scenarios. The environment analysis can easily be implemented on the GPU with low resolution cube maps, eliminating the need for ray casting on the CPU.
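The environment analysis mentioned above boils down to averaging the distances seen in each direction. A toy Python sketch, assuming the six faces of a low resolution distance cube map have already been read back as small grids of distance values (the values below are made up for illustration):

```python
def average_distance(face):
    """Mean distance over one low resolution cube map face (a grid of rows)."""
    values = [d for row in face for d in row]
    return sum(values) / len(values)

def environment_size(faces):
    """Estimate the average distance to the surrounding geometry from all
    six faces ('+x', '-x', '+y', '-y', '+z', '-z') of a distance cube map."""
    return sum(average_distance(f) for f in faces.values()) / len(faces)

# A 2x2 "cube map" of a small room: every direction sees a wall 2-4 m away.
faces = {name: [[2.0, 3.0], [3.0, 4.0]]
         for name in ("+x", "-x", "+y", "-y", "+z", "-z")}
```

The resulting average distance is what feeds the reverb time estimate; restricting the average to one axis gives the directional variant.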

Binaural filter

The quality of the interpolation, as currently implemented, depends on the transition rate. With a fast transition the sound may crackle, but it is more responsive; with a slow transition there is no crackle, but the transition can be noticed. In this implementation, the transition rate was set to the fastest value at which the sound never crackled. The transition is slightly noticeable on rapid head movements.

It is hard to make objective comments about how much the binaural filter enhances the perceived spatial location of the sound. The experience varies from person to person, since everyone has a unique HRTF. Even though the HRTFs from the KEMAR dummy head seem to be sufficient for most people, the experience could be enhanced by using HRTFs tailored to each listener.

Since all the impulse responses used in this thesis were recorded at a fixed distance of 1.4 meters, an important feature of binaural recordings was lost: the variation of the HRTFs with distance. At greater distances the HRTFs stay almost the same, but they change dramatically close to the head. This means that without the distance variable, certain effects, like someone whispering in your ear, will not seem realistic. It is hard to record HRTFs with good results for every combination of angle and distance. It might be possible to run a simulation using a model of a human head, approximate the HRTF for each angle and distance, and then load them all at runtime.

The listener experience could be greatly enhanced by using personalized HRIRs. However, recording impulse responses for different angles and distances would require a sound chamber and expensive equipment, and the HRIRs would probably also differ a lot between recording setups. A more efficient way to personalize the HRIRs would be to estimate them from ear and body measurements. This would ensure consistency, but might not achieve the same level of realism as a good recording.
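The transition-rate trade-off described at the start of this section can be made concrete: instead of switching HRTFs instantly, the outputs of the old and new filters can be crossfaded over a block of samples, where the block length sets the transition rate. A minimal Python sketch of the crossfade itself (the two "filtered" blocks below stand in for real convolution outputs):

```python
def crossfade(old_output, new_output):
    """Linearly crossfade two equal-length blocks of filtered samples.

    A longer block means a slower, smoother transition; a shorter block
    is more responsive but risks audible crackle.
    """
    n = len(old_output)
    out = []
    for i, (a, b) in enumerate(zip(old_output, new_output)):
        t = i / (n - 1)  # fade weight goes from 0.0 to 1.0 across the block
        out.append((1.0 - t) * a + t * b)
    return out

# At the start the old filter dominates; at the end only the new one is heard.
mixed = crossfade([1.0] * 5, [0.0] * 5)
```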
Another thing worth exploring is how the length of the HRTFs affects the spatial quality of the sound. A long HRTF is computationally heavy to convolve but might result in a richer experience.

Delay filter

When moving the listener towards or away from a sound source, the sound will sometimes crackle. This is because the filter instantly switches to another point in time of the sound. In reality there would be a Doppler effect, and the sound would stretch or shrink; for future improvement, one could recreate this Doppler effect in the filter function. Another solution to the crackles could be to use two alternating delay buffers and interpolate, or mix, between them when the delay changes to smooth out the transition. This would, however, create a slight flanger effect.
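The crackle comes from the delay jumping by whole samples. Reading the delay buffer at a fractional position with linear interpolation removes that discontinuity and is a first step toward the Doppler-style stretching mentioned above. A minimal Python sketch, with a made-up ramp signal as the circular buffer:

```python
def read_delayed(buffer, write_pos, delay):
    """Read `delay` samples behind the write position in a circular buffer,
    allowing a fractional delay by linearly interpolating between the two
    nearest samples."""
    pos = (write_pos - delay) % len(buffer)
    i = int(pos)
    frac = pos - i
    j = (i + 1) % len(buffer)
    return (1.0 - frac) * buffer[i] + frac * buffer[j]

# A ramp signal: a delay of 2.5 samples lands halfway between two samples.
buf = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
sample = read_delayed(buf, write_pos=6, delay=2.5)
```

Sweeping the delay smoothly over time then stretches or shrinks the signal much like a real Doppler shift, at the cost of the slight flanger-like coloration noted above.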

Optimizations

Pixel transfer optimization

Transferring pixels from the graphics card to RAM is usually an expensive process and may bottleneck an application by making the CPU wait until the transfer is completed. This can be improved by using Pixel Buffer Objects, which allow the memory transfer to be performed asynchronously without stalling the CPU. The pixel transfers for each sound source in this implementation are not necessarily needed every single frame. All sound sources can be split into two groups, and each frame the groups alternate between initiating a pixel transfer and actually reading the pixels. Under optimal conditions this may almost completely hide the slow transfer of pixels to RAM. The drawback is that the data from the pixel read will be one frame old; this should, however, not be noticeable in this specific application.

Reducing draw calls with a geometry shader

In the implementation above, each sound source (and the listener) updates a cube map with information about distance and/or reflection points. This update requires six draw calls per cube map (one per cube face). This can be reduced to a single draw call by using a geometry shader, an optimization that is useful when the number of draw calls in the application is critical.

Convolution on the GPU

The performance of the sound convolution was greatly improved when FFT was used instead of direct HRIR convolution. A way to improve it even further could be to perform the FFT on the GPU. In this thesis it could be done by uploading the samples in a texture and making a draw call from the binaural filter function; the result could be written to, and later read back from, a texture render target.
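The gain from FFT-based convolution comes from turning the filtering into a pointwise multiplication in the frequency domain. A self-contained Python sketch using a textbook radix-2 FFT (the thesis itself used Lomont's FFT in C#):

```python
import cmath

def fft(x, inverse=False):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2], inverse)
    odd = fft(x[1::2], inverse)
    sign = 1 if inverse else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def fft_convolve(signal, impulse):
    """Linear convolution via FFT: zero-pad to a power of two at least the
    length of the full result, transform, multiply pointwise, invert."""
    n = 1
    while n < len(signal) + len(impulse) - 1:
        n *= 2
    a = fft(list(signal) + [0.0] * (n - len(signal)))
    b = fft(list(impulse) + [0.0] * (n - len(impulse)))
    spectrum = [x * y for x, y in zip(a, b)]
    result = [v.real / n for v in fft(spectrum, inverse=True)]  # 1/n normalizes the inverse
    return result[: len(signal) + len(impulse) - 1]

# Convolving with a one-sample-delayed unit impulse just shifts the signal.
out = fft_convolve([1.0, 2.0, 3.0], [0.0, 1.0])
```

For an HRIR of length m and a block of length k, this replaces the O(m*k) direct convolution with O(n log n) work, which is the speedup observed in the thesis; the pointwise multiply is also exactly the step that maps naturally onto a GPU fragment shader.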

Conclusion

Major improvements over conventional sound techniques in gaming have been implemented and shown to run in real time at 60 FPS or more. Several sound propagation phenomena have been implemented: direct path transmission, first order reflections and reverberation estimation. These have been successfully combined with binaural synthesis to achieve three dimensional directional sound perception. The computation costs are not negligible, however, and may need to be balanced against other aspects of realism in computationally demanding applications. Optimizations are therefore key, and the thesis encourages further analysis of possible optimizations.

Discussion

This thesis has covered the theory behind sound propagation and binaural synthesis and explored the possibilities of applying this theory to development for VR headsets like the Oculus Rift. An implementation with dynamic real time sound propagation combined with binaural synthesis has been programmed, and the results have been evaluated.

With the new generation of VR expected to transform the possibilities of visual immersion, an evolution in sound immersion should be highly anticipated. To achieve perfect VR, all human senses need to be realistically triggered, and with visual graphics approaching photorealism and 3D perception, sound should be next in line. Not much has happened in sound immersion in gaming in the last decade, with a few exceptions. Binaural sound in games has been a hard sell because of the headphone requirement. However, this requirement makes more sense when the user is already wearing a headset for the visuals, and it seems like a logical step to take advantage of the head tracking for a richer, more immersive sound experience. The results of the thesis have encouraged the authors to investigate the matter further.
We believe realistic sound immersion will become more important as VR becomes mainstream, and our results show that a big improvement in sound immersion for gaming is definitely feasible. A slight problem is the competition for computational power with graphics, physics and other game logic. Optimizations and the growing capability of graphics cards should help remove this problem, and in VR a slight drop in graphics quality might be worth a significant boost in sound immersion.

References

Chris Lomont. The Fast Fourier Transform.
Bill Gardner and Keith Martin. HRTF Measurements of a KEMAR Dummy Head Microphone. MIT Media Lab, 1994.
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein. Introduction to Algorithms. Third edition. The MIT Press.
Robert Charles Alexander. The Inventor of Stereo: The Life and Works of Alan Dower Blumlein. Focal Press, 1999.
Gustavo H. M. de Sousa, Marcelo Queiroz. Two Approaches for HRTF Interpolation.
C. Phillip Brown and Richard O. Duda. A Structural Model for Binaural Sound Synthesis.
Adam Halley-Prinable. The Oculus Rift and Immersion through Fear.
Sound Propagation, Simon Fraser University. studio/handbook/sound_propagation.html
Julius O. Smith III. Physical Audio Signal Processing.
Room Acoustics, McGill University.
Calculation of Reverberation Time, Davidson physics.
Echo vs. Reverberation, The Physics Classroom.
Gerald S. Wilkinson. Sound Transmission.
Reflection, Refraction, and Diffraction, The Physics Classroom.
Colin Flanagan. The Bresenham Line Drawing Algorithm.
Shadow Mapping, OpenGL Tutorial. tutorial.org/intermediate tutorials/tutorial 16 shadow mapping/
Sodnik, Tomazic, Grasset, Duenser, Billinghurst. Spatial Sound Localization in an Augmented Reality Environment. OZCHI. SpatialSoundInAR.pdf
Antani L., Manocha D. Aural Proxies and Directionally Varying Reverberation for Interactive Sound Propagation in Virtual Environments.


More information

Orthogonal Projection Matrices. Angel and Shreiner: Interactive Computer Graphics 7E Addison-Wesley 2015

Orthogonal Projection Matrices. Angel and Shreiner: Interactive Computer Graphics 7E Addison-Wesley 2015 Orthogonal Projection Matrices 1 Objectives Derive the projection matrices used for standard orthogonal projections Introduce oblique projections Introduce projection normalization 2 Normalization Rather

More information

Acoustic Simulation. COMP 768 Presentation Lakulish Antani April 9, 2009

Acoustic Simulation. COMP 768 Presentation Lakulish Antani April 9, 2009 Acoustic Simulation COMP 768 Presentation Lakulish Antani April 9, 2009 Acoustic Simulation Sound Synthesis Sound Propagation Sound Rendering 2 Goal Simulate the propagation of sound in an environment

More information

BCC Rays Ripply Filter

BCC Rays Ripply Filter BCC Rays Ripply Filter The BCC Rays Ripply filter combines a light rays effect with a rippled light effect. The resulting light is generated from a selected channel in the source image and spreads from

More information

Ray Tracing: Special Topics CSCI 4239/5239 Advanced Computer Graphics Spring 2018

Ray Tracing: Special Topics CSCI 4239/5239 Advanced Computer Graphics Spring 2018 Ray Tracing: Special Topics CSCI 4239/5239 Advanced Computer Graphics Spring 2018 Theoretical foundations Ray Tracing from the Ground Up Chapters 13-15 Bidirectional Reflectance Distribution Function BRDF

More information

Ambi Pan & Ambi Head Manual

Ambi Pan & Ambi Head Manual AMBI HEAD AMBI PAN Ambi Pan & Ambi Head Manual v1.2 v1.2 Ambi Pan/Head Manual Ambi Pan/Head is a professional plugin suite for creating 3D audio scenes in a minute, ready to embed in 360 videos and immersive

More information

Lighting. To do. Course Outline. This Lecture. Continue to work on ray programming assignment Start thinking about final project

Lighting. To do. Course Outline. This Lecture. Continue to work on ray programming assignment Start thinking about final project To do Continue to work on ray programming assignment Start thinking about final project Lighting Course Outline 3D Graphics Pipeline Modeling (Creating 3D Geometry) Mesh; modeling; sampling; Interaction

More information

All forms of EM waves travel at the speed of light in a vacuum = 3.00 x 10 8 m/s This speed is constant in air as well

All forms of EM waves travel at the speed of light in a vacuum = 3.00 x 10 8 m/s This speed is constant in air as well Pre AP Physics Light & Optics Chapters 14-16 Light is an electromagnetic wave Electromagnetic waves: Oscillating electric and magnetic fields that are perpendicular to the direction the wave moves Difference

More information

TEAM 12: TERMANATOR PROJECT PROPOSAL. TEAM MEMBERS: Donald Eng Rodrigo Ipince Kevin Luu

TEAM 12: TERMANATOR PROJECT PROPOSAL. TEAM MEMBERS: Donald Eng Rodrigo Ipince Kevin Luu TEAM 12: TERMANATOR PROJECT PROPOSAL TEAM MEMBERS: Donald Eng Rodrigo Ipince Kevin Luu 1. INTRODUCTION: This project involves the design and implementation of a unique, first-person shooting game. The

More information

Computergrafik. Matthias Zwicker Universität Bern Herbst 2016

Computergrafik. Matthias Zwicker Universität Bern Herbst 2016 Computergrafik Matthias Zwicker Universität Bern Herbst 2016 Today More shading Environment maps Reflection mapping Irradiance environment maps Ambient occlusion Reflection and refraction Toon shading

More information

The use of colors, animations and auralizations in room acoustics

The use of colors, animations and auralizations in room acoustics The use of colors, animations and auralizations in room acoustics Jens Holger Rindel 1 and Claus Lynge Christensen 2 Odeon A/S, Scion DTU Diplomvej 81, DK-2800 Kgs. Lyngby, Denmark ABSTRACT The use of

More information

Pipeline Operations. CS 4620 Lecture Steve Marschner. Cornell CS4620 Spring 2018 Lecture 11

Pipeline Operations. CS 4620 Lecture Steve Marschner. Cornell CS4620 Spring 2018 Lecture 11 Pipeline Operations CS 4620 Lecture 11 1 Pipeline you are here APPLICATION COMMAND STREAM 3D transformations; shading VERTEX PROCESSING TRANSFORMED GEOMETRY conversion of primitives to pixels RASTERIZATION

More information

Thin Lenses 4/16/2018 1

Thin Lenses 4/16/2018 1 Thin Lenses f 4/16/2018 1 Thin Lenses: Converging Lens C 2 F 1 F 2 C 1 r 2 f r 1 Parallel rays refract twice Converge at F 2 a distance f from center of lens F 2 is a real focal pt because rays pass through

More information

Advanced Distant Light for DAZ Studio

Advanced Distant Light for DAZ Studio Contents Advanced Distant Light for DAZ Studio Introduction Important Concepts Quick Start Quick Tips Parameter Settings Light Group Shadow Group Lighting Control Group Known Issues Introduction The Advanced

More information

Physics I : Oscillations and Waves Prof. S Bharadwaj Department of Physics & Meteorology Indian Institute of Technology, Kharagpur

Physics I : Oscillations and Waves Prof. S Bharadwaj Department of Physics & Meteorology Indian Institute of Technology, Kharagpur Physics I : Oscillations and Waves Prof. S Bharadwaj Department of Physics & Meteorology Indian Institute of Technology, Kharagpur Lecture - 20 Diffraction - I We have been discussing interference, the

More information

10/5/09 1. d = 2. Range Sensors (time of flight) (2) Ultrasonic Sensor (time of flight, sound) (1) Ultrasonic Sensor (time of flight, sound) (2) 4.1.

10/5/09 1. d = 2. Range Sensors (time of flight) (2) Ultrasonic Sensor (time of flight, sound) (1) Ultrasonic Sensor (time of flight, sound) (2) 4.1. Range Sensors (time of flight) (1) Range Sensors (time of flight) (2) arge range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic

More information

4.5 Images Formed by the Refraction of Light

4.5 Images Formed by the Refraction of Light Figure 89: Practical structure of an optical fibre. Absorption in the glass tube leads to a gradual decrease in light intensity. For optical fibres, the glass used for the core has minimum absorption at

More information

Textbook Assignment #1: DUE Friday 5/9/2014 Read: PP Do Review Questions Pg 388 # 1-20

Textbook Assignment #1: DUE Friday 5/9/2014 Read: PP Do Review Questions Pg 388 # 1-20 Page 1 of 38 Page 2 of 38 Unit Packet Contents Unit Objectives Notes 1: Waves Introduction Guided Practice: Waves Introduction (CD pp 89-90) Independent Practice: Speed of Waves Notes 2: Interference and

More information

Algebra Based Physics

Algebra Based Physics Slide 1 / 66 Slide 2 / 66 Algebra Based Physics Geometric Optics 2015-12-01 www.njctl.org Table of ontents Slide 3 / 66 lick on the topic to go to that section Reflection Spherical Mirror Refraction and

More information

Virtual Reality for Human Computer Interaction

Virtual Reality for Human Computer Interaction Virtual Reality for Human Computer Interaction Appearance: Lighting Representation of Light and Color Do we need to represent all I! to represent a color C(I)? No we can approximate using a three-color

More information

PHY 171 Lecture 6 (January 18, 2012)

PHY 171 Lecture 6 (January 18, 2012) PHY 171 Lecture 6 (January 18, 2012) Light Throughout most of the next 2 weeks, we will be concerned with the wave properties of light, and phenomena based on them (interference & diffraction). Light also

More information

Simple Nested Dielectrics in Ray Traced Images

Simple Nested Dielectrics in Ray Traced Images Simple Nested Dielectrics in Ray Traced Images Charles M. Schmidt and Brian Budge University of Utah Abstract This paper presents a simple method for modeling and rendering refractive objects that are

More information

Figure 2.1: High level diagram of system.

Figure 2.1: High level diagram of system. Basile and Choudhury 6.111 Final Project: La PC-na Project Proposal 1 Introduction The luxury of purchasing separate pool tables, foosball tables, and air hockey tables is beyond the budget of many, particularly

More information

Verberate 2 User Guide

Verberate 2 User Guide Verberate 2 User Guide Acon AS Verberate 2 User Guide All rights re se rve d. No parts of this work may be re produce d in any form or by any me ans - graphic, e le ctronic, or me chanical, including photocopying,

More information

1. What is the law of reflection?

1. What is the law of reflection? Name: Skill Sheet 7.A The Law of Reflection The law of reflection works perfectly with light and the smooth surface of a mirror. However, you can apply this law to other situations. For example, how would

More information

Project report Augmented reality with ARToolKit

Project report Augmented reality with ARToolKit Project report Augmented reality with ARToolKit FMA175 Image Analysis, Project Mathematical Sciences, Lund Institute of Technology Supervisor: Petter Strandmark Fredrik Larsson (dt07fl2@student.lth.se)

More information

TDA362/DIT223 Computer Graphics EXAM (Same exam for both CTH- and GU students)

TDA362/DIT223 Computer Graphics EXAM (Same exam for both CTH- and GU students) TDA362/DIT223 Computer Graphics EXAM (Same exam for both CTH- and GU students) Saturday, January 13 th, 2018, 08:30-12:30 Examiner Ulf Assarsson, tel. 031-772 1775 Permitted Technical Aids None, except

More information

Measurement of 3D Room Impulse Responses with a Spherical Microphone Array

Measurement of 3D Room Impulse Responses with a Spherical Microphone Array Measurement of 3D Room Impulse Responses with a Spherical Microphone Array Jean-Jacques Embrechts Department of Electrical Engineering and Computer Science/Acoustic lab, University of Liège, Sart-Tilman

More information

Range Sensors (time of flight) (1)

Range Sensors (time of flight) (1) Range Sensors (time of flight) (1) Large range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic sensors, infra-red sensors

More information

Chapter 1 Introduction

Chapter 1 Introduction Chapter 1 Introduction The central problem in computer graphics is creating, or rendering, realistic computergenerated images that are indistinguishable from real photographs, a goal referred to as photorealism.

More information