ELIMINATION OF RAINDROPS EFFECTS IN INFRARED SENSITIVE CAMERA AHMAD SHARMI BIN ABDULLAH


ELIMINATION OF RAINDROPS EFFECTS IN INFRARED SENSITIVE CAMERA

AHMAD SHARMI BIN ABDULLAH

A project report submitted in partial fulfillment of the requirements for the award of the degree of Master of Engineering (Electrical Electronics & Telecommunications)

Faculty of Electrical Engineering
Universiti Teknologi Malaysia

APRIL 2007

To my beloved mother and father

ACKNOWLEDGEMENT

In the Name of Allah, Most Gracious, Most Merciful. I am grateful to Allah for His guidance; only by His strength have I successfully completed my master project and the write-up of this thesis. I wish to express my sincere gratitude and appreciation to my supervisor, Associate Professor Dr. Syed Abdul Rahman Al-Attas, for his invaluable guidance, assistance, advice and constructive comments throughout the accomplishment of this project. Recognition and thanks go to Mr. Usman Ullah Sheikh and Mr. Amir for the cooperation, encouragement and inspiration they gave all along the way to the completion of this project. Finally, I would like to thank my parents for their determined support, encouragement and understanding. I am indebted to all these important people.

ABSTRACT

Surveillance systems are an important part of modern security systems. Traditional surveillance methods involving human observers have given way to automated systems. The effects of rain introduce drawbacks to automated surveillance systems, especially on rainy nights, degrading the performance of the tracking system. This project proposes a method to eliminate these raindrop effects in order to improve the performance of the tracking system in automated surveillance systems. An algorithm has been developed using the MATLAB Image Processing Toolbox. Unique visual properties of raindrops are observed and analyzed, and are then exploited in the algorithm as a mechanism for raindrop-effect removal. Compared with the original input, the results show a significant elimination of raindrop effects.

ABSTRAK

(Translated from Malay.) Surveillance is an important element of security systems. Traditional surveillance methods, in which security personnel keep watch, patrol and observe, have been replaced by automated surveillance systems that use surveillance cameras and digital processing units. However, rain brings adverse effects to automated surveillance systems, especially rain at night, which degrades the performance of the detection system. This project proposes a method to remove these rain effects in order to improve the performance of the detection system in automated surveillance systems. An algorithm has been developed using the MATLAB Image Processing Toolbox. The unique visual properties of rain are observed and analyzed, and are then exploited in the algorithm as a mechanism for removing rain effects. As a result, the processed images, compared with the original images, show that the rain effects have been removed successfully.

TABLE OF CONTENTS

DECLARATION
DEDICATION
ACKNOWLEDGEMENT
ABSTRACT
ABSTRAK
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF SYMBOLS
LIST OF ABBREVIATIONS

1 INTRODUCTION
  Problem Statement
  Objective of Project
  Scope of Project

2 LITERATURE REVIEW
  Real-Time Processing
    Analysis on Visibility of Rain
    Camera Parameters for Rain Removal
    Summary
  Offline Processing
    Physical Properties of Rain
    Appearance Model of Rain
      Dynamics of Rain
      Photometry of Rain
    Detection of Rain in Video
      Photometric Model Constraints
      Dynamics Model Constraints
    Removal of Rain from Video
    Summary

3 METHODOLOGY
  Input and Output
  Algorithm Development Process
    Observation
    Analysis
    Algorithm
    Experiment

4 RESULT AND ANALYSIS
  Results of Algorithm Processes
    1st Version Algorithm
    2nd Version Algorithm
    3rd Version Algorithm
  Results of Multiple Raindrops Visual Conditions
    Normal Spread Raindrops
    Overlapping Spread Raindrops
    Extreme Overlapping Raindrops
  Analysis and Comparison

5 CONCLUSIONS AND FUTURE WORK
  Conclusion
  Future Work

REFERENCES

LIST OF FIGURES

2.1 Intensity fluctuations in image
2.2 Pixel looking at raindrops at different distances, z
2.3 Various conditions of rain scenarios
2.4 Drops size distribution and shapes
2.5 Temporal correlations between pixels and its neighbors
2.6 The field of view of a raindrop
2.7 Average irradiance at a pixel due to rain drop
2.8 Positive intensity change of unit frame width at a pixel
2.9 The rain detection algorithm applied to a video
3.1 Components of algorithm development
3.2 Input sample of algorithm development process
3.3 Output sample of algorithm development process
3.4 Algorithm development process
3.5 Three consecutive frames of images sequence
3.6 Flowchart of the algorithm
4.1 Sample input scene frames of 1st version algorithm
4.2 The change of intensity, ΔI
4.3 The artifact of background objects
4.4 The artifact of raindrop
4.5 The output of 1st version algorithm
4.6 Sample input scene frames of 2nd version algorithm
4.7 The change of intensity, ΔI
4.8 The artifact of background objects
4.9 The artifact of raindrop
4.10 The output of 2nd version algorithm
4.11 Sample input scene frames of 3rd version algorithm
4.12 The change of intensity, ΔI
4.13 The artifact of background objects
4.14 The artifact of raindrop
4.15 The output of 3rd version algorithm
4.16 Sample frames of Normal Spread Raindrops condition
4.17 Sample frames of Overlapping Spread Raindrops condition
4.18 Sample frames of Extreme Overlapping Raindrops condition
4.19 Results and Intensity Profiles of Normal Spread Raindrops
4.20 Results and Intensity Profiles of Overlapping Spread Raindrops
4.21 Results and Intensity Profiles of Extreme Overlapping Raindrops

LIST OF SYMBOLS

a - Radius
b_c - Diameter of defocus kernel (blur circle)
c - Threshold value
E - Irradiance
f - Focal length
I - Intensity
k - Camera gain
L - Luminance
N - F-number
n - Frame number
r - Spatial coordinate
R - Temporal correlation
T - Camera exposure time
t - Time
v - Velocity
w - Width
z - Distance
β - Slope
Δ - Difference
τ - Time a drop stays within a pixel's field of view

LIST OF ABBREVIATIONS

AVI - Audio Video Interleave
NVD/NVDs - Night Vision Device/s
RGB - Red, Green, Blue

CHAPTER 1

INTRODUCTION

An automatic surveillance system is important since it handles the security and safety of its surroundings automatically. One of the important features of an automatic surveillance system is its ability to automatically track objects of interest in the scene. The system uses a surveillance camera and a digital image processing unit instead of a human to monitor the surrounding area of interest, and it has proven to perform better than a human observer to some extent. The surveillance camera used here is an infrared sensitive camera with night vision built in. Night vision devices (NVDs) rely on a special tube, called an image-intensifier tube, to collect and amplify infrared and visible light. A projection unit, called an IR illuminator, is attached to the NVD. The unit projects a beam of near-infrared light, similar to the beam of a normal flashlight. Invisible to the naked eye, this beam reflects off objects and bounces back to the lens of the NVD, enabling the camera to see at night.

1.1 Problem Statement

The ability of an infrared sensitive camera to see at night brings some problems to the tracking system. One of the major problems encountered in detecting moving objects at night with an infrared sensitive camera is the presence of raindrops. Due to their reflective surfaces, raindrops, especially those near the camera lens, appear as very bright moving objects. As a consequence, these raindrops are detected as valid moving objects, which in turn increases the false detection rate of the tracking system.

1.2 Objective of Project

The objectives of this project are to develop, simulate and analyze an algorithm that removes raindrop effects using the MATLAB Image Processing Toolbox, and to discriminate the raindrop effects from the scene captured by the infrared sensitive camera so that effective detection and tracking of moving objects can be undertaken.

1.3 Scope of Project

This project uses the MATLAB Image Processing Toolbox as the algorithm development platform. An image sequence captured by an infrared sensitive camera is used as the input material for the development process. This image sequence is a night scene of moving objects with the interference of a moderate rain condition. The processing is done offline, where the input material is first

captured before being processed. The process involves frame-level processing, in which the sequence is observed and analyzed frame by frame in order to develop an algorithm for raindrop-effect elimination.

CHAPTER 2

LITERATURE REVIEW

Outdoor vision systems are used for various purposes such as tracking, recognition and navigation [K. Garg, 2004]. These systems rely on the performance of the processing techniques used to make them work properly. It is therefore essential to have clear vision so that the performance of the systems can be maintained. However, such systems are currently designed without taking the various weather conditions into account. Rain, snow, fog and mist are typical weather conditions that need to be considered when designing outdoor vision systems, because these conditions severely degrade the quality of the images captured in the scene. As a consequence, the vision systems fail to work properly. In order to develop vision systems that perform under all weather conditions, it is essential to model the visual effects of the various weather conditions and develop algorithms to remove them. Weather conditions vary widely in their physical properties and in the visual effects they produce in images. Based on these differences, weather conditions can be broadly classified as steady (fog, mist and haze) or dynamic (rain, snow and hail). In the case of steady weather, individual droplets are too small (1-10 µm) to be visible to a camera, and the intensity produced at a pixel is due to the aggregate effect of a large number of droplets within the pixel's solid angle. Hence, volumetric scattering

models such as attenuation and airlight [E.J. McCartney, 1975] can be used to adequately describe the effects of steady weather. Algorithms [S.K. Nayar, 2002] have recently been developed to remove the effects of steady weather from images. On the other hand, the constituent particles of dynamic weather conditions such as rain, snow and hail are larger (on the order of millimeters) and individual particles are visible in the image. In the case of rain, for example, individual raindrops cause long white streaks to appear in the images. Here, the aggregate scattering models previously used for steady conditions are not applicable. The analysis of dynamic weather conditions requires the development of stochastic models that capture the spatial and temporal effects of a large number of particles moving at high speeds (as in rain) and with possibly complex trajectories (as in snow) [K. Garg, 2004]. Some research has been done on the problem of dynamic weather conditions in images. Here, the discussion focuses on the problem of images corrupted by rain effects. Based on the readings done, there are two ways of implementing raindrop-effect elimination: the real-time processing technique and the offline processing technique. Each technique has its own advantages and disadvantages.

2.1 Real-Time Processing

The real-time processing technique is a process carried out at the time the images are being captured, hence the name real-time processing: no further post-processing of the images is required. Based on research done on this image processing technique, the process involves the exploitation of a few camera parameters depending on the properties of rain and the brightness of the

scene. In order to exploit those parameters properly, the properties of rain and the brightness of the scene must be well understood. Rain produces sharp intensity fluctuations in images and videos, which degrade the performance of outdoor vision systems. These intensity fluctuations depend on the camera parameters, the properties of rain and the brightness of the scene. The properties of rain, namely its small drop size, high velocity and low density, make its visibility strongly dependent on camera parameters such as exposure time and depth of field. These parameters can be selected so as to reduce or even remove the effects of rain without altering the appearance of the scene. Conversely, the parameters can also be set to enhance the visibility of rain. Since this technique requires exploitation of the camera parameters, the key to this work is to gain control of those parameters during image acquisition. In many outdoor vision settings, it appears that those parameters can easily be controlled and manipulated. If so, the work proceeds with the following key contributions.

Analysis on Visibility of Rain

Rain consists of a large number of drops falling at high speed. These drops produce high-frequency spatio-temporal intensity fluctuations in videos. The relation of the visibility of rain to the camera parameters, the properties of rain and the scene brightness can be derived as an analytical expression. To do this, the intensities produced by individual drops are first modeled, followed by consideration of the effects due to a volume of rain.

Figure 2.1 Intensity fluctuations in image.

Raindrops fall at high velocities relative to the exposure time of the camera, producing severely motion-blurred streaks in images. Also, due to the limited depth of field of a typical camera, the visibility of rain is significantly affected by defocus. For deriving the intensities produced by motion blur and defocus, the camera is assumed to have a linear radiometric response. The intensity I at a pixel is related to the radiance L as

I = k (π / (4 N²)) T L,  (2.1)

where k is the camera gain, N is the F-number and T is the exposure time. The gain can be adjusted so that image intensities do not depend on the specific N and T settings. This implies that k should change such that k0 is constant, where

k0 = k (π T) / (4 N²).  (2.2)

Therefore, the image intensity can be written as I = k0 L.
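The gain-normalization argument above can be checked numerically. The thesis's own development platform is MATLAB, but the following is a minimal illustrative sketch in Python, with made-up values for the radiance and the normalized gain; `gain_for_constant_k0` is a hypothetical helper that simply inverts equation (2.2):

```python
import math

def intensity(k, N, T, L):
    """Image intensity for a linear camera: I = k * (pi / (4 N^2)) * T * L (eq. 2.1)."""
    return k * (math.pi / (4.0 * N * N)) * T * L

def gain_for_constant_k0(k0, N, T):
    """Invert eq. 2.2: choose the camera gain k so that k0 = k * pi * T / (4 N^2) stays fixed."""
    return k0 * 4.0 * N * N / (math.pi * T)

L = 100.0   # scene radiance (arbitrary units, made up)
k0 = 2.0    # desired normalized gain (made up)

# Two very different camera settings (F-number, exposure time) give the same intensity
for N, T in [(2.0, 1 / 30), (8.0, 1 / 125)]:
    k = gain_for_constant_k0(k0, N, T)
    print(round(intensity(k, N, T, L), 6))  # both print 200.0, i.e. I = k0 * L
```

This is exactly the sense in which the exposure time T can later be manipulated for rain removal without changing the overall appearance of the scene.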

Figure 2.2 Pixel looking at raindrops at different distances, z.

Now, the change in intensity produced by motion blur can be derived based on Figure 2.2, which shows that the change in intensity produced by a falling raindrop is a function of the drop's distance z from the camera. The change in intensity ΔI produced by these drops is given by

ΔI = I_r − I_b = k0 (τ / T)(L_r − L_b),  (2.3)

where I_r is the motion-blurred intensity at a pixel affected by rain, and I_b = k0 L_b is the background intensity. L_r and L_b are the brightness of the raindrop and the background, respectively, and T is the exposure time of the camera. τ ≈ 2a / v is the time that a drop stays within the field of view of a pixel, and v is the drop's fall velocity. The equation shows that the change in intensity produced by drops in the region z < z_m decreases as 1/T with exposure time and does not depend on z. On the other hand, the change in intensity produced by drops far from the camera, that is z > z_m, is given by

ΔI = k0 (4 f a² / (z v)) (1 / T)(L_r − L_b).  (2.4)

This shows that the change in intensity ΔI now depends on the drop's distance from the camera and decreases as 1/z. However, for distances greater than R z_m, ΔI is too small to be detected by the camera. Therefore, the visual effects of rain are only due to raindrops that lie close to the camera (0 < z < R z_m), referred to as the rain-visible region. While the motion-blur effects are related to the drop's distance from the camera, the defocus effects are related to the limited depth of field of the camera. Defocus can be approximated as a spreading of the change in intensity produced by a focused streak uniformly over the area of the defocused streak. Hence, the change in intensity ΔI_d due to a defocused drop is related to the change in intensity ΔI of a focused streak as

ΔI_d = (A / A_d) ΔI = (w (v_i T) / ((w + b_c)(v_i T + b_c))) ΔI,  (2.5)

where A and A_d are the areas of the focused and the defocused rain streak, respectively, w is the width of the focused drop in pixels, b_c is the diameter of the defocus kernel (blur circle), v_i is the image velocity of the drop, and T is the exposure time of the camera. Since raindrops fall at high velocity, it can be assumed that v_i T >> b_c. Hence, the above expression simplifies to

ΔI_d = (w / (w + b_c)) ΔI.  (2.6)

Therefore, the intensity change produced by a defocused and motion-blurred raindrop can be derived by simply substituting ΔI from equation (2.3) for a drop that lies close to the camera (z < z_m):

ΔI_d = k0 (w / (w + b_c)) (τ / T)(L_r − L_b),  (2.7)

and substituting w = 1 and ΔI from equation (2.4) for a drop that lies in the region z > z_m:

ΔI_d = k0 (1 / (b_c + 1)) (4 f a² / (z v)) (1 / T)(L_r − L_b).  (2.8)

Camera Parameters for Rain Removal

There are a few ways of manipulating the camera parameters discussed above to remove raindrop effects. However, not all of those parameters need to be manipulated at the same time. The parameters that need to be set depend on the condition of the scene to be captured. Conditions that need to be accounted for include scenes with fast-moving or slow-moving objects, scenes that are far from or close to the camera, and scenes with heavy rain. Figure 2.3 shows some common scenarios where rain produces strong effects, and the results of rain removal by manipulating the camera parameters. Note that in all these cases the effects of rain were reduced during image acquisition and no post-processing was needed. Also, the visual effects of rain were reduced without affecting the scene appearance.
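Equations (2.7) and (2.8) both say that the visibility of a rain streak falls off as 1/T, which is the basis for suppressing rain by lengthening the exposure. A small Python sketch, with made-up values for the drop and scene parameters (`delta_I_defocused` is a hypothetical helper implementing the form of equation (2.7), not code from the referenced work):

```python
def delta_I_defocused(k0, w, b_c, tau, T, L_r, L_b):
    """Eq. 2.7 form: intensity change of a defocused, motion-blurred drop near the camera."""
    return k0 * (w / (w + b_c)) * (tau / T) * (L_r - L_b)

# Made-up parameters: 2-pixel streak, 1-pixel blur circle, tau near its 1.18 ms maximum
base   = delta_I_defocused(k0=1.0, w=2.0, b_c=1.0, tau=1.18e-3, T=1/30, L_r=200.0, L_b=50.0)
longer = delta_I_defocused(k0=1.0, w=2.0, b_c=1.0, tau=1.18e-3, T=1/15, L_r=200.0, L_b=50.0)
print(round(longer / base, 6))  # 0.5 -- doubling the exposure time halves the streak's visibility
```

The same evaluation with a larger b_c (wider aperture, shallower depth of field) shows the complementary defocus route to reducing ΔI_d.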

Figure 2.3 Various conditions of rain scenarios.

Summary

This work derived analytical expressions that show how the visibility of rain is affected by factors such as the camera parameters, the properties of rain and the brightness of the scene. It is shown that the strong dependence of the visibility of rain on camera parameters can be exploited to provide a simple and effective method of reducing the effects of rain during image acquisition. However, this method is not as effective in reducing rain in scenes with very heavy rain, or in scenes with fast-moving objects that are close to the camera. In such cases, post-processing might be required to remove the rain effects.

2.2 Offline Processing

The offline processing technique is a process carried out after the entire image sequence has been captured. The images are then analyzed to derive the processing algorithm needed to enhance them. Based on research using this technique, the processes involved are the detection and the removal of raindrop effects in an image sequence. Both processes require a comprehensive analysis of the visual effects of rain on imaging systems, through an understanding of the raindrops' physical properties and characteristics such as spatial distribution, shape, size and velocity [K. Garg, 2004]. Rain consists of a distribution of a large number of drops of various sizes, falling at high velocities. Each drop behaves like a transparent sphere, refracting and reflecting light from the environment towards the camera. An ensemble of such drops falling at high velocities results in time-varying intensity fluctuations in images and videos. In addition, due to the finite exposure time of the camera, intensities due to rain are motion blurred and therefore depend on the background. Thus, the visual

manifestations of rain are a combined effect of the dynamics of rain and the photometry of the environment.

Physical Properties of Rain

Rain is a collection of randomly distributed water droplets of different shapes and sizes that move at high velocities. The physical properties of rain have been extensively studied in the atmospheric sciences. The size of a raindrop typically varies from 0.1 mm to 3.5 mm. The density of drops decreases exponentially with drop size. The shape of a drop can be expressed as a function of its size: smaller raindrops are generally spherical, while larger drops resemble oblate spheroids.

Figure 2.4 Drops size distribution and shapes.

In a typical rainfall, most of the drops are less than 1 mm in size. Hence, most raindrops are spherical, and this approximation is used to model the raindrops. As a drop falls through the atmosphere, it reaches a constant terminal velocity. The terminal velocity v of a drop is related to its radius a and is given by

v = 200 √a,  (2.9)

where a is in meters and v is in meters per second.
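As a quick sanity check on equation (2.9), a Python sketch, assuming the square-root form of the empirical terminal-velocity relation (the radius value is made up for illustration):

```python
import math

def terminal_velocity(a_m):
    """Eq. 2.9: terminal velocity (m/s) of a raindrop of radius a (m), v = 200 * sqrt(a)."""
    return 200.0 * math.sqrt(a_m)

# A drop of 1 mm radius falls at roughly 6.3 m/s; a 4x larger drop falls only 2x faster
print(round(terminal_velocity(1e-3), 2))  # 6.32
```

These velocities, combined with a video camera's ~30 ms exposure, are what make streaks rather than round drops appear in rain images.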

The individual raindrops are distributed randomly in 3D space. This distribution is usually assumed to be uniform. Moreover, the statistical properties of the distribution can be assumed to remain constant over time. These assumptions are applicable in most computer vision scenarios.

Appearance Model of Rain

The complex spatial and temporal intensity fluctuations in images produced by rain depend on several factors: the drop distribution and velocities, the environment illumination and background scene, and the intrinsic parameters of the camera. To model the appearance of rain, a correlation model that captures the dynamics of rain based on the distribution and velocities of raindrops is developed first, followed by a physics-based motion-blur model that describes the brightness produced by streaks of rain.

Dynamics of Rain

The dynamics of rain are useful for detecting rain and its direction. This is done by computing the temporal correlation between pixels and their neighbors: the temporal correlation between a pixel and its neighborhood is high in the direction of rain, so the direction of rain can be determined.

To do this, the image projections of the drops are considered, but not their intensities. Thus, the dynamics of rain may be represented by a binary field

b(r, t) = 1, if a drop projects to location r at time t; 0, otherwise,  (2.10)

where r represents the spatial coordinates in the image and t is time. Initially, both space and time are considered continuous, and the drop distribution in a volume is assumed to be uniform over space and time. Under these conditions, the binary field b(r, t) is wide-sense stationary in space and time. This implies that the correlation function depends only on the differences in position Δr = r2 − r1 and time Δt = t2 − t1. That is,

R_b(r1, t1; r2, t2) = (1/L) ∫0^L b(r1, t1 + t) b(r2, t2 + t) dt = R_b(Δr, Δt),  (2.11)

where the correlation R_b is computed over a large time period [0, L]. R_b(Δr, Δt) can be computed by measuring the temporal correlation with time lag Δt between the values of the binary field at points r and r + Δr. An important constraint arises due to the straight-line motion of the drops. Consider a drop that falls with image velocity v_i. After time Δt, the displacement of this drop is v_i Δt. Hence, the binary fields at time instants t and t + Δt are related as

b(r + v_i Δt, t + Δt) = b(r, t).  (2.12)

As a result, the correlation R_b(r, t; r + v_i Δt, t + Δt) is high. From equation (2.11), this yields

R_b(r, t; r + v_i Δt, t + Δt) = R_b(v_i Δt, Δt).
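The straight-line motion constraint of equation (2.12) can be illustrated with a toy, discretized binary field in which a single hypothetical drop falls one pixel per frame (a Python sketch, not the MATLAB implementation used in this project):

```python
# Discrete 1-D sketch of the binary rain field b(r, t): one drop falls one
# pixel per frame, so the field at (r + vi*dt, t + dt) equals the field at (r, t).
HEIGHT, FRAMES, vi = 10, 8, 1   # image rows, frames, image velocity (pixels/frame)

def b(r, t):
    """1 if the single hypothetical drop projects to row r at frame t, else 0."""
    return 1 if r == vi * t else 0

# Straight-line motion constraint (eq. 2.12) holds everywhere
for t in range(FRAMES - 1):
    for r in range(HEIGHT):
        assert b(r + vi, t + 1) == b(r, t)

# Zeroth-order temporal correlation with lag 1 is high only along the fall direction
corr_down = sum(b(r, t) * b(r + vi, t + 1) for t in range(FRAMES - 1) for r in range(HEIGHT))
corr_up   = sum(b(r, t) * b(r - vi, t + 1) for t in range(FRAMES - 1) for r in range(HEIGHT))
print(corr_down, corr_up)  # 7 0
```

The asymmetry between `corr_down` and `corr_up` is the property the detection algorithm later uses to recover the direction of rainfall.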

This implies that the values of the binary field b at any two image coordinates separated by v_i Δt in space are correlated with time lag Δt. This is illustrated in Figure 2.5.

Figure 2.5 Temporal correlations between pixels and its neighbors.

Photometry of Rain

Raindrops behave like lenses, refracting and reflecting scene radiances towards the camera. They have a large field of view of approximately 165º, and the incident light that is refracted towards the camera is attenuated by only 6%.

Figure 2.6 The field of view of a raindrop.

Based on these optical properties of a drop, the following observations can be made:

- Raindrops refract light from a large solid angle of the environment (including the sky) towards the camera. Specular and internal reflections further add to the brightness of the drop. Thus, a drop tends to be much brighter than its background (the portion of the scene it occludes).
- The solid angle of the background occluded by a drop is far less than the total field of view of the drop itself. Thus, in spite of being transparent, the average brightness within a stationary drop (without motion blur) does not depend strongly on its background.
- Falling raindrops produce motion-blurred intensities due to the finite integration time of a camera. These intensities are seen as streaks of rain. Unlike those of a stationary drop, the intensities of a rain streak depend on the brightness of the (stationary) drop as well as on the background scene radiances and the integration time of the camera.

Consider a video camera with a linear radiometric response and exposure (integration) time T, observing a scene with rain. In order to determine the intensity I_d produced at a pixel affected by a raindrop, the irradiance of the pixel over the time duration T needs to be examined.

Figure 2.7 Average irradiance at a pixel due to rain drop.

Figure 2.7 shows a raindrop passing through a pixel within the time interval [t_n, t_n + T]. The time τ for which a drop projects onto a pixel is far less than T. Thus, the intensity I_d is a linear combination of the irradiance E_bg due to the background of the drop and the irradiance E_d due to the drop itself:

I_d(r) = ∫0^τ E_d dt + ∫τ^T E_bg dt.  (2.13)

If the motion of the background is slow, E_bg can be assumed constant over the exposure time T. Then the above equation simplifies to

I_d = τ Ē_d + (T − τ) E_bg,  Ē_d = (1/τ) ∫0^τ E_d dt,  (2.14)

where Ē_d is the time-averaged irradiance due to the drop. The intensity at a pixel that does not observe a drop is denoted I_bg, where I_bg = E_bg T. Thus, the change in intensity ΔI at a pixel due to a drop is

ΔI = I_d − I_bg = τ (Ē_d − E_bg).  (2.15)

Raindrops are much brighter than their backgrounds; thus Ē_d > E_bg and ΔI is positive. By substituting I_bg = E_bg T into equation (2.15), the relation between ΔI and I_bg is

ΔI = −β I_bg + α,  β = τ / T,  α = τ Ē_d.  (2.16)

The time τ for which a drop remains within a pixel is a function of the physical properties of the drop (size and velocity). It is constant, and hence β is also constant, for all pixels within a streak. In addition, since the brightness of the (stationary) drop is weakly affected by the background intensity, the average irradiance Ē_d can be assumed constant for pixels that lie on the same streak. Thus, the changes in intensity ΔI observed at all pixels along a streak are linearly related to the background intensities I_bg occluded by the streak.
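The linear relation (2.16) follows directly from (2.14) and (2.15) and can be verified numerically for arbitrary irradiances; note the slope −β = −τ/T. A Python sketch with made-up irradiance values:

```python
# Check dI = -beta * I_bg + alpha (eq. 2.16) against its derivation from eqs. 2.14-2.15.
tau, T = 1.0e-3, 30.0e-3   # drop residence time and camera exposure time (s)
E_d_avg = 9000.0           # time-averaged irradiance due to the drop (made-up value)
beta, alpha = tau / T, tau * E_d_avg

for E_bg in (1000.0, 3000.0, 5000.0):           # made-up background irradiances
    I_bg = E_bg * T                             # pixel that never sees the drop
    I_d = tau * E_d_avg + (T - tau) * E_bg      # eq. 2.14
    dI = I_d - I_bg                             # eq. 2.15
    assert abs(dI - (-beta * I_bg + alpha)) < 1e-9
print(round(beta, 4))  # 0.0333 -- a slope consistent with the bound quoted below
```

Every background irradiance gives a point on the same line, which is why a whole streak can be tested for this linearity at once.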

The approximate maximum value of τ is 1.18 ms, which is much less than the typical exposure time T ≈ 30 ms of a video camera. As a result, the slope β lies within the range 0 < β < 0.039. Based on these bounds, the following observations can be made:

- The time a drop stays at a pixel is less than the integration time of a typical video camera. Thus, a drop produces a positive intensity change (ΔI > 0) of unit frame width at a pixel, as illustrated in Figure 2.8.
- The changes in intensity observed at all pixels along a rain streak are linearly related to the background intensities I_bg occluded by the streak. The slope β of this linear relation depends only on the physical properties of the raindrop. This can be used to detect rain streaks.

Figure 2.8 Positive intensity change of unit frame width at a pixel.

Detection of Rain in Video

Based on the dynamics and photometric models of rain, a robust algorithm to detect (segment) regions of rain in videos is developed. Although those models do not explicitly take scene motions into account, they provide strong constraints which are sufficient to disambiguate rain from other forms of scene motion.

Photometric Model Constraints

Consider a video of a scene captured in rain, such as the one shown in Figure 2.9. The candidate pixels affected by rain in each frame of the video are detected using the photometric model constraints derived above. It was shown that a drop produces a positive intensity fluctuation of unit frame duration. Hence, to find candidate rain pixels in the n-th frame, only the intensities I_{n−1}, I_n and I_{n+1} at each pixel, corresponding to the three frames n−1, n and n+1, need to be considered (see Figure 2.8). If the background remains stationary in these three frames, then the intensities I_{n−1} and I_{n+1} must be equal, and the change in intensity ΔI due to the raindrop in the n-th frame must satisfy the constraint

ΔI = I_n − I_{n−1} = I_n − I_{n+1} ≥ c,  (2.17)

where c is a threshold that represents the minimum change in intensity due to a drop that is detectable in the presence of noise. The result of applying this constraint with c = 3 gray levels is shown in Figure 2.9(a). The selected pixels (white) include almost all the pixels affected by rain. In the presence of object motions in the scene, the above constraint also detects several false positives; some of these can be seen in and around the moving person in Figure 2.9(a). In order to reduce such false positives, the photometric constraint in equation (2.16) is applied as follows. For each individual streak in frame n, the intensity change ΔI along the streak is verified to be linearly related to the background intensity I_{n−1} using equation (2.16), and the slope β of the linear fit is estimated. Streaks that do not satisfy the linearity constraint, or whose slopes lie outside the acceptable range of β (0 < β < 0.039), are rejected.
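On a single pixel's time series, the constraint (2.17) can be sketched as follows (Python, with toy intensity values; `candidate_rain_pixel` is a hypothetical helper, not code from the referenced work):

```python
def candidate_rain_pixel(I_prev, I_n, I_next, c=3):
    """Photometric constraint (eq. 2.17): a unit-frame positive spike of at least c
    gray levels, with equal background intensity in the two neighboring frames."""
    dI = I_n - I_prev
    return I_prev == I_next and dI >= c

# Toy pixel time series: steady background at 100, a drop brightens frame 2 to 112
series = [100, 100, 112, 100, 100]
hits = [n for n in range(1, len(series) - 1)
        if candidate_rain_pixel(series[n - 1], series[n], series[n + 1])]
print(hits)  # [2]
```

A moving bright object would typically raise the intensity for several consecutive frames, so I_{n−1} ≠ I_{n+1} and the pixel is not selected, which is why most (though not all) scene motion is rejected by this test alone.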

Figure 2.9(b) shows a significant decrease in false positives after applying this constraint. By applying these constraints to all the frames, an estimate of the binary rain field b is obtained (see Figure 2.9(c)).

Dynamics Model Constraints

Although a significant reduction in false positives is achieved using the photometric constraint, some false positives remain. Therefore, the dynamics constraint is applied to reduce the false positives further. It was shown above that in a binary field produced by rain, strong temporal correlation exists between neighboring pixels in the direction of rain. Using the estimated binary field b, the zeroth-order temporal correlation R_b of a pixel is computed with each of its neighbors in a local (l × l) neighborhood, over a set of frames {n, n−1, ..., n−f}. Figure 2.9(d) shows the correlation values obtained for all (11 × 11) neighborhoods in frame n, computed using the previous f = 30 frames. Bright regions indicate strong correlation. The direction and strength of correlation are computed for each neighborhood center and depicted in Figure 2.9(e) as a needle map. The direction of the needle indicates the direction of correlation (the direction of the rainfall) and its length denotes the strength of correlation (the strength of the rainfall). The needle map is kept sparse for clarity.

Figure 2.9 The rain detection algorithm applied to a video.

Weak and non-directional correlations occur at pixels with no rain and hence are rejected. Thus, the constraints of the photometric and dynamics models can be used to effectively segment the scene into regions with and without rain, even in the presence of complex scene motions.

Removal of Rain from Video

Once the video is segmented into rain and non-rain regions, the following simple method is applied to remove rain from each frame of the video. The intensity I_n of each rain pixel in the n-th frame is replaced with an estimate of the background obtained as (I_{n−1} + I_{n+1}) / 2 (see Figure 2.8). This step removes most of the rain in the frame. However, since drop velocities are high compared to the exposure time of the camera, the same pixel may see different drops in consecutive frames. Such cases are not accounted for by the detection algorithm. Fortunately, the probability of raindrops affecting a pixel in more than three consecutive frames is negligible. In the case of a pixel being affected by raindrops in two or three consecutive frames, rain is removed by assigning the average of the intensities of the two neighboring pixels (on either side) that are not affected by raindrops. This additional step can be very effective for rain removal.
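The replacement step described above can be sketched for a single pixel's time series (a Python toy version; the real method operates on the segmented rain regions of full frames, and this sketch folds the detection test directly into the removal loop):

```python
def remove_rain(frames):
    """Replace each detected rain spike in frame n with (I[n-1] + I[n+1]) / 2,
    the background estimate described in the text (single-pixel toy version)."""
    out = list(frames)
    for n in range(1, len(frames) - 1):
        dI = frames[n] - frames[n - 1]
        if frames[n - 1] == frames[n + 1] and dI >= 3:   # photometric constraint, c = 3
            out[n] = (frames[n - 1] + frames[n + 1]) // 2
    return out

print(remove_rain([100, 100, 112, 100, 100]))  # [100, 100, 100, 100, 100]
```

Because the replacement uses only the two temporally adjacent frames at detected pixels, temporal frequencies elsewhere in the image (object and camera motion) are left untouched, in contrast to blanket temporal filtering.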

Summary

This work has developed a comprehensive model for the visual appearance of rain. Based on this model, efficient algorithms for the detection and removal of rain from videos are presented. Note that simple temporal filtering methods are not effective in removing rain since they are spatially invariant and hence degrade the quality of the image in regions without rain. In contrast, the method in this work explicitly detects the pixels affected by rain and removes the contribution of rain only from those pixels, preserving the temporal frequencies due to object and camera motions.

CHAPTER 3

METHODOLOGY

Processing time-varying image sequences to preserve or enhance the visibility of important parts of the image that are degraded by moderate raindrop effects can be done with an algorithm capable of manipulating the visual properties of raindrops in order to eliminate their effects. Developing an algorithm with this capability involves the components shown in Figure 3.1.

Input Material: Scene with raindrops effects → Algorithm Development Process: Raindrops Effects Removal → Output: Scene without raindrops effects

Figure 3.1 Components of algorithm development.

This chapter discusses the approach and methodology used to accomplish the objectives defined for this project. There are three components involved, as illustrated in Figure 3.1. The first component is the input material; the second is the process of developing the algorithm; and the third is the output produced by the implementation of the developed algorithm. These three components are interconnected. The input material needs to be processed to eliminate the raindrop effects. The algorithm development process needs both the input material and the output produced during development. The output is then compared with the input material to determine how much the process has enhanced the visibility of the object of interest and eliminated the raindrop effects. The next sections detail these three components.

3.1 Input and Output

As an input to the algorithm development process, the material used serves as a sample or specimen. This sample is confined within a certain scope. The scope of the project ensures that the input material used for the development process is a scene captured by an infrared sensitive surveillance camera. The scene is an outdoor night scene containing a moving object, which is the object of interest for the motion tracking system. The interference in the scene is caused by a moderate rain condition.

The output resulting from the implementation of the developed algorithm is important as it serves as a feedback signal to the development process. The output is compared with its original input to find out how well the developed algorithm performs. Then, any necessary improvement could be made to the

algorithm based on the observation and analysis of the output produced in comparison with its original input.

Figure 3.2 Input sample of the algorithm development process.

Figure 3.2 shows a few frames that are used as the input sample to the algorithm development process. The input sample shows a night scene containing the objects of interest, and the scene is corrupted by a moderate rain condition. Figure 3.3 shows the output sample of the algorithm development process. It is the same scene shown in Figure 3.2; however, the frames have been cleaned up from the effects of raindrops.

Figure 3.3 Output sample of the algorithm development process.

3.2 Algorithm Development Process

Developing an algorithm for eliminating the effects of raindrops in an image sequence involves several procedures, as shown in Figure 3.4: observation, analysis, algorithm and experiment.

Figure 3.4 Algorithm development process.

Observation

Using the sample scene provided (AVI video format), a few thousand frames of image sequence were extracted from the scene, covering approximately five minutes of video. Based on these frames, a thorough observation is done with the target of finding any unique visual properties of the raindrop effects that can be manipulated further to reduce or eliminate the effects themselves. From the observation, there are a few

visual properties that are unique in describing the raindrop effects in the image sequence. Those unique visual properties are as follows.

First, raindrops appear in the image at a very high intensity. The intensity difference is large compared to the background intensity, which makes them obvious to the observer. Second, raindrops move at a very high velocity. As a consequence, a drop tends to appear as a white streak across the screen, and it appears only once in a sequence of consecutive images. Third, the raindrops that appear in the image are the drops that are closer to the camera lens, while the drops farther away seem to be invisible.

Analysis

The three unique properties of the raindrop effects found earlier have the potential to be manipulated further to eliminate the effects of raindrops in the image sequence. However, an analysis of these three properties needs to be done to determine possible approaches for image enhancement. It has been found that these raindrops also have the following characteristics.

Firstly, it is found that the average intensity of a frame that has raindrop effects is different from the average intensity of either the previous or the next frame. This has been measured with and without the appearance of other moving objects in the image sequence. This exactly resembles equation 2.17, as follows:

ΔI = I n - I n-1 = I n - I n+1 ≥ c, (3.1)

where c is a threshold that represents the minimum change in intensity due to a drop that is detectable in the presence of noise.

Figure 3.5 Three consecutive frames of an image sequence: the (n-1) th, n th and (n+1) th frames.

Secondly, it is found that no identical raindrop appears in consecutive frames. That is, no raindrop with the same shape and the same intensity appears at the same location throughout the entire image sequence. The appearance of a particular raindrop occurs only once in consecutive frames, meaning that a raindrop that is absent in the (n-1) th frame and then appears in the n th frame will disappear in the (n+1) th frame.

Finally, it is found that the object of interest appears in consecutive frames, but moves to a slightly different location in every frame. This happens because of the finite exposure time of the camera. The object of interest is slow compared to the camera exposure time, so it appears in every frame at only a slightly different location. Raindrops are fast compared to the camera exposure time, so they tend to appear only once in consecutive frames.

Once the analysis is done, an inference can be made that the difference in intensity between a frame and its consecutive frames, either previous or next, is the result of, firstly, the intensity of the raindrops that appear in that frame, and secondly, the intensity difference caused by the change in location of the object of

interest. Therefore, manipulating these factors in an algorithm would be an advantage in eliminating the raindrop effects.

Algorithm

An algorithm that manipulates the results of the analysis of the raindrops' unique properties is developed using the MATLAB Image Processing Toolbox. Its steps are as follows.

Start
Input: Scene with raindrops effects
Frame Extraction (1st, 2nd, 3rd, ..., (n-1)th, nth, (n+1)th, ..., kth frame)
Obtain Difference of Intensity: for the (n-1)th, nth and (n+1)th frames,
    ΔI1 = I n - I n-1
    ΔI2 = I n - I n+1
    (ΔI1 and ΔI2 are equal if there is no movement of background objects in the image sequence)

Obtain Artifact of Background Object:
    ΔI12 = ΔI1 - ΔI2
    ΔI21 = ΔI2 - ΔI1
Obtain Artifact of Raindrops:
    ΔI1(new) = ΔI1 - ΔI12
    ΔI2(new) = ΔI2 - ΔI21
    (now ΔI1(new) and ΔI2(new) are equal)
Obtain the Output:
    I n(new) = I n - ΔI1(new), or equivalently I n(new) = I n - ΔI2(new)
Output: Scene without raindrops effects
Loop: n = n + 1, then repeat from the difference-of-intensity step
End

Figure 3.6 Flowchart of the algorithm.

The algorithm illustrated in Figure 3.6 is an RGB image processing algorithm which processes one frame per execution. The process runs in a loop: at the end of each pass, the n th frame is shifted to the next frame, n = n + 1, and this continues until n + 1 equals the number of the last frame of the video.

The algorithm begins with the frame extraction process, whose function is to convert the input AVI video into a number of frames. This process is done once for the whole processing.

The next step is to obtain the difference in intensity between the current frame and its adjacent frames, the previous and the next frame. The symbols used for the current, previous and next frames are n th, (n-1) th and (n+1) th respectively. If there were no movement of background objects in the image sequence, the results of this process, ΔI1 and ΔI2, would be equal, since the difference would be caused only by the raindrops. However, this possibility is very small, because the surveillance camera is located on a tall building facing down onto a road and a parking space in front of the building. Therefore, the results will contain the difference in intensity caused by both the raindrops and the movement of background objects in the scene.

The third process is to obtain the artifact of the background objects. This is needed because the movement of objects in the scene affects the difference of intensity obtained from the previous process. This process extracts that artifact for use in the next process.

The fourth process is to obtain the artifact of the raindrops. The second process produced a difference of intensity containing the artifact of the raindrops along with the artifact of the background objects, and the third process extracted the artifact of the background objects from that result. Therefore, in this process, the artifact of the raindrops can be extracted by subtracting the result of the third process from the result of the second process.

Finally, the output is obtained by subtracting the artifact of the raindrops extracted in the previous process from the currently processed frame. The result is a new n th frame without raindrops in the image.
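The chain of processes above can be sketched in NumPy. This is a hypothetical reconstruction of the MATLAB implementation, shown on greyscale uint8 frames (the thesis version operates on RGB, i.e. per channel). In particular, it assumes that every subtraction saturates at zero, as MATLAB's imsubtract does on uint8 images; that assumption is what makes a non-trivial raindrop artifact come out of the chain of differences:

```python
import numpy as np

def _subtract_sat(a, b):
    """Saturating (clipped-at-zero) subtraction -- an assumption about
    the MATLAB implementation, mirroring imsubtract on uint8 images."""
    return np.clip(a.astype(np.int16) - b.astype(np.int16), 0, 255).astype(np.uint8)

def derain_frame(prev, cur, nxt):
    """One pass of the Figure 3.6 algorithm on greyscale uint8 frames."""
    dI1 = _subtract_sat(cur, prev)       # difference vs previous frame
    dI2 = _subtract_sat(cur, nxt)        # difference vs next frame
    dI12 = _subtract_sat(dI1, dI2)       # artifact of background object
    dI21 = _subtract_sat(dI2, dI1)
    dI1_new = _subtract_sat(dI1, dI12)   # artifact of raindrops
    dI2_new = _subtract_sat(dI2, dI21)   # computed for symmetry; equals dI1_new
    return _subtract_sat(cur, dI1_new)   # n-th frame without raindrops
```

Under the saturation assumption, a streak that is bright only in the current frame yields nearly equal dI1 and dI2, so the full artifact is subtracted and the pixel falls back to the background; a moving object that is also present in an adjacent frame makes one of the two differences vanish, so the pixel is left untouched.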

Experiment

Once the algorithm is developed, experiments are performed to determine whether any improvement to the algorithm is necessary. The results produced by the implementation of the algorithm are observed and analyzed, and a few changes have been made to the algorithm based on this observation and analysis. This section summarizes the observation and analysis done on the results produced by the changes made to the algorithm, and also discusses the changes themselves.

Observation and analysis are done by comparing the results obtained with the original input. The things observed include how cleanly the raindrop effects have been eliminated from the scene, how well the object of interest has been enhanced in the image, whether any unwanted artifact has emerged within the image, and whether any part of the object of interest has accidentally been deleted by the algorithm. All these observations are done thoroughly before analyzing the possibility of improving the performance.

From the observation and analysis, three versions of the algorithm have been developed, including the first version, which was developed from the beginning. The other two versions were modified from the first version. The modifications involved the process of obtaining the difference of intensity: the frames used in each processing step were changed. Instead of using the (n-1) th and (n+1) th frames as the previous and next frames, the second version uses the (n-2) th and (n+2) th frames, while the third version uses the (n-3) th and (n+3) th frames. The results and analysis of these three versions of the algorithm are discussed in detail in the next chapter.
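Since the three versions differ only in which frames serve as the temporal neighbours, the frame offset can be made a parameter. A self-contained, hypothetical NumPy sketch of this idea (greyscale uint8 frames, and assuming saturating subtraction as in MATLAB's imsubtract):

```python
import numpy as np

def _sub(a, b):
    # saturating (clip-at-zero) subtraction on uint8-style images (assumption)
    return np.clip(a.astype(np.int16) - b.astype(np.int16), 0, 255).astype(np.uint8)

def derain_sequence(frames, offset=1):
    """Process a whole sequence; offset = 1, 2 or 3 selects the
    (n-1, n+1), (n-2, n+2) or (n-3, n+3) reference frames, i.e. the
    1st, 2nd or 3rd version of the algorithm. Frames at the ends of
    the sequence, which lack both neighbours, are left unchanged."""
    out = [f.copy() for f in frames]
    for n in range(offset, len(frames) - offset):
        prev, cur, nxt = frames[n - offset], frames[n], frames[n + offset]
        d1, d2 = _sub(cur, prev), _sub(cur, nxt)
        rain = _sub(d1, _sub(d1, d2))   # artifact of raindrops
        out[n] = _sub(cur, rain)
    return out
```

Larger offsets trade temporal proximity (a better background estimate) for robustness when a drop happens to straddle adjacent frames, which is the trade-off the three versions explore.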

CHAPTER 4

RESULT AND ANALYSIS

Three versions of the algorithm have been developed, and each has its own advantages and disadvantages. This chapter presents the results of the three versions, starting with the results of the algorithms' processing steps, followed by the results for multiple raindrop visual conditions versus the multiple algorithm versions, and ending with the analysis of performance and the comparison between the three versions.

4.1 Results of Algorithms Processes

The step-by-step output produced by each process of the algorithm is presented in this section. The processes involved are as follows: first, obtaining the difference of intensity; second, obtaining the artifact of the background objects; third, obtaining the artifact of the raindrops; and fourth, obtaining the output. The scene used as the input to all the algorithms is the same, and the sample frames extracted from the scene as an aid to this presentation are also taken at the same point. The features highlighted in the sample frames are the presence of moving objects, including the object of interest, and the interference of raindrops.

1st Version Algorithm

Figure 4.1 shows the sample frames extracted from the scene as an input to the 1st version algorithm. The frames used in each processing step of the algorithm, acting as the current frame, previous frame and next frame, are the n th, (n-1) th and (n+1) th frames respectively.

Figure 4.1 Sample input scene frames of the 1st version algorithm ((n-1) th, n th, (n+1) th).

The results produced by each level of processing are shown step by step as follows. The change of intensity is obtained.

Figure 4.2 The change of intensity, ΔI (ΔI1, ΔI2).

The artifact of the background objects is obtained.

Figure 4.3 The artifact of background objects (ΔI12, ΔI21).

The artifact of the raindrops is obtained.

Figure 4.4 The artifact of raindrops.

The output is obtained.

Figure 4.5 The output of the 1st version algorithm (original vs. processed).

2nd Version Algorithm

Figure 4.6 shows the sample frames extracted from the scene as an input to the 2nd version algorithm. The frames used in each processing step of the algorithm, acting as the current frame, previous frame and next frame, are the n th, (n-2) th and (n+2) th frames respectively.

Figure 4.6 Sample input scene frames of the 2nd version algorithm ((n-2) th, n th, (n+2) th).

The results produced by each level of processing are shown step by step as follows. The change of intensity is obtained.

Figure 4.7 The change of intensity, ΔI (ΔI1, ΔI2).

The artifact of the background objects is obtained.

Figure 4.8 The artifact of background objects (ΔI12, ΔI21).

The artifact of the raindrops is obtained.

Figure 4.9 The artifact of raindrops.

The output is obtained.

Figure 4.10 The output of the 2nd version algorithm (original vs. processed).

3rd Version Algorithm

Figure 4.11 shows the sample frames extracted from the scene as an input to the 3rd version algorithm. The frames used in each processing step of the algorithm, acting as the current frame, previous frame and next frame, are the n th, (n-3) th and (n+3) th frames respectively.

Figure 4.11 Sample input scene frames of the 3rd version algorithm ((n-3) th, n th, (n+3) th).

The results produced by each level of processing are shown step by step as follows. The change of intensity is obtained.

Figure 4.12 The change of intensity, ΔI (ΔI1, ΔI2).

The artifact of the background objects is obtained.

Figure 4.13 The artifact of background objects (ΔI12, ΔI21).

The artifact of the raindrops is obtained.

Figure 4.14 The artifact of raindrops.

The output is obtained.

Figure 4.15 The output of the 3rd version algorithm (original vs. processed).

4.2 Results of Multiple Raindrops Visual Conditions

Three visual conditions of raindrops are experimented on using the three versions of the algorithm developed. The objectives are to analyze the robustness and reliability of the algorithms, to compare the performance of the algorithms in various raindrop visual conditions, and to identify any improvement to the algorithms necessary as future work. This section shows the results of the experiments in the form of a comparison between the algorithms.

Normal Spread Raindrops

This is a visual condition in which the drops visually appear to be evenly distributed in a frame and in a sequence of frames. There is no overlapping between the drops in a sequence of frames, especially between consecutive frames. Sample frames of the raindrops, the results of processing and their intensity profiles are shown in the corresponding figures.

Overlapping Spread Raindrops

This is a visual condition in which the drops visually appear to be evenly distributed in a frame, but a little overlapping occurs in a sequence of frames, especially between consecutive frames. Sample frames of the raindrops are shown in the corresponding figure; the results of processing and their intensity profiles are shown in Figure 4.20.


THE COMPARISON OF IMAGE MANIFOLD METHOD AND VOLUME ESTIMATION METHOD IN CONSTRUCTING 3D BRAIN TUMOR IMAGE THE COMPARISON OF IMAGE MANIFOLD METHOD AND VOLUME ESTIMATION METHOD IN CONSTRUCTING 3D BRAIN TUMOR IMAGE SHAMSHIYATULBAQIYAH BINTI ABDUL WAHAB UNIVERSITI TEKNOLOGI MALAYSIA THE COMPARISON OF IMAGE MANIFOLD

More information

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22)

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22) Digital Image Processing Prof. P. K. Biswas Department of Electronics and Electrical Communications Engineering Indian Institute of Technology, Kharagpur Module Number 01 Lecture Number 02 Application

More information

Motion Estimation for Video Coding Standards

Motion Estimation for Video Coding Standards Motion Estimation for Video Coding Standards Prof. Ja-Ling Wu Department of Computer Science and Information Engineering National Taiwan University Introduction of Motion Estimation The goal of video compression

More information

Digital Image Processing COSC 6380/4393

Digital Image Processing COSC 6380/4393 Digital Image Processing COSC 6380/4393 Lecture 4 Jan. 24 th, 2019 Slides from Dr. Shishir K Shah and Frank (Qingzhong) Liu Digital Image Processing COSC 6380/4393 TA - Office: PGH 231 (Update) Shikha

More information

HARDWARE/SOFTWARE SYSTEM-ON-CHIP CO-VERIFICATION PLATFORM BASED ON LOGIC-BASED ENVIRONMENT FOR APPLICATION PROGRAMMING INTERFACING TEO HONG YAP

HARDWARE/SOFTWARE SYSTEM-ON-CHIP CO-VERIFICATION PLATFORM BASED ON LOGIC-BASED ENVIRONMENT FOR APPLICATION PROGRAMMING INTERFACING TEO HONG YAP HARDWARE/SOFTWARE SYSTEM-ON-CHIP CO-VERIFICATION PLATFORM BASED ON LOGIC-BASED ENVIRONMENT FOR APPLICATION PROGRAMMING INTERFACING TEO HONG YAP A project report submitted in partial fulfilment of the requirements

More information

SINGLE IMAGE FOG REMOVAL BASED ON FUSION STRATEGY

SINGLE IMAGE FOG REMOVAL BASED ON FUSION STRATEGY SINGLE IMAGE FOG REMOVAL BASED ON FUSION STRATEGY ABSTRACT V. Thulasika and A. Ramanan Department of Computer Science, Faculty of Science, University of Jaffna, Sri Lanka v.thula.sika@gmail.com, a.ramanan@jfn.ac.lk

More information

ADAPTIVE ONLINE FAULT DETECTION ON NETWORK-ON-CHIP BASED ON PACKET LOGGING MECHANISM LOO LING KIM UNIVERSITI TEKNOLOGI MALAYSIA

ADAPTIVE ONLINE FAULT DETECTION ON NETWORK-ON-CHIP BASED ON PACKET LOGGING MECHANISM LOO LING KIM UNIVERSITI TEKNOLOGI MALAYSIA ADAPTIVE ONLINE FAULT DETECTION ON NETWORK-ON-CHIP BASED ON PACKET LOGGING MECHANISM LOO LING KIM UNIVERSITI TEKNOLOGI MALAYSIA ADAPTIVE ONLINE FAULT DETECTION ON NETWORK-ON-CHIP BASED ON PACKET LOGGING

More information

MICRO-SEQUENCER BASED CONTROL UNIT DESIGN FOR A CENTRAL PROCESSING UNIT TAN CHANG HAI

MICRO-SEQUENCER BASED CONTROL UNIT DESIGN FOR A CENTRAL PROCESSING UNIT TAN CHANG HAI MICRO-SEQUENCER BASED CONTROL UNIT DESIGN FOR A CENTRAL PROCESSING UNIT TAN CHANG HAI A project report submitted in partial fulfillment of the requirement for the award of the degree of Master of Engineering

More information

Global Illumination The Game of Light Transport. Jian Huang

Global Illumination The Game of Light Transport. Jian Huang Global Illumination The Game of Light Transport Jian Huang Looking Back Ray-tracing and radiosity both computes global illumination Is there a more general methodology? It s a game of light transport.

More information

A Simple Automated Void Defect Detection for Poor Contrast X-ray Images of BGA

A Simple Automated Void Defect Detection for Poor Contrast X-ray Images of BGA Proceedings of the 3rd International Conference on Industrial Application Engineering 2015 A Simple Automated Void Defect Detection for Poor Contrast X-ray Images of BGA Somchai Nuanprasert a,*, Sueki

More information

Final Exam Study Guide

Final Exam Study Guide Final Exam Study Guide Exam Window: 28th April, 12:00am EST to 30th April, 11:59pm EST Description As indicated in class the goal of the exam is to encourage you to review the material from the course.

More information

Detecting and Identifying Moving Objects in Real-Time

Detecting and Identifying Moving Objects in Real-Time Chapter 9 Detecting and Identifying Moving Objects in Real-Time For surveillance applications or for human-computer interaction, the automated real-time tracking of moving objects in images from a stationary

More information

SUPERVISED MACHINE LEARNING APPROACH FOR DETECTION OF MALICIOUS EXECUTABLES YAHYE ABUKAR AHMED

SUPERVISED MACHINE LEARNING APPROACH FOR DETECTION OF MALICIOUS EXECUTABLES YAHYE ABUKAR AHMED i SUPERVISED MACHINE LEARNING APPROACH FOR DETECTION OF MALICIOUS EXECUTABLES YAHYE ABUKAR AHMED A project submitted in partial fulfillment of the requirements for the award of the degree of Master of

More information

Depth Estimation with a Plenoptic Camera

Depth Estimation with a Plenoptic Camera Depth Estimation with a Plenoptic Camera Steven P. Carpenter 1 Auburn University, Auburn, AL, 36849 The plenoptic camera is a tool capable of recording significantly more data concerning a particular image

More information

IMPROVED IMAGE COMPRESSION SCHEME USING HYBRID OF DISCRETE FOURIER, WAVELETS AND COSINE TRANSFORMATION MOH DALI MOUSTAFA ALSAYYH

IMPROVED IMAGE COMPRESSION SCHEME USING HYBRID OF DISCRETE FOURIER, WAVELETS AND COSINE TRANSFORMATION MOH DALI MOUSTAFA ALSAYYH 4 IMPROVED IMAGE COMPRESSION SCHEME USING HYBRID OF DISCRETE FOURIER, WAVELETS AND COSINE TRANSFORMATION MOH DALI MOUSTAFA ALSAYYH A thesis submitted in fulfilment of the requirements for the award of

More information

PHY 222 Lab 11 Interference and Diffraction Patterns Investigating interference and diffraction of light waves

PHY 222 Lab 11 Interference and Diffraction Patterns Investigating interference and diffraction of light waves PHY 222 Lab 11 Interference and Diffraction Patterns Investigating interference and diffraction of light waves Print Your Name Print Your Partners' Names Instructions April 17, 2015 Before lab, read the

More information

3 Interactions of Light Waves

3 Interactions of Light Waves CHAPTER 22 3 Interactions of Light Waves SECTION The Nature of Light BEFORE YOU READ After you read this section, you should be able to answer these questions: How does reflection affect the way we see

More information

LOGICAL OPERATORS AND ITS APPLICATION IN DETERMINING VULNERABLE WEBSITES CAUSED BY SQL INJECTION AMONG UTM FACULTY WEBSITES NURUL FARIHA BINTI MOKHTER

LOGICAL OPERATORS AND ITS APPLICATION IN DETERMINING VULNERABLE WEBSITES CAUSED BY SQL INJECTION AMONG UTM FACULTY WEBSITES NURUL FARIHA BINTI MOKHTER LOGICAL OPERATORS AND ITS APPLICATION IN DETERMINING VULNERABLE WEBSITES CAUSED BY SQL INJECTION AMONG UTM FACULTY WEBSITES NURUL FARIHA BINTI MOKHTER UNIVERSITI TEKNOLOGI MALAYSIA i LOGICAL OPERATORS

More information

Detection and Removal of Rain from Video Using Predominant Direction of Gabor Filters

Detection and Removal of Rain from Video Using Predominant Direction of Gabor Filters Detection and Removal of Rain from Video Using Predominant Direction of Gabor Filters Gelareh Malekshahi Department of Electrical Engineering, Sahand University of Technology, Tabriz, Iran g_malekshahi@sut.ac.ir

More information

1. Introduction. Volume 6 Issue 5, May Licensed Under Creative Commons Attribution CC BY. Shahenaz I. Shaikh 1, B. S.

1. Introduction. Volume 6 Issue 5, May Licensed Under Creative Commons Attribution CC BY. Shahenaz I. Shaikh 1, B. S. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior and Pixel Minimum Channel Shahenaz I. Shaikh 1, B. S. Kapre 2 1 Department of Computer Science and Engineering, Mahatma Gandhi Mission

More information

Automatic Image De-Weathering Using Physical Model and Maximum Entropy

Automatic Image De-Weathering Using Physical Model and Maximum Entropy Automatic Image De-Weathering Using Physical Model and Maximum Entropy Xin Wang, Zhenmin TANG Dept. of Computer Science & Technology Nanjing Univ. of Science and Technology Nanjing, China E-mail: rongtian_helen@yahoo.com.cn

More information

IDENTIFYING OPTICAL TRAP

IDENTIFYING OPTICAL TRAP IDENTIFYING OPTICAL TRAP Yulwon Cho, Yuxin Zheng 12/16/2011 1. BACKGROUND AND MOTIVATION Optical trapping (also called optical tweezer) is widely used in studying a variety of biological systems in recent

More information

Scanner Parameter Estimation Using Bilevel Scans of Star Charts

Scanner Parameter Estimation Using Bilevel Scans of Star Charts ICDAR, Seattle WA September Scanner Parameter Estimation Using Bilevel Scans of Star Charts Elisa H. Barney Smith Electrical and Computer Engineering Department Boise State University, Boise, Idaho 8375

More information

SEMANTICS ORIENTED APPROACH FOR IMAGE RETRIEVAL IN LOW COMPLEX SCENES WANG HUI HUI

SEMANTICS ORIENTED APPROACH FOR IMAGE RETRIEVAL IN LOW COMPLEX SCENES WANG HUI HUI SEMANTICS ORIENTED APPROACH FOR IMAGE RETRIEVAL IN LOW COMPLEX SCENES WANG HUI HUI A thesis submitted in fulfilment of the requirements for the award of the degree of Doctor of Philosophy (Computer Science)

More information

Fundamentals of Photography presented by Keith Bauer.

Fundamentals of Photography presented by Keith Bauer. Fundamentals of Photography presented by Keith Bauer kcbauer@juno.com http://keithbauer.smugmug.com Homework Assignment Composition Class will be February 7, 2012 Please provide 2 images by next Tuesday,

More information

Measuring Light: Radiometry and Cameras

Measuring Light: Radiometry and Cameras Lecture 11: Measuring Light: Radiometry and Cameras Computer Graphics CMU 15-462/15-662, Fall 2015 Slides credit: a majority of these slides were created by Matt Pharr and Pat Hanrahan Simulating a pinhole

More information

Instantaneously trained neural networks with complex inputs

Instantaneously trained neural networks with complex inputs Louisiana State University LSU Digital Commons LSU Master's Theses Graduate School 2003 Instantaneously trained neural networks with complex inputs Pritam Rajagopal Louisiana State University and Agricultural

More information

Optical Flow-Based Person Tracking by Multiple Cameras

Optical Flow-Based Person Tracking by Multiple Cameras Proc. IEEE Int. Conf. on Multisensor Fusion and Integration in Intelligent Systems, Baden-Baden, Germany, Aug. 2001. Optical Flow-Based Person Tracking by Multiple Cameras Hideki Tsutsui, Jun Miura, and

More information

Lecture 7 Notes: 07 / 11. Reflection and refraction

Lecture 7 Notes: 07 / 11. Reflection and refraction Lecture 7 Notes: 07 / 11 Reflection and refraction When an electromagnetic wave, such as light, encounters the surface of a medium, some of it is reflected off the surface, while some crosses the boundary

More information

Scaling and Power Spectra of Natural Images

Scaling and Power Spectra of Natural Images Scaling and Power Spectra of Natural Images R. P. Millane, S. Alzaidi and W. H. Hsiao Department of Electrical and Computer Engineering University of Canterbury Private Bag 4800, Christchurch, New Zealand

More information

HW Chapter 20 Q 2,3,4,5,6,10,13 P 1,2,3. Chapter 20. Classic and Modern Optics. Dr. Armen Kocharian

HW Chapter 20 Q 2,3,4,5,6,10,13 P 1,2,3. Chapter 20. Classic and Modern Optics. Dr. Armen Kocharian HW Chapter 20 Q 2,3,4,5,6,10,13 P 1,2,3 Chapter 20 Classic and Modern Optics Dr. Armen Kocharian Electromagnetic waves and matter: A Brief History of Light 1000 AD It was proposed that light consisted

More information

Detection of Moving Objects in Colour based and Graph s axis Change method

Detection of Moving Objects in Colour based and Graph s axis Change method Detection of Moving Objects in Colour based and Graph s axis Change method Gagandeep Kaur1 Student of Master of Technology, Department of Computer Engineering, YCOE, GuruKashi Campus, Punjabi university,

More information

CHAPTER 5 PROPAGATION DELAY

CHAPTER 5 PROPAGATION DELAY 98 CHAPTER 5 PROPAGATION DELAY Underwater wireless sensor networks deployed of sensor nodes with sensing, forwarding and processing abilities that operate in underwater. In this environment brought challenges,

More information

Miniaturized Camera Systems for Microfactories

Miniaturized Camera Systems for Microfactories Miniaturized Camera Systems for Microfactories Timo Prusi, Petri Rokka, and Reijo Tuokko Tampere University of Technology, Department of Production Engineering, Korkeakoulunkatu 6, 33720 Tampere, Finland

More information

Intermediate Physics PHYS102

Intermediate Physics PHYS102 Intermediate Physics PHYS102 Dr Richard H. Cyburt Assistant Professor of Physics My office: 402c in the Science Building My phone: (304) 384-6006 My email: rcyburt@concord.edu My webpage: www.concord.edu/rcyburt

More information

Chapter 7: Geometrical Optics. The branch of physics which studies the properties of light using the ray model of light.

Chapter 7: Geometrical Optics. The branch of physics which studies the properties of light using the ray model of light. Chapter 7: Geometrical Optics The branch of physics which studies the properties of light using the ray model of light. Overview Geometrical Optics Spherical Mirror Refraction Thin Lens f u v r and f 2

More information

An Intuitive Explanation of Fourier Theory

An Intuitive Explanation of Fourier Theory An Intuitive Explanation of Fourier Theory Steven Lehar slehar@cns.bu.edu Fourier theory is pretty complicated mathematically. But there are some beautifully simple holistic concepts behind Fourier theory

More information

ISOGEOMETRIC ANALYSIS OF PLANE STRESS STRUCTURE CHUM ZHI XIAN

ISOGEOMETRIC ANALYSIS OF PLANE STRESS STRUCTURE CHUM ZHI XIAN ISOGEOMETRIC ANALYSIS OF PLANE STRESS STRUCTURE CHUM ZHI XIAN A project report submitted in partial fulfilment of the requirements for the award of the degree of Master of Engineering (Civil-Structure)

More information

Motion in 2D image sequences

Motion in 2D image sequences Motion in 2D image sequences Definitely used in human vision Object detection and tracking Navigation and obstacle avoidance Analysis of actions or activities Segmentation and understanding of video sequences

More information

Representing and Computing Polarized Light in a Ray Tracer

Representing and Computing Polarized Light in a Ray Tracer Representing and Computing Polarized Light in a Ray Tracer A Technical Report in STS 4600 Presented to the Faculty of the School of Engineering and Applied Science University of Virginia in Partial Fulfillment

More information

Lecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19

Lecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19 Lecture 17: Recursive Ray Tracing Where is the way where light dwelleth? Job 38:19 1. Raster Graphics Typical graphics terminals today are raster displays. A raster display renders a picture scan line

More information

Detecting motion by means of 2D and 3D information

Detecting motion by means of 2D and 3D information Detecting motion by means of 2D and 3D information Federico Tombari Stefano Mattoccia Luigi Di Stefano Fabio Tonelli Department of Electronics Computer Science and Systems (DEIS) Viale Risorgimento 2,

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

Detecting and Tracking a Moving Object in a Dynamic Background using Color-Based Optical Flow

Detecting and Tracking a Moving Object in a Dynamic Background using Color-Based Optical Flow www.ijarcet.org 1758 International Journal of Advanced Research in Computer Engineering & Technology (IJARCET) Detecting and Tracking a Moving Object in a Dynamic Background using Color-Based Optical Flow

More information

DETECTION OF WORMHOLE ATTACK IN MOBILE AD-HOC NETWORKS MOJTABA GHANAATPISHEH SANAEI

DETECTION OF WORMHOLE ATTACK IN MOBILE AD-HOC NETWORKS MOJTABA GHANAATPISHEH SANAEI ii DETECTION OF WORMHOLE ATTACK IN MOBILE AD-HOC NETWORKS MOJTABA GHANAATPISHEH SANAEI A project report submitted in partial fulfillment of the requirements for the award of the degree of Master of Computer

More information

3D object recognition used by team robotto

3D object recognition used by team robotto 3D object recognition used by team robotto Workshop Juliane Hoebel February 1, 2016 Faculty of Computer Science, Otto-von-Guericke University Magdeburg Content 1. Introduction 2. Depth sensor 3. 3D object

More information

Using temporal seeding to constrain the disparity search range in stereo matching

Using temporal seeding to constrain the disparity search range in stereo matching Using temporal seeding to constrain the disparity search range in stereo matching Thulani Ndhlovu Mobile Intelligent Autonomous Systems CSIR South Africa Email: tndhlovu@csir.co.za Fred Nicolls Department

More information

PHY132 Introduction to Physics II Class 5 Outline:

PHY132 Introduction to Physics II Class 5 Outline: PHY132 Introduction to Physics II Class 5 Outline: Ch. 22, sections 22.1-22.4 (Note we are skipping sections 22.5 and 22.6 in this course) Light and Optics Double-Slit Interference The Diffraction Grating

More information

A LEVY FLIGHT PARTICLE SWARM OPTIMIZER FOR MACHINING PERFORMANCES OPTIMIZATION ANIS FARHAN BINTI KAMARUZAMAN UNIVERSITI TEKNOLOGI MALAYSIA

A LEVY FLIGHT PARTICLE SWARM OPTIMIZER FOR MACHINING PERFORMANCES OPTIMIZATION ANIS FARHAN BINTI KAMARUZAMAN UNIVERSITI TEKNOLOGI MALAYSIA A LEVY FLIGHT PARTICLE SWARM OPTIMIZER FOR MACHINING PERFORMANCES OPTIMIZATION ANIS FARHAN BINTI KAMARUZAMAN UNIVERSITI TEKNOLOGI MALAYSIA A LEVY FLIGHT PARTICLE SWARM OPTIMIZER FOR MACHINING PERFORMANCES

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 WRI C225 Lecture 02 130124 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Basics Image Formation Image Processing 3 Intelligent

More information

Particle Image Velocimetry Part - 1

Particle Image Velocimetry Part - 1 AerE 545X class notes #23 Particle Image Velocimetry Part - 1 Hui Hu Department of Aerospace Engineering, Iowa State University Ames, Iowa 50011, U.S.A Announcement Room 1058, Sweeney Hall for Lab#4 (LDV

More information

Aircraft Tracking Based on KLT Feature Tracker and Image Modeling

Aircraft Tracking Based on KLT Feature Tracker and Image Modeling Aircraft Tracking Based on KLT Feature Tracker and Image Modeling Khawar Ali, Shoab A. Khan, and Usman Akram Computer Engineering Department, College of Electrical & Mechanical Engineering, National University

More information

Online Pattern Recognition in Multivariate Data Streams using Unsupervised Learning

Online Pattern Recognition in Multivariate Data Streams using Unsupervised Learning Online Pattern Recognition in Multivariate Data Streams using Unsupervised Learning Devina Desai ddevina1@csee.umbc.edu Tim Oates oates@csee.umbc.edu Vishal Shanbhag vshan1@csee.umbc.edu Machine Learning

More information

CHAPTER 9. Classification Scheme Using Modified Photometric. Stereo and 2D Spectra Comparison

CHAPTER 9. Classification Scheme Using Modified Photometric. Stereo and 2D Spectra Comparison CHAPTER 9 Classification Scheme Using Modified Photometric Stereo and 2D Spectra Comparison 9.1. Introduction In Chapter 8, even we combine more feature spaces and more feature generators, we note that

More information

Evaluations of k-space Trajectories for Fast MR Imaging for project of the course EE591, Fall 2004

Evaluations of k-space Trajectories for Fast MR Imaging for project of the course EE591, Fall 2004 Evaluations of k-space Trajectories for Fast MR Imaging for project of the course EE591, Fall 24 1 Alec Chi-Wah Wong Department of Electrical Engineering University of Southern California 374 McClintock

More information