Robust Airlight Estimation for Haze Removal from a Single Image


Matteo Pedone and Janne Heikkilä
Machine Vision Group, University of Oulu, Finland
{matped,jth}@ee.oulu.fi

Abstract

Present methods for haze removal from a single image require the estimation of two physical quantities which, according to the commonly used atmospheric scattering model, are transmission and airlight. The visual quality of images de-hazed with such methods is highly dependent on the accuracy of the estimation of the aforementioned quantities. In this paper we propose a new method for reliable airlight color estimation that could be used in digital cameras to automatically de-haze images while avoiding unrealistic color artifacts. The main idea of our method is based on novel statistics gathered from natural images regarding frequently occurring airlight colors. The statistics are used to introduce a minimization cost functional which has a closed form solution and is easy to compute. We compare our approach with current methods present in the literature, and show its superior robustness on both images with artificially added haze and real hazy photos.

1. Introduction

When capturing images of real-world environments, there are often many causes that can contribute to an impairment of visibility. In this paper we are mainly interested in problems related to degradations caused by unfavorable atmospheric conditions. The presence in the air of aerosols and water droplets decreases the visibility range due to multiple scattering of light [7]. Depending on the type and concentration of the particles, one can observe mist, haze or fog. One of the visible consequences of this phenomenon is that far objects are less discernible and lack contrast, because they appear to fade gradually to an approximately homogeneous color, frequently called airlight in the literature. A detailed description and mathematical models of this process are presented in [8, 9].

Figure 1. Example of haze removal with different estimated airlight colors. (top) original image; (center) de-hazing result with the airlight color obtained with Fattal's method (three-dimensional search); (bottom) result obtained with the airlight color estimated with our method. Resulting images were tone-mapped to enhance visibility.

De-hazing methods try to recover the original (spectral) radiance for each pixel of an image. This is an ill-posed problem, and authors have proposed different techniques to estimate the required unknown variables. The variables involved are the transmission, which is a function of the relative distances of the scene points from the observer, and the airlight; both contribute to the quality of the final result. Most of the present methods are mainly targeted at improving the quality of the estimated transmission, while often computing only rough estimates of the airlight color. A wrong airlight color can cause

a de-hazed image to look as if it were illuminated by an unrealistic light source (see Figure 1). The color artifacts present in the processed image give an unrealistic appearance to the objects in the scene that are far from the observer. Often such distortions cannot be considered tolerable in applications where a device should present a visually appealing de-hazed image to the end user. Most methods start to manifest this drawback as soon as the assumptions they make are violated, and such assumptions can often be too restrictive, depending on the application. In [8] Narasimhan and Nayar estimate the airlight color relying on the information conveyed by two images of the same scene taken under different atmospheric conditions, which is not very practical. In [11] color constancy algorithms are used in order to circumvent the problem; however, this idea implicitly assumes that the airlight and the main illuminant coincide, which is a good approximation for scenes with a completely overcast sky but is not true in general [7]. In [4] He et al. assume that an area with near-zero transmission is present in the image, which is very restricting, as this assumption is frequently broken in aerial photos. In [2] Fattal makes the less restrictive assumption that there exist at least two small patches in the image whose pixels have approximately equal albedo but contain areas where light is reflected differently; he then proposes two optimization schemes: one consists in finding an RGB triplet that minimizes the squared correlation between the de-hazed image and the transmission, and is essentially a three-dimensional search with gradient descent; the other is a one-dimensional minimization that uses specific geometric constraints. However, he does not discuss any reliable method to find valid patches; moreover, the stability of his approach is sensitive to the goodness of the data contained in the patches. As a result, current methods satisfactorily recover the airlight color only for certain categories of images.

The main contribution of this paper is the development of a method that works reliably on a broader range of images and with less strict assumptions on their content. We present a robust solution that only requires information retrieved from a single image, and introduce a cost function to be minimized which has a closed form solution and is based on novel statistics from natural images. Finally, we show that our method outperforms the other approaches found in the literature, with both real hazy images and artificially degraded images.

2. Attenuation Model and Geometric Constraints

In [8, 9] an applicable model has been proposed for the attenuation of real pixel radiances due to multiple scattering in the atmosphere. Given a three-component RGB vector R(x) representing the spectral radiance of a certain point x in the scene, and another vector A representing the global environmental illumination (the airlight), the resulting attenuated radiance is given by:

J(x) = t R(x) + (1 − t) A    (1)

where J(x) is the observed color value, and the parameter t ∈ [0, 1] is the transmission, an exponentially decaying function of the relative distance from x to the observer. The parameters R, t and A are not known and must be recovered.

Figure 2. Attenuated radiances for two pixels with same albedo and same transmission.
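To make the attenuation model (1) concrete, here is a minimal NumPy sketch (not the authors' code; the function and variable names are ours) that applies the model to a single pixel and inverts it when t and A are known, which is essentially what a de-hazing method does once both quantities have been estimated.

```python
import numpy as np

def attenuate(R, t, A):
    """Apply the scattering model (1): J = t*R + (1 - t)*A."""
    return t * R + (1.0 - t) * A

def invert(J, t, A, t_min=0.05):
    """Recover the scene radiance R from (1) when t and A are known.

    The division is clamped because the model becomes ill-conditioned
    as t approaches zero (the pixel carries almost no scene information).
    """
    return (J - (1.0 - t) * A) / max(t, t_min)

# Example: a reddish surface seen through moderate haze.
R = np.array([0.8, 0.3, 0.2])      # true (unknown in practice) radiance
A = np.array([0.7, 0.75, 0.8])     # airlight color
J = attenuate(R, t=0.4, A=A)       # observed, washed-out color
print(J, invert(J, 0.4, A))        # the second vector recovers R
```

The inversion becomes unstable as t approaches zero, which is one reason why the accuracy of the estimated quantities matters so much for the final image quality.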
In [8, 2] it has been pointed out that (1) suggests some important geometric constraints. In particular, given two points x and x′ with arbitrary corresponding transmissions t and t′, such that R(x) = B l₁ and R(x′) = B l₂ for some real scalars l₁, l₂ and a vector B, one has that J(x), J(x′) ∈ Span{B, Â}. In this context B, l_i and Â are respectively the surface albedo, the amount of reflected light, and the airlight color, represented by the airlight vector normalized to unit ℓ₂ norm. From these observations it immediately follows that different albedos B_i form, together with the airlight vector, two-dimensional subspaces S_i ⊂ R³ such that Â ∈ S_i. The subspaces are obviously planes (Figure 2), so considering only two pairs of linearly independent color vectors {J(x), J(x′)}, {J(y), J(y′)} with equal albedos, and computing the respective unit vectors n̂_x, n̂_y normal to each vector pair, one could theoretically retrieve the airlight color Â as follows:

Â = n̂_x × n̂_y    (2)

However, there are practical problems that must be addressed, and they are all discussed in the following sections. In the next section we describe one method to extract suitable image patches with the same albedo, to be used for estimating the airlight color.

3. Extracting the Image Patches

As pointed out in the previous section, given two pairs of pixels with the same albedo but different amounts of reflected light, it is possible to obtain the airlight color (see the sketch below).
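A minimal sketch of the cross-product construction in (2), assuming noise-free observations that exactly follow model (1); the helper names and the example albedos and airlight below are purely illustrative.

```python
import numpy as np

def plane_normal(J1, J2):
    """Unit normal of the plane spanned by two observed color vectors."""
    n = np.cross(J1, J2)
    return n / np.linalg.norm(n)

def airlight_from_two_pairs(pair_x, pair_y):
    """Equation (2): A_hat = n_x x n_y, up to sign and normalization."""
    n_x = plane_normal(*pair_x)
    n_y = plane_normal(*pair_y)
    A = np.cross(n_x, n_y)
    A /= np.linalg.norm(A)
    return A if A.sum() > 0 else -A   # airlight components are non-negative

# Noise-free example: two albedos B1, B2 and a known airlight A_true.
A_true = np.array([0.65, 0.7, 0.8]); A_true /= np.linalg.norm(A_true)
B1, B2 = np.array([0.9, 0.4, 0.2]), np.array([0.2, 0.6, 0.3])
obs = lambda B, l, t: t * l * B + (1 - t) * A_true   # model (1) with R = l*B
pair_x = (obs(B1, 1.0, 0.6), obs(B1, 0.5, 0.8))
pair_y = (obs(B2, 1.0, 0.7), obs(B2, 0.4, 0.9))
print(airlight_from_two_pairs(pair_x, pair_y))        # approximately A_true
```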

At this point, one must provide reliable criteria to identify constant-albedo patches which manifest a non-constant amount of reflected light (e.g. shadows or highlights). As illustrated in Figure 2, (1) dictates that one cannot directly deduce the albedo merely from the observed color values of J, because they are influenced by the airlight color. At this initial stage one is forced to provide an approximate value for Â; we use the RGB color vector w = (1/√3)(1, 1, 1). This assumption allows one to easily find an invariant to attenuation which has a convenient form. In fact, given two orthogonal unit vectors î, ĵ such that î × ĵ = Â, and considering p(x) = Proj_{î,ĵ}(J(x)), the orthogonal projection of J(x) onto the plane spanned by î and ĵ, it is easy to verify from (1) that

p(x) = k⟨R(x), î⟩ î + k⟨R(x), ĵ⟩ ĵ    (3)

for some real scalar k. By normalizing p(x) one sees that the hue angles of J depend neither on t nor on Â = w, hence they are (approximately) invariant to attenuation. However, note that hue values are useful only when the pixel saturation is not close to zero. Since the luminance channel of the image can be directly used for finding shadowed and highlighted features, one simply needs to find patches with uniform hue direction but non-uniform luminance. In this context we define the hue direction as the complex quantity e^{i2θ}, where θ is the hue angle. We randomly select a maximum of 2 patches Ω_i of size 9×9 pixels which pass the following test:

                 Mean             Variance
   Ω_hue2        -                < 0.1
   Ω_sat         > 0.01           < 0.01
   Ω_lum         > 0.2, < 0.9     < 0.01
   Ω_dark        > 0.2            <

These thresholds have been chosen manually, and they did not turn out to be critical for the quality of the final results. Note that Ω_hue2 is a complex quantity encoding the double hue angle as described above; also recall that the dark channel of an image J(x) is given by [4] as

J^dark(x) = min_{c ∈ {r,g,b}} ( min_{z ∈ Ω(x)} J_c(z) )    (4)

where the subscript denotes the respective color channel of the image, and that the dark channel can be directly used to estimate the transmission [4]. The last threshold needs further explanation. In natural images there is a high chance that pixels with similar color represent points of the scene located at similar or equal depth; this assumption is often used by stereo matching algorithms [5]. When this is the case, the pixels inside a selected patch are associated with the same value of transmission, which is desirable. However, one needs to avoid extracting patches from areas where the transmission is close to 0 or 1. The reason will become clear at the end of this section, and is related to the fact that computing the normal vector to the plane spanned by two vectors that almost coincide is unstable.

Figure 3. Hue and saturation of the airlight color of 107 real images. The dashed line represents the direction of the eigenvector corresponding to the largest eigenvalue of the covariance matrix of the distribution.

Once the patches with the same albedo are extracted, the pixel values of each of them are theoretically constrained to be coplanar, as discussed in Section 2, so an estimate of the associated normal vector n is given by:

argmin_n Σ_{z ∈ Ω(x)} ⟨n, J(z)⟩² + δ ⟨n, Â_est⟩²,  such that ‖n‖₂ = 1    (5)

where Â_est is an initial rough estimate of the airlight color that can be quickly obtained as described in [4], by selecting one pixel of highest intensity in the corresponding brightest area of the dark channel of the image, or alternatively by choosing the pixel with the highest intensity in the image as in [8, 2]. The right-most term penalizes estimated normals that are not sufficiently perpendicular to the initial airlight color estimate Â_est.
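One concrete way to minimize the quadratic functional in (5), sketched below under our own implementation assumptions: with the unit-norm constraint, the minimizer is the eigenvector associated with the smallest eigenvalue of the 3×3 matrix Σ_z J(z)J(z)ᵀ + δ Â_est Â_estᵀ, which is equivalent to the SVD-based route mentioned below.

```python
import numpy as np

def patch_normal(patch_rgb, A_est, delta):
    """Minimize sum_z <n, J(z)>^2 + delta*<n, A_est>^2 s.t. ||n||_2 = 1.

    patch_rgb: (num_pixels, 3) array of the observed colors in one patch.
    A_est:     rough initial airlight estimate (unit-norm 3-vector).
    Returns the unit normal of the plane best fitting the patch colors,
    biased towards being perpendicular to A_est by the penalty term.
    """
    J = np.asarray(patch_rgb, dtype=float)
    S = J.T @ J + delta * np.outer(A_est, A_est)   # 3x3 scatter + penalty
    eigvals, eigvecs = np.linalg.eigh(S)           # ascending eigenvalues
    return eigvecs[:, 0]                           # smallest-eigenvalue direction
```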
The quadratic functional in (5) can be easily minimized by standard methods based on singular value decomposition, for example. We will illustrate in Section 4.2 how the obtained values of n̂ for each patch can be used to compute the airlight color estimate according to (2). We always keep δ fixed to the same value. In the next section we show how the airlight color Â can be estimated robustly, assuming N ≥ 2 patches are found.

4. Robust Airlight Color Estimation

In this section we discuss a novel strategy to obtain robust estimates of airlight colors.

Figure 4. Results for artificially added haze on the Middlebury database images. (left) Mean angular error for haze amounts corresponding to the scattering coefficient β; (right) standard deviation of the collected angular errors. The compared methods are the proposed one, Fattal's three-dimensional and one-dimensional minimizations, He's method, and the gray-world algorithm.

4.1. Statistics for Airlight Colors

We observed that extracting good patches and solving the minimization problems described in Section 3 often yielded unrealistic results. This is due to the sensitivity of the method to the accuracy of the normals estimated in (5); we noticed that such inaccuracies can result in an unrealistic estimated airlight color. We address this problem by enforcing the minimization with an additional constraint based on natural images. We collected 107 real photos from flickr.com portraying hazy scenes taken during both daylight and twilight, and manually extracted from each of them a 2x2-pixel patch of the sky; we then averaged the RGB values of all the pixels contained in it, and computed the resulting hue and saturation. Such values are potentially distorted by the different white-balance settings of the cameras; on the other hand, the standard illuminants theoretically suitable for those lighting conditions (e.g. D50, D55, D75, ...) have chromaticity coordinates close to the sRGB white point [6], and approximately lie on the same hue line. A plot of the collected data is shown in Figure 3. We fitted a Gaussian distribution to the collected samples in the hue/saturation plane, obtaining a covariance matrix with eigenvalues 0.4982 and 0.8671; the eigenvector associated with the larger eigenvalue defines the dominant direction shown dashed in Figure 3. This supports the observation that the tonalities of airlight colors in hazy conditions tend to scatter fairly closely around a common hue angle, and that a high percentage of the samples falls into a narrow area with low saturation. Nonetheless, a further inspection of the plot in Figure 3 suggests there is also a fair amount of highly saturated samples scattered far from the main direction; these occurrences are due to the rapid changes in hue and saturation manifested by the sky during a sunset [1].

4.2. Closed Form Solution

The statistics presented above suggest that an inexpensive way of obtaining more robustness is to first introduce a penalty for estimates that are not reasonably close to the main hue direction (Figure 3), which in RGB space corresponds to a plane passing through the origin with normal n̂_sky (whose red component is 0.981). On the other hand, it is reasonable to include an additional penalty for overly saturated estimates, though this makes sense mainly for daylight conditions. Since the airlight color is constrained to lie on the unit sphere of the RGB space, a convenient measure of saturation is the squared Euclidean distance between the estimate and the point w = (1/√3)(1, 1, 1). We then propose the following objective function to be minimized:

argmin_Â Σ_i c(n_i) ⟨Â, n_i⟩² + λ ⟨Â, n̂_sky⟩² + γ ‖Â − w‖₂²,  such that ‖Â‖₂ = 1    (6)

The parameters λ and γ control the multiple trade-off between closeness to the sky-colors plane (represented by n̂_sky), closeness to the planes spanned by the pixel values in the N patches (the n_i terms), and the amount of saturation. Note that we also associate each estimated normal n with a certainty scalar given by

c(n) = exp( −ρ |Ω(x)|⁻¹ Σ_{z ∈ Ω(x)} ⟨n, J(z)⟩² )    (7)

where ρ = 10^4. The certainty scalars are useful to prevent the proposed thresholds from having a large impact on the final estimate.
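To illustrate how (6) and (7) fit together, the sketch below assembles Q = MᵀWM from the patch normals, their certainty weights and the sky normal, and then minimizes ÂᵀQÂ − 2γwᵀÂ on the unit sphere (the rewriting derived in the next paragraph). Rather than the sparse-system solution of [3] used in the paper, it solves the Lagrange condition (Q − kI)Â = γw through its scalar secular equation; this is our own illustrative choice, and the parameter values are placeholders, not the paper's defaults.

```python
import numpy as np
from scipy.optimize import brentq

def certainty(patch_rgb, n, rho=1e4):
    """Certainty weight of equation (7): down-weights normals that fit
    their patch poorly.  rho here is a placeholder constant."""
    J = np.asarray(patch_rgb, dtype=float)
    return np.exp(-rho * np.mean((J @ n) ** 2))

def estimate_airlight(normals, certainties, n_sky, w, lam, gamma):
    """Minimize A^T Q A - 2*gamma*w^T A subject to ||A||_2 = 1, with
    Q = M^T W M built as in the paper (rows of M: n_i^T and n_sky^T,
    W = diag(c(n_1), ..., c(n_N), lambda)).

    Solution sketch: the Lagrange condition is (Q - k*I) A = gamma*w.
    With Q = V diag(d) V^T and b = gamma * V^T w, enforcing ||A|| = 1
    gives the scalar equation sum_i b_i^2 / (d_i - k)^2 = 1, whose root
    below min(d) yields the global minimizer.  Assumes gamma > 0 and
    that w is not orthogonal to the eigenvector of the smallest
    eigenvalue (true in practice, since w has all-positive entries).
    """
    M = np.vstack(list(normals) + [n_sky])
    W = np.diag(list(certainties) + [lam])
    Q = M.T @ W @ M
    d, V = np.linalg.eigh(Q)                  # eigenvalues sorted ascending
    b = gamma * (V.T @ w)

    g = lambda k: np.sum((b / (d - k)) ** 2) - 1.0
    scale = max(1.0, abs(d[0]))
    hi = d[0] - 1e-8 * scale                  # g(hi) is large and positive
    lo = d[0] - scale
    while g(lo) >= 0.0:                       # g tends to -1 as k -> -inf
        lo -= 2.0 * (d[0] - lo)
    k = brentq(g, lo, hi)
    A = V @ (b / (d - k))
    return A / np.linalg.norm(A)              # guard against round-off
```

In a full pipeline, `normals` and `certainties` would come from the per-patch minimization of (5) and from (7), and the result can be checked against the closed-form route of (8)–(9) below.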

The objective function can be re-written in the form

min_Â ÂᵀQÂ − 2γwᵀÂ,  such that ÂᵀÂ = 1

where Q = MᵀWM; the i-th row of M is the vector n_iᵀ, while the last row is n̂_skyᵀ, and W = diag(c(n_1), ..., c(n_N), λ). This constrained minimization leads to a quadratic eigenvalue problem, and it can be efficiently solved via a sparse linear system as described by Gander et al. in [3]; first, the following polynomial eigenvalue problem P(Q, k)x = 0 must be solved:

(k²I − 2k(QQᵀ) + (QQᵀ)² − 4γ²wwᵀ) x = 0    (8)

and once the eigenvalue k is obtained, the airlight color is given by the solution of the linear system

(QQᵀ − kI)Â = 2γw    (9)

We use fixed default values for the parameters λ and γ.

Figure 5. Example of failure with the default parameters. (left) original image; (center) de-hazed with the airlight color obtained with the default parameters; (right) result obtained by manually adjusting the saturation penalty γ.

5. Experiments

We compared our proposed method with the approach of He et al. [4], the three-dimensional and one-dimensional minimizations proposed by Fattal [2], and a color constancy method with the gray-world assumption as used by Tan [11]. We tested our method with both artificially added haze and real hazy photos. After airlight color estimation, the actual airlight radiance is obtained using the strategy discussed in [2], which consists in finding a radiance value that minimizes the squared correlation between the de-hazed image and the transmission in small uniform-albedo patches; finally, we remove the haze in all images using He's algorithm [4].

In the experiment with synthetic haze, we used 10 images from the Middlebury stereo databases [10]; the images used in the experiment are identified in the Middlebury databases by the names Art, Books, Cones, Dolls, Laundry, Moebius, Midd1, Baby1, Monopoly and Reindeer, and were resized to a common resolution. The respective depth maps provided in the database were used to produce hazy versions of the images according to (1); the values for A were set to the airlight colors Â_gt we manually collected, as described in Section 4.1, while the transmission for a given pixel was set to t = e^{−β(1 − d)}, where d is the value of the depth map and β is the scattering coefficient that controls the amount of visible haze. Some of the degraded images are shown in Figure 6. The accuracy of the recovered airlight color Â was measured by the angular error cos⁻¹⟨Â, Â_gt⟩. We performed 1070 comparisons, given by the 10 images degraded with all 107 haze colors we collected. The results are reported in Figure 4.

In the described experiment, the proposed method outperforms the other ones in terms of angular error mean and standard deviation for most haze levels. For higher amounts of haze (β > 8) the simple gray-world method yielded more accurate results. This is partly explained by the fact that when haze becomes very dense, the scene is expected to be dominated by the spectral radiance of the airlight. However, all the images of the Middlebury database depict indoor scenes, and most of them contain a considerably large number of pixels with neutral colors (mostly walls, papers, books, white cardboard, etc.); we believe this could have led to an overestimation of the performance of the gray-world algorithm, which we found often unsuitable for real hazy images.

The results for real hazy photos are shown in Figure 7. Current methods in the literature often yield an unrealistic color cast in the de-hazed areas which is not present with our proposed approach; see Figures 1 and 7.
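For reference, here is a short sketch of the synthetic-haze protocol and the angular-error metric described above, with our own function names (the resizing and tone-mapping details of the actual experiments are omitted):

```python
import numpy as np

def add_synthetic_haze(image, depth, A_gt, beta):
    """Degrade an image with model (1), using t = exp(-beta*(1 - d)).

    image: (H, W, 3) clean image in [0, 1];  depth: (H, W) normalized depth map.
    A_gt:  ground-truth airlight color;      beta:  scattering coefficient.
    """
    t = np.exp(-beta * (1.0 - depth))[..., None]
    return t * image + (1.0 - t) * A_gt

def angular_error(A_est, A_gt):
    """Angle (in radians) between estimated and ground-truth airlight colors."""
    A_est = A_est / np.linalg.norm(A_est)
    A_gt = A_gt / np.linalg.norm(A_gt)
    return np.arccos(np.clip(np.dot(A_est, A_gt), -1.0, 1.0))
```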
It is fairly evident in the left-most image of Figure 7 that a visible color cast is present in almost all of the de-hazed results, whereas our method managed to recover an accurate airlight color and to produce an image that is free of unrealistic color artifacts. Also in the results for the second image of Figure 7 it is possible to observe color artifacts in the upper area, and we believe our method yielded more neutral results. Moreover, although He's method was able to produce results comparable to our algorithm for the two right-most images, it failed to recover a sufficiently accurate airlight color in the two left-most tests. As there is no general criterion to choose an appropriate method for a given arbitrary image, relying on a more stable method is often the preferred choice. The superior stability of our method is also confirmed by

the lower standard deviation of the angular error in the experiment with artificial haze.

We noticed in a few circumstances, when the haze color saturation was high, that the default values for the parameters in (6) did not yield good results, and we had to manually adjust the saturation penalty γ in order to obtain visually acceptable images (Figure 5). Nonetheless, we found this situation to occur mainly in photos taken during sunset, in which airlight colors are typically characterized by a relatively high level of saturation.

All the experiments were run with a Matlab implementation of the proposed algorithm. For the largest images used in our experiments, it took 1.44 seconds to obtain the airlight estimate on a 2.66 GHz processor with 1 GB of RAM. We also tried downsampling the large images, achieving a computation time of 0.65 seconds with no loss of quality in the de-hazed image. Once the airlight estimate is obtained, de-hazing an image with [4] took approximately 5.5 seconds; however, 4.95 seconds of the running time were due to the soft-matting algorithm used to refine the estimated transmission map. Real-time soft-matting algorithms using the GPU are beginning to appear in the literature.

6. Conclusion

In this paper we considered the problem of estimating the airlight in haze removal algorithms. We first showed how wrong values of the airlight can cause an unrealistic color cast in a de-hazed image, and then discussed the limitations of present methods found in the literature. Novel statistics for frequently occurring airlight colors in hazy conditions were extracted from real images. Such statistics were then used to design a new robust solution for computing the hue of the airlight color. We performed experiments with both artificially added haze, where the ground-truth haze colors were available, and real images. In both cases, we showed that our method outperforms the other approaches present in the literature, and produces satisfactory results also in difficult scenarios. While other methods might still be able to produce equally satisfactory results for some images, ours proved to behave consistently over a greater number of different photos.

Figure 6. Some images from the Middlebury set degraded with artificial haze and scattering factor β ranging from 1 to 9.

References

[1] E. Bruneton and F. Neyret. Precomputed atmospheric scattering. Proceedings of the 19th Eurographics Symposium on Rendering, 2008.
[2] R. Fattal. Single image dehazing. ACM Transactions on Graphics (Proc. ACM SIGGRAPH), 2008.
[3] W. Gander, G. H. Golub, and U. von Matt. A constrained eigenvalue problem. Linear Algebra and its Applications, 815-839, 1989.
[4] K. He, J. Sun, and X. Tang. Single image haze removal using dark channel prior. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[5] H. Hirschmüller and D. Scharstein. Evaluation of cost functions for stereo matching. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007.
[6] HunterLab. Equivalent white light sources, and CIE illuminants. Application note an05_05.pdf.
[7] M. Minnaert. The Nature of Light and Color in the Open Air. Dover, New York, 1954.
[8] S. G. Narasimhan and S. K. Nayar. Vision and the atmosphere. International Journal of Computer Vision, 48(3):233-254, 2002.
[9] S. K. Nayar and S. G. Narasimhan. Vision in bad weather. Proceedings of the Seventh IEEE International Conference on Computer Vision (ICCV), 1999.
[10] D. Scharstein and R. Szeliski. High-accuracy stereo depth maps using structured light. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2003.
[11] R. Tan. Visibility in bad weather from a single image. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.

Figure 7. Results with real images. (first row) original hazy images; (second row) de-hazed with the airlight color obtained by a color constancy algorithm with the gray-world assumption; (third row) with Fattal's three-dimensional minimization; (fourth row) with Fattal's one-dimensional minimization; (fifth row) with He's method; (last row) with the proposed method.
