A Novel Approach for Shadow Removal Based on Intensity Surface Approximation


A Novel Approach for Shadow Removal Based on Intensity Surface Approximation

Eli Arbel

THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE MASTER DEGREE

University of Haifa
Faculty of Social Sciences
Department of Computer Science

March, 2009

By: Eli Arbel
Supervised By: Dr. Hagit Hel-Or

Approved by: (Supervisor) Date:
Approved by: (Chairperson of M.A Committee) Date:

Contents

List of Figures
Abstract

1 Introduction
  1.1 Why shadow removal?
  1.2 Thesis contribution
  1.3 Thesis outline

2 Problems and Challenges in Shadow Removal
  2.1 Physical Phenomena
    2.1.1 Shadow Intensity
    2.1.2 Umbra and Penumbra
    2.1.3 The Light Source
  2.2 Scene Characteristics
    2.2.1 Self Shadows and Shading
    2.2.2 Complexity of the Shadowed Surface
    2.2.3 Geometry of the Shadowed Surface
    2.2.4 Intersection of Shadow and Reflectance Boundaries
  2.3 Image Acquisition and Processing
    2.3.1 Capturing Device
    2.3.2 Post-processing

3 Background
  3.1 Related work
    3.1.1 Automatic Shadow detection methods
    3.1.2 Gradient-based shadow removal methods
    3.1.3 Intensity-based shadow removal methods
  Markov Random Fields
  Histogram Specification
  Thin-plate Models
  Image Formulation

4 Approach and Methods for Shadow Removal
  4.1 Removing uniform shadows
    Introduction
    Limitations of Existing Approaches
    Calculating uniform-shadow scale factor
    An Efficient Algorithm for Scale Factor Derivation
  4.2 Removing Non-uniform Shadows
    Introduction
    Intensity Surface Approximation
    Determining Smooth Scale Factors in the Umbra
  4.3 Determining Penumbra Scale Factors

5 Shadow-free Region Enhancement

6 Umbra, Penumbra and Surround masks derivation
  6.1 User-guided shadow masks derivation
  6.2 Labeling penumbra pixels

7 Experimental Results
  7.1 Uniform Shadow results
  7.2 Non-uniform Shadow results
  7.3 Enhancement results and Comparison

8 Conclusion and Future Work

Bibliography

A Implementation details of our shadow removal algorithm
  A.1 Flow diagram of the entire approach
  A.2 Penumbra labeling algorithm

List of Figures

2.1 Examples of uniform and non-uniform shadows
2.2 Umbra and penumbra of a shadow
2.3 Varying width of shadow penumbra
2.4 Examples of different penumbra profiles
2.5 Examples of problems in shadow removal
2.6 Shadow penumbra reconstruction
2.7 Example of problems with gradient domain shadow removal from curved surface
2.8 Example of intensity clipping in shadow pixels
2.9 The effect of image processing on shadow-free regions
3.1 Intensity surfaces
4.1 Illustration of classical gradient-based shadow removal compared to our approach
4.2 Umbra and penumbra masks
4.3 Finding shadow scale factor using splines
4.4 Reconstructing the umbra
4.5 Examples of removing non-uniform shadows with global (uniform) scale factors
4.6 Masks of a shadow image
4.7 An example of an approximated shadow-free intensity surface

4.8 Shadow removal result when smoothing the scale factors image naively
4.9 Calculating smooth umbra scale factors using anchor pixels
4.10 Calculating penumbra scale factors
4.11 Shadow removal from an image containing noise in the shadow region
5.1 An example for performing shadow-free region enhancement without preserving local mean values
6.1 Examples of user-guided extraction of shadow masks based on region-growing and SVM
6.2 The effect of incorrect labeling of penumbra pixels
6.3 Penumbra pixels labeling using MRF
7.1 Experimental results of the method described in Section
7.2 Shadow removal using the method suggested in Section
7.3 Shadow removal from textured surface
7.4 Removing non-uniform shadow cast on flat surface
7.5 Complex non-uniform shadow with soft shadow regions
7.6 Removing non-uniform shadow cast on curved surface
7.7 Removing non-uniform shadow cast on curved surface
7.8 Highlight removal example
7.9 Shadow-free region enhancement example
7.10 Shadow-free region enhancement example
7.11 Comparison of the proposed method with the method of [52]
7.12 Comparison of the proposed method with the method in [35]
A.1 Flow diagram of our algorithm

ABSTRACT

Removal of shadows from a single image is a challenging problem. Producing a high-quality shadow-free image which is indistinguishable from a reproduction of a true shadow-free scene is even more difficult. Shadows in images are typically affected by several phenomena in the scene, such as lighting conditions, the type and behavior of shadowed surfaces, occluding objects, etc. Additionally, shadow regions may undergo post-acquisition image processing transformations, e.g., contrast enhancement, which may introduce noticeable artifacts in the shadow-free images. Several approaches to shadow removal in color images have been introduced in recent years. We argue that the assumptions introduced in most studies arise from the complexity of the problem of shadow removal from a single image and limit the class of shadow images that can be handled by these methods. Previously proposed methods fail to handle some of the fundamental issues in shadow removal: removing non-uniform shadows and handling image processing transformations which can significantly affect the final shadow-free result. The purpose of this work is twofold: first, it provides a comprehensive survey of the problems and challenges which may occur when removing shadows from a single image. Second, a novel approach for removing shadows from a single color image is presented. Each color channel of the image is considered as an intensity surface. Shadow intensities can then be determined by approximating the shape of the intensity surface. A thin-plate model is

used to approximate the surface. When removing uniform shadows, cubic smoothing splines (i.e., a 1D thin-plate model) are used in order to estimate the correct scale factor of the shadow in umbra regions. Removal of non-uniform shadows is carried out using a 2D thin-plate approximation of the intensity surface in umbra regions. This model enables capturing and approximating the shape of flat as well as curved intensity surfaces, in the case of both uniform and non-uniform shadows. Therefore, the proposed approach is robust in removing shadows cast on flat as well as curved surfaces. Adoption of this approximation approach is also advantageous in handling wide penumbra regions, a traditional weak spot of many existing shadow removal algorithms, while retaining the textural content of the image without requiring explicit modeling of the shadow penumbra. In addition, we propose a shadow-free region enhancement method which can effectively handle post-acquisition image processing transformations that may affect the final shadow-free result, thus increasing the robustness of the proposed shadow removal algorithm. A variety of experimental results is given in this work, demonstrating the capabilities and robustness of the proposed approach.
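The thin-plate approximation at the heart of this approach can be illustrated with a small sketch (a hypothetical toy example, not the thesis implementation): fit a thin-plate spline to intensity samples taken from lit pixels, then evaluate it at umbra locations to estimate the shadow-free surface there.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Toy illustration: approximate a smooth intensity surface with a thin-plate
# spline fitted to known (non-shadow) samples, then evaluate it at unknown
# (umbra) positions. The surface here is an invented smooth function.
rng = np.random.default_rng(0)
xy_known = rng.uniform(0, 10, size=(200, 2))           # lit pixel coordinates
z_known = 0.5 * xy_known[:, 0] + 0.2 * xy_known[:, 1]  # smooth "intensity surface"

tps = RBFInterpolator(xy_known, z_known,
                      kernel="thin_plate_spline", smoothing=0.0)

xy_umbra = np.array([[5.0, 5.0], [2.0, 8.0]])          # positions inside the umbra
z_est = tps(xy_umbra)
print(z_est)  # close to [3.5, 2.6], the true surface values
```

In the thesis the smoothing term would be nonzero, trading exact interpolation for robustness to noise; this sketch uses `smoothing=0.0` only to keep the example checkable.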

Chapter 1
Introduction

1.1 Why shadow removal?

Shadows are an integral part of many natural images. While shadows, and in particular cast shadows, can provide valuable information about an acquired scene, e.g., cues for spatial layout and surface geometry [34], they can also pose difficult problems and limitations for various computer vision algorithms. Segmentation algorithms can be significantly affected by the presence of shadows in images, as abrupt changes in color may introduce spurious segments on coherent surfaces. Image recognition algorithms might be affected as well by illumination changes and shadows in particular (e.g., [32]). In addition, object tracking algorithms, e.g., for cars and pedestrians, may be confused by the presence of shadows and yield false object contours (see [43]). For these reasons, shadow removal, whether from a video stream or a single still image, is an important research problem, and developing an effective shadow removal algorithm can help improve the results of other fundamental algorithms in computer vision if applied as a pre-processing step. Additionally, shadow removal might be desired from an aesthetic perspective, that is, for improving image appearance.

Many shadow removal and illumination invariance algorithms aim at removing shadows for the purpose of enhancing the image as a pre-processing step, e.g., prior to segmentation or recognition. As a pre-processing step, these algorithms may use various assumptions related to the specific algorithm they are designed to enhance. For example, in [29], the authors describe a method for shadow and highlight removal intended to help segmentation algorithms. The purpose of this algorithm is to make coherent surfaces appear similar, regardless of whether they include shadow or highlight regions. However, the shadow-free images produced by this method seem unnatural, since the shadow regions are simply assigned the color of non-shadow neighboring regions of the same material. Another example is the algorithm described in [32], which is used to compensate for illumination changes in facial images prior to face detection algorithms. As with the previous example, the result of this algorithm is unnatural-looking images.

In this work, we concentrate on the problem of shadow removal from a single color image. Given a shadow image, the ultimate goal is to produce a high-quality shadow-free image which would seem to have been taken in the same scene but without shadows. This implies producing a shadow-free image while maintaining the original local and textural information within shadow regions and penumbrae. The ability to produce photographic-quality shadow-free images allows our proposed approach to be used not only for enhancing shadow images, but also as a pre-processing step for some of the computer vision algorithms described above.

1.2 Thesis contribution

The main contributions of this work are summarized in this section.

- This thesis contains a comprehensive survey of the problems and challenges related to shadow removal. To the best of our knowledge, this is the first time such a survey is given in the literature, and it is of high importance in understanding the problem of shadow removal and developing shadow-removal algorithms.

- A novel approach for shadow removal is suggested. This shadow removal approach is capable of producing high-quality results from images containing uniform or non-uniform shadows, cast on curved and textured surfaces and with wide penumbra regions.

- A method for enhancing shadow-free regions is presented in this work. This enhancement process significantly increases the robustness of the proposed shadow removal algorithms, and in many cases is essential for obtaining shadow-free images of high quality. This enhancement method is independent of the shadow removal process itself and thus can be used as a post-processing step in any shadow removal algorithm.

- A method for labeling penumbra pixels using a Markov Random Field (MRF) process is presented. Labeling penumbra pixels is an important step in the proposed shadow removal approach and essential in many cases for obtaining good results.

- An effective method for extracting shadow masks based on user input is presented in this work. The method is based on region-growing and shadow pixel classification using a Support Vector Machine (SVM).

1.3 Thesis outline

The outline of this thesis is as follows. In Chapter 2 we give a comprehensive survey of the various problems and challenges related to the task of shadow removal. Chapter 3 contains related work and background to shadow removal. Our approach for shadow removal is described in Chapter 4. The approach includes a method to remove uniform shadows (Section 4.1), a method to remove non-uniform shadows (Section 4.2) and a method to handle penumbra regions of shadows (Section 4.3). A shadow-free region enhancement process

is described in Chapter 5. Algorithms for extracting umbra and penumbra regions in shadow images are presented in Chapter 6. Finally, experimental results are given in Chapter 7 and concluding remarks in Chapter 8.

Chapter 2
Problems and Challenges in Shadow Removal

In this chapter we enumerate the various problems and challenges related to the task of shadow removal. It is worth noting that a given shadow image does not necessarily include all the phenomena mentioned below, and indeed in many of the images we explored only a subset of the phenomena occurs. However, in order to effectively handle shadow images taken under different conditions and of different scene types, a robust shadow removal algorithm should account for the various types of possible phenomena which may affect the final result.

2.1 Physical Phenomena

Physical phenomena occur in the physical world and obviously affect the digital representation of the scene.

2.1.1 Shadow Intensity

A shadowed surface is the part of a surface which is occluded from at least one direct light source in the scene. As a result, a reduction in light intensity is observed in shadow regions. Many methods attempt to remove shadows by first estimating (either explicitly or implicitly) the amount of intensity reduction in the shadow region (the shadow intensity) and deducing the corresponding shadow scale factors. The shadows are then removed by applying the inverse transformation on the shadow regions according to the shadow scale factors. Two possible cases may be considered with respect to shadow intensity: the first is where the shadow intensity is uniform in the shadow region, resulting in a uniform shadow. The second case is where shadow intensities vary across a shadow region, yielding a non-uniform shadow. See examples in Figure 2.1.

Figure 2.1: Examples of uniform and non-uniform shadows. First row: Shadow images. Second row: Pixel intensities (of the green channel) of the cross-sections in each image. (a) Uniform shadow cast on a flat surface. (b) Non-uniform shadow cast on a flat surface. Note that the shadow is darker in the right part of the image. (c) Uniform shadow cast on a curved surface. The geometry of the surface is preserved in the shadow region. (d) Non-uniform shadow cast on a curved surface. The intensity change is inconsistent with the curvature of the surface.

The phenomenon of varying shadow intensities usually occurs due to ambient light and is most common in scenes where the occluding object is close to

the shadowed surface, thus less ambient light reaches the inner regions of the shadow than the outer parts. Inter-reflections are another source of non-uniformity of shadows and can be caused by the occluding object itself or by other objects in the scene. Determining the shadow intensity usually involves estimation of the shadow scale factor. In the case of a uniform shadow, the scale factor is a single unknown; however, in the case of a non-uniform shadow, the scale factor is spatially varying and a per-pixel estimate must be determined. Figure 2.5a shows the effects of incorrectly assuming a uniform shadow when attempting to remove a non-uniform shadow.

2.1.2 Umbra and Penumbra

A shadow region can be partitioned into umbra and penumbra regions. Figure 2.2 illustrates the formation of the shadow umbra and penumbra. The umbra of a shadow is the part of the shadowed surface in which the direct light source is completely obscured by the occluding object. The penumbra of a shadow is the part of the surface where the light source is only partially occluded. Shadow intensities typically change smoothly in penumbra regions when transitioning from the umbra to the non-shadowed region of the surface. Penumbra occurs when the light source is not a point source (as in Figure 2.2) or due to diffraction of light rays caused by the occluding object [26, 37]. Regardless of whether the shadow intensity is uniform or not in the umbra, by definition, shadow intensities vary in penumbra regions. The width of the penumbra, as well as the rate of illumination change across the penumbra, vary within a given shadow region and among different shadow regions (consider Figures 2.3 and 2.4). In some cases the penumbra width is very small and difficult to detect in digital images, in which case the penumbra is referred to as a hard shadow edge. However, in many natural images the penumbra is noticeably wide, and thus special handling in the shadow removal process might be required.
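The scale-factor formulation described above can be sketched as follows (a hypothetical toy example, not the algorithm proposed later in this thesis): shadowed pixels are divided by their estimated attenuation, which is a single scalar for a uniform shadow and a per-pixel map for a non-uniform one.

```python
import numpy as np

def remove_shadow(image, shadow_mask, attenuation):
    """attenuation: scalar for a uniform shadow, per-pixel array otherwise."""
    att = np.broadcast_to(np.asarray(attenuation, dtype=float), image.shape)
    out = image.astype(float).copy()
    out[shadow_mask] /= att[shadow_mask]   # inverse of the shadow transformation
    return out

surface = np.full((4, 4), 100.0)           # true shadow-free surface
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                      # shadow region
shadowed = surface.copy()
shadowed[mask] *= 0.25                     # uniform shadow, scale factor 0.25
restored = remove_shadow(shadowed, mask, 0.25)
print(restored)                            # recovers the flat 100-valued surface
```

The hard part, of course, is estimating the attenuation in the first place; for a non-uniform shadow a per-pixel map must be passed instead of the scalar.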

Figure 2.2: Umbra and penumbra of a shadow.

Figure 2.3: Varying width of shadow penumbra. Left: Shadow image. Right: Plot of penumbra width along the shadow boundary, in the direction depicted by the arrows in the left image.

2.1.3 The Light Source

The type and shape of the light source is another factor which may influence shadow removal algorithms. Algorithms that assume a certain spectral power distribution, for instance a Planckian light source [18], may fail in handling indoor shadow images acquired under artificial illumination. In cases where the light source is not a point source or more than one light source exists, as

can occur in indoor images, for example, complex soft shadows may appear that affect shadow removal algorithms considerably.

Figure 2.4: Examples of different penumbra profiles.

2.2 Scene Characteristics

In addition to the physical phenomena described above, the nature of the acquired scene and the objects in it can also play a crucial role in shadow removal. While it is fairly reasonable to assume certain behavior of illumination, and subsequently of shadows, e.g., that shadow intensities inside the umbra are locally constant ([49, 52]), it is unrealistic to assume global behavior of all possible scenes. This implies that in the context of shadow removal, the complexity of a shadow image derives essentially from the complexity of the acquired scene. The type of surfaces in the scene, object geometry and configuration, and their interaction with the light source all influence shadow removal algorithms. In this section we describe how different scene types may affect shadow removal algorithms and contribute to the complexity of the problem.

Figure 2.5: Examples of problems in shadow removal. (a) Removing a non-uniform shadow assuming uniform shadow intensity. (b) Absence of self-shading in the shadow-free region. (c) Enhancement of noise in the shadow-free region. (d) JPEG artifacts.

2.2.1 Self Shadows and Shading

Shadows and shading take prominent roles in our visual understanding of the world. They supply numerous cues which assist in depth perception, object positioning [34], etc. Our perception of object geometry is also greatly affected by shading and illumination. In particular, it is self shading which gives us the strongest cues about object geometry. Since self shadows and shading usually arise from a direct light source and rarely from ambient light, they are absent in shadowed regions where the light source is occluded, causing information loss in those areas. Removing the shadow does not restore the shading cues, since the information is inherently missing in the original shadow image. In these cases an unnatural shadow-free image is often produced. An example is shown in Figure 2.5b.

2.2.2 Complexity of the Shadowed Surface

An inherent goal of any shadow removal algorithm is to produce shadow-free images of high quality in which the reconstructed shadow regions of a surface appear similar to the non-shadow regions of the same surface. Several factors related to the surfaces on which the shadow is cast contribute to the complexity of the shadow removal problem. First, algorithms that rely on mean pixel values would probably fail in handling shadows that span different surfaces, since this implies differences in pixel-value statistics. An additional factor to consider is the textural content of the shadowed surface. Since textured surfaces usually incorporate high-order statistical information, removing shadows from textured surfaces using a linear shadow removal process might not yield satisfactory results. For example, in addition to using scale factors for removing a shadow from a given image, the variance and higher-order statistics must be reconstructed in the shadow-free region. To further illustrate this, consider Figures 2.5c and 2.5d. As can be seen in these images, the mean values are correctly reconstructed (using the scale-factor approach), yet the shadow-free regions still contain artifacts that require a high-order reconstruction. Another example is shown in Figure 2.6, where the mean value is reconstructed in the penumbra but high-order textural information is lost.

Figure 2.6: Shadow penumbra reconstruction - mean value is reconstructed correctly but high-order textural information is lost.

In addition, shadow surfaces with texture may undergo various image

processing transformations that would require special handling in the shadow removal process. This issue is further discussed in Section 2.3.

2.2.3 Geometry of the Shadowed Surface

Shadows cast on curved surfaces might also pose problems for shadow removal algorithms. Linear methods that rely on first-order statistics can fail in removing shadows on curved surfaces, since first-order statistics such as mean values vary across curved surfaces in shadow regions as well as in non-shadow regions (see Figures 2.1c and 2.1d). An example of removing a shadow cast on a curved surface using pixel statistics is given in Figure 2.7 and Figure 4.1.

Figure 2.7: Example of problems with gradient-domain shadow removal from a curved surface. (a) Shadow image. (b) Shadow-free image obtained by nullifying shadow gradients. Note that the text at the shadow boundary is missing, since reflectance gradients are also nullified. Also note that the geometry of the surface is not preserved (consider the corresponding cross-section). This is due to the fact that the scale factor is estimated using only first-order pixel statistics. (c) Shadow-free image obtained using our method. Note that the geometry and texture of the surface are preserved, since a high-order model is used without nullifying image gradients.
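A model that captures the surface shape, rather than a single mean-based scale factor, can cope with such curvature. A minimal 1D sketch in the spirit of the spline-based approximation used later in this thesis (the numbers are invented for illustration): fit a cubic spline to the lit samples of a curved intensity profile, evaluate it inside the umbra, and recover a per-pixel scale factor.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Toy curved intensity profile; the umbra covers x in [4, 7].
x = np.arange(0.0, 12.0)
surface = 50.0 + 2.0 * (x - 6.0) ** 2      # curved shadow-free profile
umbra = (x >= 4) & (x <= 7)
shadowed = surface.copy()
shadowed[umbra] *= 0.3                     # unknown attenuation to be recovered

# Fit a spline to the lit samples only, then evaluate it inside the umbra
# to approximate the lit surface there.
spline = CubicSpline(x[~umbra], shadowed[~umbra])
estimated = spline(x[umbra])
scale = shadowed[umbra] / estimated        # per-pixel scale factor, here ~0.3
print(np.round(scale, 3))
```

A single mean-ratio scale factor on this profile would flatten the curvature; the spline reproduces it, which is the motivation for the thin-plate (2D) generalization in Chapter 4.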

2.2.4 Intersection of Shadow and Reflectance Boundaries

The process of shadow removal invariably involves dealing with the shadow boundary. This necessitates distinguishing between shadow edges and reflectance edges. Reflectance edges that cross or coincide with shadow edges (whether sharp shadow edges or wide penumbrae) must be restored in consistency with the same type of reflectance edges outside the shadow. This difficulty is especially significant for algorithms that work in the gradient domain, since the gradients at such edges are composed of both reflectance and shadow changes ([52, 35]), thus requiring the algorithm to modify only the shadow term of the shadow-edge gradient. Figure 2.7 demonstrates the effect of information loss when nullifying shadow gradients that intersect with reflectance gradients.

2.3 Image Acquisition and Processing

The phenomena described above occur in the physical world, independent of the capturing device. In this section we describe issues related to the acquisition and digitization pipeline which may influence shadow removal algorithms.

2.3.1 Capturing Device

Perhaps the most prominent influence of the capturing device on shadow images is the presence of sensor noise introduced in dark shadow regions, yielding a low signal-to-noise ratio. While this noise may be scarcely visible in the original shadow image due to low intensities, it may be enhanced by the shadow removal algorithm and strongly affect the quality of the final shadow-free image (see Figure 2.5c). Information loss also occurs due to clipping of pixel intensities caused by

the limited range of camera sensors (consider Figure 2.8 for an example), as well as by quantization of similar-valued dark points in shadow regions into the same quantization bin. Shadow removal algorithms that scale the values in shadow regions do not overcome the quantization effects and might produce clipping artifacts and possibly false contours.

Figure 2.8: Example of intensity clipping in shadow pixels. The intensity variations due to the brick texture in the shadow region along the marked line are lost due to clipping and binning of dark values.

2.3.2 Post-processing

All cameras, whether high or low end, perform some form of image processing within the acquisition pipeline, in an attempt to produce high-quality and pleasing images. This includes producing high-quality shadow images. The acquisition pipeline typically involves processing such as color balancing, tone mapping, and highlight and shadow toning. While improving the quality of shadow images, such transformations may pose challenging problems to shadow removal algorithms. These transformations are often inconsistent with the shadow model and the processing used by shadow removal algorithms. Figure 2.9 shows an example where image contrast enhancement of the original image produces unpleasing contrast effects in the shadow-free image. In addition to the processing within the acquisition pipeline, images are commonly compressed by the camera into some standard output format -

the most popular by far being the JPEG compression standard. JPEG compression introduces noticeable artifacts in images. While these artifacts may be unnoticed in dark shadow regions, they become a real problem in shadow removal, as their appearance may be enhanced by the shadow removal process, as can be seen in Figure 2.5d.

Figure 2.9: The effect of image processing on shadow-free regions. (a) Details are enhanced in the shadow region, yet the image appears natural and pleasing. (b) Removing the shadow yields an unnatural image in which the contrast in the shadow-free region is inconsistent with the non-shadow region, as depicted in the corresponding cross-sections.
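The quantization loss described in Section 2.3.1 can be illustrated with a toy numeric example (values invented): distinct lit-texture intensities collapse into a single dark 8-bit level under strong attenuation, so rescaling cannot recover the texture.

```python
import numpy as np

# Four distinct lit-texture intensities under a strong shadow.
true_lit = np.array([120.0, 124.0, 128.0, 132.0])
k = 0.03                                  # strong shadow attenuation
captured = np.round(true_lit * k)         # 8-bit quantization in the camera
print(captured)                           # all four pixels land in the same bin
restored = captured / k
print(restored)                           # one flat value: the texture is gone
```

The same mechanism explains why compression artifacts and sensor noise, barely visible at low intensities, become prominent once the shadow region is scaled up.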

Chapter 3
Background

3.1 Related work

Shadow detection and removal has been approached from numerous aspects, including shadow detection and removal from multiple images and video sequences [48, 8, 36], based on special camera filters [16], using a reference background image for shadow removal from interactive projected scenes [22], and based on object models in the context of tracking, e.g., of cars and pedestrians [28, 5, 38, 43]. Shadow removal from a single image involves two basic stages: detection of shadow regions, typically expressed in the form of detecting shadow edges, and the removal of the shadows from the image. The problems and issues related to shadow removal described in Chapter 2 affect both the detection and the removal of shadows. Several methods for automatic detection of shadow boundaries given a single image have been proposed in recent years [18, 40, 29, 3]. Shadow removal methods for a single image can be classified into two categories: methods operating in the gradient domain [18, 52, 35, 20, 15, 13, 12] and methods operating in the image intensity domain [49, 21, 2, 1, 14, 50, 23]. In the next sections we outline previous work related to the detection and removal of shadows.

3.1.1 Automatic Shadow detection methods

A method that employs color ratios was suggested by Bernard and Finlayson [3]. The method detects shadow segments by first segmenting the image and then, based on several tests, assigning a score to each segment boundary representing its probability of being a shadow boundary. This method assumes certain types of indoor and outdoor illuminants, and that a diagonal model of illumination change holds [19].

Motivated by the problem of color constancy, Finlayson and Hordley proposed a model for illumination-invariant images in [17]. This approach is based on the observation, and limited to the assumption, that under a Planckian light source, and given a narrow-band-sensor camera, variations in illumination for a given surface lie along a single direction in a 2D log-chromaticity difference space. This allows the authors to obtain a 1D illumination-invariant image by projecting the 2D representation onto a line in a certain direction. Later, Finlayson et al. suggested employing this illumination-invariant model to extract shadow edge information from a single image [18]. The basic idea in this method is that edges that are due to shadow appear in the original image but not in the illumination-invariant image.

Salvador et al. suggested a method for identifying and classifying shadow regions in a single image in [39, 40]. The authors assume in their work that shadows are cast on nearly flat and non-textured surfaces, that objects are uniformly colored, and that a strong, single light source illuminates the scene. The suggested method extracts two edge maps from a given image: the first from the luminance image and the second from the image represented in a shadow-invariant color space. These two edge maps are then used in a classification step for deriving the final shadow masks. This classification step incorporates several assumptions, such as the existence of a single object in the scene and that object and background colors differ.
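The invariant-image construction described above can be sketched as follows (a simplified illustration; in practice the projection angle is calibrated for the camera): project 2D log-chromaticity coordinates onto the direction orthogonal to the illumination-variation direction, so that a surface and its shadowed version map to the same 1D value.

```python
import numpy as np

def invariant_image(rgb, theta):
    # 2D log-chromaticity coordinates (green channel as the reference).
    chi = np.stack([np.log(rgb[..., 0] / rgb[..., 1]),
                    np.log(rgb[..., 2] / rgb[..., 1])], axis=-1)
    # Project onto the direction orthogonal to the illumination direction theta.
    return chi @ np.array([-np.sin(theta), np.cos(theta)])

theta = 0.9                                # hypothetical illumination direction
lit = np.array([[[np.exp(0.2), 1.0, np.exp(-0.1)]]])
# The same surface in shadow: its log-chromaticity coordinates are shifted
# along the illumination direction.
shift = 0.5 * np.array([np.cos(theta), np.sin(theta)])
shadow = np.array([[[np.exp(0.2 + shift[0]), 1.0, np.exp(-0.1 + shift[1])]]])
print(invariant_image(lit, theta), invariant_image(shadow, theta))  # equal values
```

Because the shadow shift lies along the illumination direction, its projection onto the orthogonal direction vanishes, which is exactly why shadow edges disappear in the invariant image.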
Another approach for automatic shadow detection is based on machine learning. A method for classifying edges as shadow and non-shadow was

presented by Levine and Bhattacharyya in [29]. In this study, the authors suggested using a Support Vector Machine (SVM) [10] to classify shadow and non-shadow edges based on color-vector differences. In the training phase, samples of shadow and non-shadow edges are extracted in a supervised manner, and color-vector differences are calculated using pixels along both sides of the edges. In the classification phase, the edges of a given image are classified as shadow edges if most color-vector differences corresponding to these edges are classified as shadow differences by the trained SVM. The final shadow masks are derived by incorporating the shadow edge information into a segmentation process. Obviously, this approach requires an off-line training phase over an extensive image set in order to cover different illumination and reflectance types of surfaces.

3.1.2 Gradient-based shadow removal methods

Shadow removal based on the gradient domain was suggested by Finlayson et al. in [18, 15]. The core idea in these studies is to nullify the gradients of shadow edges and then to reconstruct the shadow-free image by integration, assuming a certain type of light source and special properties of the camera sensors. In [13] and [12], studies that relax the camera-properties assumption are presented. While making a significant impact on automatic removal of shadows from a single image, this approach suffers from an inherent problem of gradient-based shadow removal algorithms, which is related to the global integration step [48]. The integration usually results in artifacts such as changes in color balance and global smoothness of the reconstructed image (see Figures 7.11 and 7.12). Aware of the problems due to global integration, Fredembach and Finlayson suggested a shadow removal algorithm [20] in which 1D integration is performed instead of global integration.
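The core gradient-domain idea can be reduced to a 1D sketch (toy signal and a naive edge threshold, invented for illustration): nullify the gradients at the shadow boundaries, then reintegrate.

```python
import numpy as np

# A 1D intensity profile with a step-down shadow region in the middle.
signal = np.array([10.0, 10, 10, 4, 4, 4, 10, 10])
grad = np.diff(signal)
shadow_edges = np.abs(grad) > 3          # naive shadow-edge detection
grad[shadow_edges] = 0.0                 # nullify shadow-edge gradients
restored = signal[0] + np.concatenate([[0.0], np.cumsum(grad)])
print(restored)                          # flat signal at 10, shadow removed
```

In 2D the reintegration amounts to solving a Poisson equation over the whole image, which is the source of the global color-balance and smoothing artifacts discussed above; it also shows why a reflectance edge coinciding with the shadow edge would be erased along with it.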
Although impressive results are presented, the nullification of shadow-edge gradients causes textural information loss in penumbra regions (see Figure 2.6), which must be restored artificially by in-painting techniques

[20, 15], or simply left as nullified gradients [18]. Shadows cast on curved surfaces with wide penumbra regions are also strongly affected by the nullification of shadow edges, as illustrated in Figure 2.7. In an attempt to develop gradient-domain shadow removal algorithms that still preserve textural information in penumbra regions, Xu et al. [52] and Mohan et al. [35] suggested shadow removal methods that naturally extend that of Finlayson in [18]. The method described in [52] reconstructs the penumbra regions by clipping large gradients, assuming they are due to object boundaries, whereas small gradients are due to changes in illumination. In [35], the authors suggest a user-guided approach that attempts to model soft shadow edges in the intensity domain, assuming symmetric, sigmoidal-like behavior of shadows across penumbra regions (consider Figure 2.4 for a counter-example), and then using the derivatives of the shadow model to remove gradients that are due to shadow boundaries.

While the examples given in [52] and in [35] (see Figure 7.11 and Figure 7.12) show that these methods are effective in handling shadows with soft boundaries, being based on the gradient domain introduces inherent problems. In addition to the color-balance and smoothing issues mentioned above, the gradient-domain methods modify only the gradients in penumbra regions; thus these methods cannot handle non-uniform shadows, as this implies changes in illumination inside the shadow region, and not only at the shadow boundaries. Additionally, post-acquisition image processing transformations may introduce artifacts in umbra regions, and the gradient-based algorithms do not handle such artifacts either.

3.1.3 Intensity-based shadow removal methods

Another approach to shadow removal from a single image is based on the intensity domain. A simple intensity-domain shadow removal method was proposed by Baba et al. in [2] and [1].
The method is based on color and variance adjustment of shadow pixels in the RGB space, assuming a single flat shadow surface. The authors of [11] describe a method in which light-occlusion factors are used for shadow removal. The occlusion factors are estimated in the intensity domain and then smoothed in the gradient domain to obtain a smooth shadow mask. The initial estimation of the occlusion factors is obtained by assuming planar and roughly constant-valued surfaces on which shadows are cast. Two intensity-domain methods have been proposed by Finlayson et al. in [14] and [21]. In the study described in [14] the Retinex theory is employed for shadow removal, where large changes in intensity that are due to shadow boundaries are ignored in the Retinex computation. The method in [21] is based on the estimation of shadow scale factors, assuming uniform shadow intensities and hard shadows. Both methods use in-painting for completion of missing information in shadow-boundary regions, caused by the shadow removal process. In [23], a study for shadow removal that uses a Pulse Coupled Neural Network is presented. As with the studies of [21, 2, 1], this method relies on first-order statistics for determining a single scale factor of shadow regions, and thus cannot properly handle non-uniform shadows or shadows cast on curved surfaces. Wu et al. [49, 50] describe a method for shadow removal in the context of shadow matting. In this study shadow intensities are estimated based on shadow and non-shadow intensity ratios in the umbra. A Bayesian framework is used for regularization of shadow scale factors in umbra and penumbra regions. This method is capable of removing soft shadows while preserving the texture at shadow boundaries, assuming a roughly uniform shadow cast on a flat surface. Although high-quality results have been demonstrated by some of the studies described above, they do not handle some of the fundamental problems described in Chapter 2, namely non-uniform shadows, shadows
cast on curved and textured surfaces, and shadows with wide penumbra regions. Furthermore, during our experiments we observed that many shadow images undergo post-acquisition transformations which severely affect the final shadow-free results. Previously suggested shadow removal methods do not address this problem at all.

3.2 Markov Random Fields

Markov Random Fields (MRF) are commonly used in various image-processing and computer-vision tasks such as segmentation, edge detection, texture synthesis, pattern matching and surface reconstruction (an in-depth overview of MRFs can be found in [31]). The strength of Markov Random Fields is in their ability to capture high-level image characteristics in rather low-level representations. In the most basic formulation of a Markov Random Field, each pixel in the image corresponds to a random variable in the field. The field is a Markov Random Field if the probability of each random variable being assigned a certain value depends only on the values of a finite set of other random variables. An example of such a set is the variables corresponding to the pixels in the 4-neighborhood of the original pixel. When using Markov Random Fields, the goal is often to derive a labeling of the random variables such that some high-level information that exists in the underlying image is easily extracted from the labeling configuration. For example, a field of random variables over the domain {0, 1} can be used to perform edge detection, such that pixels whose corresponding random variables are labeled 1 are edge pixels and the others are non-edge pixels. A common approach for obtaining labeling configurations for the problem at hand is to define some energy function over the Markov Random Field and find a labeling configuration that minimizes it. It is worth
noting that in many cases only local minima can be found. Usually, the energy function is referred to as the posterior energy, and it consists of two terms: 1) the prior energy, capturing a priori knowledge or assumptions about typical images of the problem, and 2) the likelihood energy, expressing the probability of a given random variable being assigned some label as a function of other random variables. Finally, the labeling configuration that minimizes the posterior energy can be found using iterative methods such as gradient descent or simulated annealing, where in each iteration the posterior energy is calculated for each random variable separately and summed over all the variables to yield the total energy of the given labeling configuration.

3.3 Histogram Specification

An image histogram holds the occurrence probability of gray levels in the image. Many image-processing techniques are based on histogram operations. For example, a widely known and simple technique for contrast enhancement is histogram equalization. Histogram equalization is just one example of a histogram-modeling technique. Another example is histogram specification (also known as histogram matching). The idea of histogram specification is to transform the histogram of one image into the histogram of another image. In the continuous domain, histogram specification involves finding the inverse of the cumulative distribution function of the target image histogram [24]. In the discrete domain, only an approximation of the target histogram can be obtained, due to the quantization of gray-level values. Histogram specification in the discrete domain can be performed as follows ([24]). Denote by $u$ and $v$ two discrete random variables that take values $x_i$ and $y_i$, with probabilities $P_u(x_i)$ and $P_v(y_i)$ ($i = 0, \ldots, L-1$), respectively. We define the
following:

$$w = \sum_{x_i = 0}^{u} P_u(x_i) \qquad (3.1)$$

and

$$w_k = \sum_{i=0}^{k} P_v(y_i), \qquad k = 0, \ldots, L-1 \qquad (3.2)$$

Then, each value $x_i$ of $u$ is mapped to $y_{n_i}$ of $v$ such that $n_i$ is the smallest index for which $w_{n_i} \ge w$. Equations (3.1) and (3.2) give a practical algorithm for performing histogram specification. Note that if the values $x_i$ and $y_i$ are each sorted in ascending order, the procedure performs a non-decreasing mapping between $u$ and $v$, since for each pair $x_i \le x_{i+1}$ it holds that $n_i \le n_{i+1}$, which implies $y_{n_i} \le y_{n_{i+1}}$.

3.4 Thin-plate Models

Deformable-models theory [45] is used to describe elastic materials such as rubber, paper and flexible metals mathematically. Using this theory, one can analyze and visualize elastic objects and how they react to external forces, such as bending and gravity. Deformable-models theory also makes it possible to model internal constraints of the materials, such as elasticity and rigidity. These models have several applications in computer graphics (e.g., visualization of dynamic environments [45]), image processing (e.g., image morphing [27] and edge detection [25, 51]) and computer vision (e.g., object tracking [41] and surface reconstruction [44]). Thin-plate models are a special case of deformable models. Given a set $P$ of $n$ data points in a 3D Euclidean space, such that each $p$ is a tuple of the form $(x, y, z)$, a thin plate $f$ is a surface that minimizes the following energy
functional:

$$E(f) = \sum_{p \in P} \left\| f(x_p, y_p) - z_p \right\|^2 + \lambda \int_\Omega \left[ \left( \frac{\partial^2 f}{\partial x^2} \right)^2 + 2\left( \frac{\partial^2 f}{\partial x\,\partial y} \right)^2 + \left( \frac{\partial^2 f}{\partial y^2} \right)^2 \right] dx\,dy \qquad (3.3)$$

The left term in Equation (3.3) uses the data points $P$ as control points for the surface, since in order to minimize the energy $E(f)$ the error expressed in this term should be minimized. The right term expresses the constraint that the surface should have minimal curvature, similar to the behavior of a thin sheet of metal, hence the name thin-plate models.

3.5 Image Formulation

Following the formulation in [4], an image $I(x, y)$ is considered to be composed of the albedo $R(x, y)$ and illumination $L(x, y)$ fields as follows:

$$I_k(x, y) = R_k(x, y) \cdot L_k(x, y) \qquad (3.4)$$

where $k \in \{R, G, B\}$ and $\cdot$ denotes pixel-wise multiplication. Denoting by $\hat{L}_k(x, y)$ the illumination field without shadows, $L_k(x, y)$ can be expressed as:

$$L_k(x, y) = \hat{L}_k(x, y) \cdot C_k(x, y) \qquad (3.5)$$

where $C_k(x, y)$ represents the shadow intensities, or shadow scale factors, of channel $k$. This gives rise to the common shadow-image formulation:

$$I_k(x, y) = R_k(x, y) \cdot \hat{L}_k(x, y) \cdot C_k(x, y) \qquad (3.6)$$

For simplicity, shadow removal is performed in the log domain; thus the shadow-image formulation of Equation (3.6) is reformulated as:

$$\mathbf{I}_k(x, y) = \mathbf{R}_k(x, y) + \hat{\mathbf{L}}_k(x, y) + \mathbf{C}_k(x, y) \qquad (3.7)$$

such that $\mathbf{I}$, $\mathbf{R}$, $\hat{\mathbf{L}}$ and $\mathbf{C}$ are the logarithms of $I$, $R$, $\hat{L}$ and $C$, respectively. In the log domain, a shadow implies an additive change in intensities. Shadows are removed from $\mathbf{I}$ by first evaluating $\mathbf{C}_k(x, y)$ and then reconstructing the shadow-free image by subtracting these values from $\mathbf{I}$ in the log domain, finally exponentiating the result to obtain the shadow-free image. This process is performed independently for each channel of the RGB image. In this study, each channel of the image is considered as an intensity surface defined by the pixel intensities, as illustrated in Figure 3.1. Under this representation, shadow regions form valleys in the intensity surface.

Figure 3.1: Intensity surfaces. Shadows are perceived as valleys in the intensity surface.
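A minimal numerical sketch of Equations (3.4)–(3.7) for a single channel may clarify the log-domain formulation; all array values (albedo, illumination, scale factors) below are hypothetical toy numbers, not values from this thesis:

```python
import numpy as np

# Toy single-channel fields (hypothetical values).
R = np.full((4, 4), 0.6)       # albedo R(x, y)
L_hat = np.full((4, 4), 0.9)   # shadow-free illumination L^(x, y)
C = np.ones((4, 4))
C[1:3, 1:3] = 0.3              # shadow scale factors: a 2x2 shadowed patch

I = R * L_hat * C              # Equation (3.6): shadow-image formation

# Shadow removal in the log domain (Equation (3.7)):
# subtract the log scale factors, then exponentiate.
I_free = np.exp(np.log(I) - np.log(C))
```

Since the shadow is purely multiplicative in this model, the recovered `I_free` coincides with the shadow-free product $R \cdot \hat{L}$.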

Chapter 4

Approach and Methods for Shadow Removal

Shadow removal is typically performed in two stages: 1) the detection stage, in which shadow regions are detected, specifically by determining the shadow boundaries, and 2) the reconstruction stage, in which the shadow is actually removed and a shadow-free image is produced. In this chapter we suggest a novel approach for reconstructing shadow-free images, i.e. removing the shadows in an image once they have been detected. Any shadow detection algorithm can be used; since our method is not confined to images with certain illumination conditions, such as outdoor scene images, one could use the shadow detection algorithm that best suits the illumination conditions in a given image. The main theme in our approach is the notion that image data should not be nullified at any stage of the process; rather, image content should be preserved and, if necessary, modified. In addition to the unpleasant global effects in the image, the gradient-based methods often produce irrecoverable artifacts along the shadow edge (see Figure 2.6), and they modify only the gradients of penumbra regions. We require our method to be capable of dealing with non-uniform shadows, i.e. to modify the gradients in umbra
regions as well. Thus we use an intensity-based rather than a gradient-based approach. We also require our method to be able to handle varying penumbra width and profile, and shadows on curved and textured surfaces. Given the image formulation described in Section 3.5, the essence of the approach is to determine the correct shadow scale factors $C_k(x, y)$ in Equation (3.7) by approximating the shape of the intensity surface in shadow regions. We first show a method for removing uniform shadows from a single image, and then provide a generalization of the method for handling non-uniform shadows. It is worth noting that in both algorithms the umbra pixels are reconstructed first, while penumbra pixels are handled separately by a complementary process, described in Section 4.3.

4.1 Removing uniform shadows

4.1.1 Introduction

In this section, an algorithm for uniform shadow removal is presented. Although many of the shadow images we encountered do not exhibit perfectly uniform shadows but rather non-uniform ones, solving the uniform-shadow problem is still worthwhile, since it is easier to produce high-quality shadow-free images in these cases than in the non-uniform case. This is due to the fact that the shadow scale factor $C_k(x, y)$ in Equation (3.7) reduces in these cases to a single constant value throughout the shadow region (specifically the umbra), per color channel. The essence of the algorithm presented in this section is to evaluate the correct scale factor $C_k(x, y)$ in Equation (3.7) with which to reconstruct the shadow region. It is assumed that $C_k(x, y)$ is constant within the umbra region of the shadow. However, this is not the case across the penumbra region at the shadow boundary. Across penumbra regions, shadow intensities form a non-uniform reduction in illumination, implying non-uniform shadow scale factors. For this reason, umbra and penumbra regions are handled
separately. In Section 4.1.3 we describe an algorithm for determining the shadow scale factor in the umbra. Complete removal of shadows is achieved by combining this algorithm with the one for handling penumbra regions, described in Section 4.3. In the sequel it is assumed that the proposed algorithms are applied to each image channel separately.

4.1.2 Limitations of Existing Approaches

A classical approach to uniform shadow removal in a single image is to identify the shadow edges, zero the derivatives at these pixels and then integrate to obtain a shadow-free image. Alternatively, shadow regions can be removed by adding a constant factor, in the log domain of the image, to the intensities enclosed within the shadow edge. These approaches produce good results when the shadow edges are sharp and the shadow occurs on a flat, non-textured surface. However, poor results are obtained when shadows are cast on curved and textured surfaces. This is due to the fact that both the textural information and the surface-gradient information existing at the shadow boundary are removed. To illustrate these problems with the classic approach, consider Figure 4.1. A curved and textured surface is shown in Figure 4.1a-top. Figure 4.1a-bottom shows a cross section of the image above (dark line) and a cross section of the corresponding true shadow-free image (light line). Removing the shadow using the classic approach of nullifying shadow edges, a shadow-free image is obtained as depicted in Figure 4.1b-top. It can be seen that the shadow region of the test signal (dark line in Figure 4.1b-bottom) is incorrectly reconstructed: pixel intensities are lower than they should be compared to the reference signal (light line in Figure 4.1b-bottom). This is a direct result of the assumption that the surfaces in the image are flat; by setting the shadow-edge derivatives to zero, the derivatives of intensity that are due to the curved geometry are also nullified.
Additionally, it can be seen that, due to the zeroing of derivatives, the shadow boundary regions (intervals A and B in Figure 4.1b-bottom) appear almost flat in the reconstructed signal. This indicates the loss of textural information in these regions. In [20], in-painting is used to reconstruct these pixels; however, this does not solve for the incorrectly reconstructed intensity profile due to the curved surface, and cannot deal with local intensity variations that are not repetitive (consider Figure 2.7). The desired result is a shadow-free surface in which geometrical and textural information is preserved, as can be seen in Figure 4.1c, which shows the result of our proposed approach.

Figure 4.1: Illustration of classical gradient-based shadow removal compared to our approach. (a) An image of a curved surface with an artificial shadow, with a horizontal cross section of the shadow image (dark line), compared with a cross section of the original shadow-free image (gray line). (b) Shadow-free image as obtained using classical gradient methods. Shadow boundaries are nullified, as illustrated by the intervals marked A and B. (c) Shadow-free image obtained using our method. Compare with the cross sections in (b).
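The classical pipeline criticized above (zero the shadow-edge derivatives, then integrate) can be sketched in 1D. The helper name, signal values and edge-detection threshold are illustrative assumptions, not part of this thesis:

```python
import numpy as np

def remove_shadow_gradient(signal, thresh=0.3):
    """Classical gradient-domain removal: nullify derivatives at detected
    shadow edges, then integrate the remaining gradients."""
    grad = np.diff(signal)
    grad[np.abs(grad) > thresh] = 0.0            # nullify shadow-edge derivatives
    return signal[0] + np.concatenate([[0.0], np.cumsum(grad)])

# Hard shadow on a FLAT surface: the method recovers the constant signal.
flat = np.concatenate([np.full(5, 1.0), np.full(5, 0.4), np.full(5, 1.0)])
flat_free = remove_shadow_gradient(flat)

# The same shadow on a sloped surface: the slope's contribution at the edge
# is nullified together with the shadow step, so the reconstruction drifts.
ramp = np.linspace(0.5, 1.5, 15)
curved = ramp - 0.6 * ((np.arange(15) >= 5) & (np.arange(15) < 10))
curved_free = remove_shadow_gradient(curved)
```

On the flat signal the shadow is removed exactly; on the sloped signal the reconstruction under-shoots the true surface, mirroring the error illustrated in Figure 4.1b.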

4.1.3 Calculating uniform-shadow scale factor

In the general case, when the geometry of the objects in the scene, and accordingly the intensity surface, is not constrained, the problem of finding the correct scale factor in the umbra, along with the correct scale factors in the penumbra region, is massively ill-posed. Determining the umbra scale factor of shadows without penumbra regions, i.e. hard shadows, cast on planar surfaces, is rather straightforward. For example, this can be achieved by finding a scale factor which minimizes the squared differences of pixels inside and outside the shadow along the shadow boundary, as proposed in [21]. Unfortunately, many shadow images exhibit shadows with wide penumbra regions that are also cast on curved surfaces, for which this simple approach to estimating the umbra scale factor fails. In this work, instead of assuming a certain penumbra model (as in [35]) or modeling the lighting conditions, we use geometrical information from the intensity surface outside the shadow region in order to estimate the umbra and penumbra scale factors. We introduce only a weak assumption: the second-order derivatives of the intensity surface in penumbra regions, along with the regions surrounding the penumbra, should be smooth. This is equivalent to requiring that the intensity surface in the penumbra regions should locally act as a thin plate (see Section 3.4 for details on thin-plate models) in the shadow-free image. This requirement allows us to correctly estimate the umbra scale factor (see Figure 4.1c). Furthermore, this high-order model is key in allowing robust shadow removal on curved surfaces and for shadows with wide penumbras. To solve for the scale factor under the smoothness constraint on the second-order derivatives, we formulate the shadow removal problem as a surface reconstruction problem, where the missing data are the intensity-surface values of the penumbra pixels and the umbra scale factor.
We found the thin-plate model a convenient tool for this purpose, since it allows us to capture the high-order behavior of the intensity surface in the penumbra and in the regions surrounding it.

Figure 4.2: Umbra and penumbra masks. (a) Shadow image. (b) Penumbra mask ($M_p$). (c) Umbra mask ($M_u$).

Mathematically, we consider each channel of the image as a continuous surface $g$ defined in a 3D Euclidean space over the rectangular domain $\Omega = [0, 1] \times [0, 1]$, that is, $g : \Omega \to \mathbb{R}$. Note that the surface $g$ represents the intensity surface and not the actual geometry of the objects in the scene, although the two are correlated. Let $M_u$ and $M_p$ denote the umbra and penumbra masks, respectively (consider Figure 4.2 for an illustration). Assuming the shadow scale factor in the umbra, denoted $c$, is known, an intensity surface $f$ in which the shadow is removed in the umbra can be obtained according to Equation (3.7) as follows:

$$f(x, y) = \begin{cases} g(x, y) + c & (x, y) \in M_u \\ g(x, y) & \text{otherwise} \end{cases} \qquad (4.1)$$

In our approach, the unknown $c$ in Equation (4.1) is determined by combining the shadow-uniformity assumption with the assumption of smooth second-order derivatives. Namely, the scale factor $c$ is the value that, when added to the umbra pixels, minimizes the following energy functional of the reconstructed thin plate $f$:

$$E(f) = E_d(f) + E_s(f) \qquad (4.2)$$

such that $E_d(f)$ and $E_s(f)$ are functionals representing the two assumptions:
shadow uniformity and smoothness of second-order derivatives in penumbra regions, respectively. The assumption of a uniform shadow scale factor is expressed by the following energy term:

$$E_d(f) = \int_\Omega \omega(x, y) \left[ f(x, y) - \big( g(x, y) + C(x, y) \big) \right]^2 dx\,dy \qquad (4.3)$$

where:

$$\omega(x, y) = \begin{cases} 0 & (x, y) \in M_p \\ 1 & \text{otherwise} \end{cases} \qquad (4.4)$$

and $C(x, y) = c$ if $(x, y) \in M_u$ and 0 otherwise. The energy $E_s(f)$, related to the smoothness and curvature of the intensity surface (and in particular in the penumbra region), is calculated as follows:

$$E_s(f) = \int_\Omega \left[ \left( \frac{\partial^2 f}{\partial x^2} \right)^2 + 2\left( \frac{\partial^2 f}{\partial x\,\partial y} \right)^2 + \left( \frac{\partial^2 f}{\partial y^2} \right)^2 \right] dx\,dy \qquad (4.5)$$

Equation (4.3) expresses the requirement of preserving the original information in the umbra in terms of texture and curvature, and preserving the original values outside the shadow regions. This is achieved by penalizing values $f(x, y)$ that differ from the values $g(x, y) + C(x, y)$ inside the umbra. In addition, since $C(x, y) = 0$ in non-shadow regions, Equation (4.3) penalizes non-shadow pixel values that differ from the corresponding original values. Equation (4.5) penalizes intensity surfaces in which the second-order derivatives are not smooth, i.e. intensity surfaces with high curvature. This term embodies the ability of the approximation model to capture the global geometry of the surface, i.e. to determine the correct scale factor on curved surfaces. Together, the two equations give rise to a model describing thin-plate behavior of the intensity surface. Formulating the problem in the discrete domain is straightforward: the surface $g$ is defined as a lattice consisting of pixel values, i.e. $I(x, y)$, $x, y \in \mathbb{Z}$,
and Equation (4.5) is calculated using finite differences.

4.1.4 An Efficient Algorithm for Scale Factor Derivation

Even when the value $c$ is given, finding the $f$ that minimizes $E(f)$ in the discrete domain is a computationally intensive task even for moderately sized images, as it requires the solution of a linear system with one linear equation per image pixel. Several studies address the computational cost of surface reconstruction, e.g., by using finite-element methods [9] or multi-resolution approaches [33]. Nevertheless, we developed a light-weight approximation to the proposed thin-plate model using 1D cubic smoothing splines, which are known for their robustness in fitting noisy data while maintaining continuity of the first- and second-order derivatives. The splines are constructed based on the shadow edge and penumbra region, as illustrated in Figure 4.3. Given an image with a shadow, as in Figure 4.3a, we create a 1D spline for each pixel of the shadow edge. Each spline is extended bidirectionally from its associated pixel in the directions perpendicular to the shadow edge, as seen in Figure 4.3a. The extent of the splines was found to depend on the thickness of the penumbra region; in our experiments we set the extent to three times the width of the penumbra. Note that, according to Equation (4.3), all data sites of the splines that are within the penumbra region (according to $M_p$) are ignored, as illustrated in Figure 4.3c. Let $s : \mathbb{R} \to \mathbb{R}^2$ denote a line in parametric representation and let $t$ be the line parameter. Then $s(t)$ is the coordinate of the line in the image plane, and $g(s(t))$ and $C(s(t))$ are the pixel intensity and shadow scale factor at $s(t)$, respectively. In addition, denote by $f(t)$ the corresponding 1D cubic smoothing spline, such that $f(t)$ is the spline value at coordinate $s(t)$.
Each cubic smoothing spline $f$ is a curve minimizing the energy functional (Equations (4.2)–(4.5)) calculated in 1D with respect to the 1D sampled image data $g(s(t))$:

$$E(f) = \int \omega(t) \left[ f(t) - \big( g(s(t)) + C(s(t)) \big) \right]^2 dt + \int \left( \frac{\partial^2 f}{\partial t^2} \right)^2 dt \qquad (4.6)$$

where:

$$\omega(t) = \begin{cases} 0 & s(t) \in M_p \\ 1 & \text{otherwise} \end{cases} \qquad (4.7)$$

and $C(s(t)) = c$ if $s(t) \in M_u$ and equals 0 elsewhere. The term $c$ is the scale factor within the shadow region and is constant and identical for all splines. The total energy for the set $S$ of splines is denoted $\hat{E}$ and calculated as:

$$\hat{E} = \sum_{f \in S} E(f) \qquad (4.8)$$

Evaluating the unique scale factor $c$ for the shadow region is performed by minimizing the energy in Equation (4.8) using a gradient descent algorithm over $c$. Note that $\hat{E}$ as a function of the scale factor is always convex, so the gradient descent algorithm attains a global minimum. This stems from the fact that each spline energy function in Equation (4.6) is convex [47], and, by the assumption of a uniform scale factor in the umbra, i.e. all splines attain their minimum energy at the same time, the sum of all spline energies (Equation (4.8)) is also a convex function. In this manner we obtain the correct scale factor $c = C(x, y)$ for the umbra region of the shadow, taking into account the curved intensity surface, which may cause non-zero differences between pixels on either side of the shadow boundary. Given the scale factor, the shadow-free image can be reconstructed inside the shadow region (excluding the penumbra) using Equation (3.7). Although the thin plate allows estimating the correct scale factor in the umbra and in many cases correctly approximates the geometry of the shadow-free surface in the penumbra, it cannot in itself be used as the intensity surface in the penumbra: being smooth (due to the smoothness constraint), it does not capture the textural information of the surface, which appears in the form of local variations in intensity. The final shadow-free image is obtained by handling the penumbra regions according to the method described in Section 4.3. Reconstructing the umbra region by adding a constant scale factor to all umbra pixels using Equation (3.7) clearly preserves the textural information. An example of a reconstructed umbra is given in Figure 4.4.

Figure 4.3: Finding the shadow scale factor using splines. (a) Spline layout along the shadow edge, projected on the image plane. (b) Penumbra mask in which penumbra pixels are labeled in white. (c) Final sampling sites of the splines. Penumbra regions are not sampled.

Figure 4.4: Reconstructing the umbra region of a uniform shadow. (a) Shadow image. (b) Applying the shadow scale factor to the umbra region. (c) Penumbra pixels are left untouched (bold line).
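The per-line energy of Equations (4.6)–(4.8) and the search for the common scale factor $c$ can be sketched with a discrete 1D second-difference ("thin-plate") model. The function name, the regularization weight, and the coarse candidate scan standing in for gradient descent are all illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def umbra_scale_factor(g, umbra, penumbra, lam=10.0, candidates=None):
    """Estimate the (log-domain) umbra scale factor c along one spline line.

    g        : 1D log-intensity samples along a line crossing the shadow
    umbra    : boolean mask of umbra samples on the line
    penumbra : boolean mask of penumbra samples (no data term, cf. Eq. (4.7))

    For each candidate c, fit a discrete 1D curve f minimizing
      sum_t w_t (f_t - (g_t + c*umbra_t))^2 + lam * sum_t (f_{t-1} - 2 f_t + f_{t+1})^2
    and keep the c with the lowest total energy (cf. Equations (4.6)-(4.8)).
    """
    n = len(g)
    w = (~penumbra).astype(float)              # Equation (4.7): penumbra ignored
    D = np.zeros((n - 2, n))                   # second-difference operator
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    A = np.diag(w) + lam * D.T @ D
    if candidates is None:
        candidates = np.linspace(0.0, 1.0, 101)
    best_c, best_e = None, np.inf
    for c in candidates:
        y = g + c * umbra                      # shift umbra samples by candidate c
        f = np.linalg.solve(A, w * y)          # minimizer of the quadratic energy
        e = np.sum(w * (f - y) ** 2) + lam * np.sum((D @ f) ** 2)
        if e < best_e:
            best_c, best_e = c, e
    return best_c
```

Because the total energy is convex in $c$ (as argued above), the coarse scan could be replaced by gradient descent or a closed-form quadratic minimization without changing the result.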

4.2 Removing Non-uniform Shadows

4.2.1 Introduction

Non-uniform shadows are caused by varying intensities of ambient light reaching the shadow regions, as well as by inter-reflections of illumination from other objects in the scene. It is observed that in uniform shadows the original geometry of the intensity surface in shadow regions is preserved, whereas in non-uniform shadows the perceived intensity-surface geometry is affected by the amount of illumination reduction in the shadow. This is illustrated by the examples of uniform and non-uniform shadows given in Figure 2.1. While the method described in Section 4.1, as well as recent intensity-based methods, demonstrates high-quality results (see Section 7.1 and [21, 49]), they all assume, whether implicitly or explicitly, uniform shadows. Removing non-uniform shadows while assuming a global scale factor inside the umbra usually results in noticeable artifacts in the shadow-free regions, as shown in Figure 4.5. We argue that a substantial percentage of images contain non-uniform shadows; thus, in order to produce high-quality shadow-free images, the assumption of uniform shadows must be relaxed. The method proposed in this section builds on the method described in Section 4.1. Here, the thin-plate model is used to approximate the entire intensity surface. Using a single model for the entire intensity surface in the shadow region allows a per-pixel estimate of the scale factor, and gives rise to a robust shadow removal algorithm capable of handling non-uniform shadows cast on arbitrarily shaped and possibly textured surfaces.

Figure 4.5: Examples of removing non-uniform shadows with global (uniform) scale factors. (a) Images containing non-uniform shadows. (b) Cross sections along the lines in the shadow images. (c) Shadow removal using a global scale factor. (d) Cross sections of the shadow-free images along the lines in (a).

4.2.2 Intensity Surface Approximation

In the case of a non-uniform shadow, the shadow scale factor $C(x, y)$ (Equation (3.7)) cannot be assumed constant, as it is spatially dependent. Determination of the shadow scale factors $C(x, y)$ is performed by deriving an approximation of the shadow-free intensity surface within the umbra and penumbra regions, using a thin-plate surface model. Again, the motivation for adopting the thin-plate model for the problem of non-uniform shadow removal lies in its ability to capture the global geometry of a surface given relatively few anchor points on the surface. This approach allows effective removal of shadows cast on curved surfaces, as well as on textured surfaces. Since the intensity-surface approximation is calculated over the entire umbra region, as opposed to the method described in Section 4.1, Equations (4.3) and (4.4) are modified accordingly. Let $M_s$ and $M_r$ denote the shadow and surround masks, respectively (Figures 4.6b and 4.6c). Note that $M_s$ now includes both the umbra and penumbra regions of the shadow, that is, $M_s = M_u \cup M_p$ as defined in Section 4.1. Furthermore, both umbra and penumbra regions are approximated during the reconstruction process. The smoothness energy is calculated according to Equation (4.5) given in Section 4.1.

Figure 4.6: Masks of a shadow image. (a) Shadow image. (b) Shadow mask $M_s$. (c) Surround mask $M_r$.

In order to capture the geometry of the intensity surface in the shadow region, the approximated surface should coincide with the intensity surface in the non-shadowed regions surrounding the shadow. Thus, the pixels surrounding the shadow region in the intensity surface are used as data sites for the thin-plate approximation, and the following data term, replacing the one given in Equation (4.3), is minimized over the surround pixels defined by the mask $M_r$:

$$E_d(f) = \int_\Omega \omega_r(x, y) \left[ f(x, y) - I(x, y) \right]^2 dx\,dy \qquad (4.9)$$

using $\omega_r$ defined as follows:

$$\omega_r(x, y) = \begin{cases} 1 & (x, y) \in M_r \\ 0 & \text{otherwise} \end{cases} \qquad (4.10)$$

Note that $C(x, y)$ is not solved for directly in Equation (4.9) as in Equation (4.3), but rather implicitly, by approximating the intensity surface in $M_s$. An approximation of the shadow-free intensity surface can then be found by minimizing the functional composed of Equation (4.5) and the newly defined energy given in Equation (4.9):

$$f = \arg\min_{\hat{f}} \; E_s(\hat{f}) + E_d(\hat{f}) \qquad (4.11)$$

Note that since the pixels of $M_s$ are not used as data sites in Equation (4.9), the geometry of the approximated shadow region is influenced solely by the pixels in $M_r$, under the smoothness constraint imposed by Equation (4.5). This can be seen in the example given in Figure 4.7. Note that the approximated surface in the example follows the global geometry of the surface, emphasizing the advantage of using the thin-plate model in reconstructing shadow-free images of curved surfaces.

Figure 4.7: An example of an approximated shadow-free intensity surface. (a) Shadow image. (b) Intensity-surface view of the shadow image. (c) Approximated shadow-free surface. Note that the approximated surface is smooth in the shadow region.
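As a rough stand-in for the minimization of Equation (4.11), an off-the-shelf thin-plate-spline interpolant can be fitted to the surround pixels (the data sites of Equation (4.9)) and evaluated over the shadow mask. The helper name and smoothing value are assumptions, and SciPy's RBF machinery replaces the thesis's discrete formulation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def approximate_shadow_free(I, M_s, M_r, smoothing=1.0):
    """Approximate the shadow-free intensity surface f (cf. Equation (4.11)).

    Pixels in the surround mask M_r act as data sites (Equation (4.9));
    a thin-plate-spline RBF stands in for the thin-plate minimization and
    is evaluated over the shadow mask M_s, whose pixels carry no data term.
    """
    f = I.astype(float).copy()
    sites = np.argwhere(M_r).astype(float)        # (row, col) data sites
    values = I[M_r].astype(float)
    tps = RBFInterpolator(sites, values,
                          kernel='thin_plate_spline', smoothing=smoothing)
    targets = np.argwhere(M_s).astype(float)
    f[M_s] = tps(targets)                          # smooth fill of the shadow region
    return f
```

On a toy planar intensity surface, the fitted thin plate reproduces the plane across the shadow region, illustrating why the approximation follows the global surface geometry rather than the shadowed values.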

49 4.2.3 Determining Smooth Scale Factors in the Umbra Solving Equation (4.11), a smooth intensity surface is obtained at pixels M s. Although the geometrical shape of the shadow-free surface is also obtained, in many cases, namely where shadows are cast on textured or highly-structured surfaces, f poorly approximates the desired shadow-free intensity surface, i.e. the textural information of the shadow region is lost, as can be seen in Figure 4.7c. Nevertheless, recall that the goal in using the thin-plate approximation is not to find the exact shadow-free intensity surface, but rather to determine the shadow scale factors. By assuming that the scale factors field C (x, y) in Equation (3.7) is smooth in the umbra, the shadowfree approximation of the intensity surface gives sufficient information for obtaining a shadow-free image of high-quality. Once the shadow-free intensity surface f has been approximated, a scale factors field can be derived by a simple pixel-wise subtraction: C (x, y) = f (x, y) I (x, y) (4.12) over the mask M s. Note however that the resultant C (x, y) may be non-smooth since it incorporates the fine details of I (x, y), as demonstrated in Figure 4.8a. Naively smoothing C (x, y) prior to applying it to I (x, y), e.g., using homogenous smoothing, will most likely yield unsatisfactory results and introduce artifacts in the shadow-free image such as Mach bands at former shadow boundaries and in textured regions, as can be seen in Figures 4.8b and 4.8c. Ideally, only the structure of the intensity surface should be considered when calculating C(x, y) in Equation (4.12). In other words, we would like to obtain a smoothed version of I(x, y) in which edges are preserved accurately. Several techniques have been tried for extracting the structure of the intensity surface. These include smoothing with Gaussian kernels, directional smoothing and bilateral filtering [46]. Unfortunately, none of our 40

Figure 4.8: Shadow removal result when smoothing the scale factors image naively. (a) Scale factors image of one of the channels. Note the undesirable high-frequency content. (b) Shadow-free image containing Mach bands and slightly smoothed texture in the shadow-free region. (c) Another example: Mach bands in the shadow-free text.

experiments using these techniques produced satisfactory results: the resultant images were either not smooth enough, or edges were changed to a degree affecting the final scale factors image. In addition, we also tried to employ the thin-plate approach for obtaining smooth intensity surfaces by sub-sampling the original image and completing the missing information as performed in Equation (4.11). However, the sub-sampling approach did not produce satisfactory results either. The source of the problem is in considering pixels of varying intensities (arising in textured surfaces) as data points for the thin-plate minimization. Large variation in pixel intensities affects the thin-plate approximation both outside and inside the shadow region. Outside the shadow region, namely in M_r, pixels that differ considerably from their surrounding pixels have a bending effect on the approximated intensity surface, due to the data-fitting term in the thin-plate formulation (Equation (4.9)) in conjunction with the smoothness term (Equation (4.5)). Inside the shadow region (i.e. M_s), the variation in pixel intensities affects the smoothness of the resultant C(x, y) in Equation (4.12). For instance, in Figure 4.6, the dark pixels between the bricks affect the smoothness of the reconstructed scale factors field shown in Figure 4.8a. The approach taken in this study is to carefully select the anchor points used in the thin-plate minimization (mask M_r and mask M_s). Specifically,

anchor points in M_r and M_s should originate from the same intensity distribution, thus supplying the same information source within the shadow and outside the shadow. For example, in the case of Figure 4.6, M_r and M_s should contain pixel intensities originating from the bricks and not from the darker pixels between the bricks. Selecting the appropriate anchor points is based on the assumption that shadows preserve monotonicity of pixel intensities in the umbra, i.e. the order of two non-shadowed pixels with respect to their intensities does not change when the pixels are shadowed. This is a direct result of the assumption that the illumination field is smooth and nearly uniform in the umbra, in conjunction with Equation (3.6). Moreover, although a non-uniform shadow implies non-uniformity of the illumination field, shadow intensities are locally uniform in the umbra. Exploiting this property, we derived a simple heuristic that allows us to discard pixels that affect the smoothness of C(x, y). The histograms of the original M_r and M_s pixel values are calculated. Based on the monotonicity property, these two intensity distributions should display strong correlation, as shown in Figure 4.9a. Discarding pixels in both M_r and M_s with probability below a specific threshold produces a collection of pixels in the shadow (M_s) and surround (M_r) that are likely to have originated from the same source. This collection of pixels defines new shadow and surround masks which are used as anchor pixels in the intensity surface estimation. Consider Figures 4.9b and 4.9c for an illustration. It can be seen that the darker pixels between the bricks have been discarded and are not used as anchor points. In the approximation process described in Section 4.2.2, Equation (4.11) is applied using the new masks, denoted M̂_r and M̂_s respectively. The scale factors field is obtained using Equation (4.12).
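The histogram-based anchor selection can be sketched as follows. The 0.5 probability threshold follows Figure 4.9a; the bin count and the normalization of each histogram by its largest bin are assumptions of this sketch rather than details stated in the text.

```python
import numpy as np

def select_anchor_pixels(I, M_s, M_r, p_thresh=0.5, bins=32):
    """Keep as anchors only pixels whose intensity falls in a
    well-populated bin of their own region's normalized histogram, so
    the retained shadow and surround pixels come from the same dominant
    intensity distribution (the bricks, not the dark gaps between them)."""
    def keep(mask):
        vals = I[mask]
        hist, edges = np.histogram(vals, bins=bins)
        prob = hist / hist.max()                      # normalize to [0, 1]
        which = np.clip(np.digitize(vals, edges) - 1, 0, bins - 1)
        kept = np.zeros_like(mask)
        kept[mask] = prob[which] >= p_thresh
        return kept
    return keep(M_s), keep(M_r)
```

The two returned masks play the roles of M̂_s and M̂_r in the surface estimation.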
This produces a smooth yet incomplete scale factors field, denoted Ĉ(x, y). To reconstruct the missing values in Ĉ(x, y), namely the shadow pixels in M_s not used as anchor points, Equation (4.11) is used again with ω(x, y) = 1 iff (x, y) ∈ M̂_s. This step guarantees a smooth C(x, y) field in the umbra

Figure 4.9: Calculating smooth umbra scale factors using anchor pixels. (a) Normalized histograms of shadow (left) and non-shadow (right) regions. (b) and (c) Shadow mask M_s and surround mask M_r respectively, composed of only anchor points, taking pixels with probability above 0.5 (as marked in (a)). (d) The resultant smooth scale factors image in the umbra region; compare with Figure 4.8a.

due to the smoothness term E_s in Equation (4.11). An example of a smooth scale factors image in the umbra region using the anchor points approach is shown in Figure 4.9d.

4.3 Determining Penumbra Scale Factors

Due to the fact that illumination might change abruptly in penumbra regions, particularly when transitioning from shadow to non-shadow regions at hard shadow edges, the assumptions of scale factor smoothness and of monotonicity of pixel intensities do not hold. As a consequence, the methods described in Sections 4.1 and 4.2 are not applicable to penumbra regions: the method in Section 4.1 assumes a uniform shadow scale factor, and the method in Section 4.2 assumes monotonicity for extracting anchor points and smoothness of the scale factors image for the final determination of the scale factors field. Width and rate of transition from light to shadow are penumbra properties which are determined by many factors, such as the shape and distance of the light source, the shape and distance of the occluding object, and diffraction of light rays [37]. Since these parameters cannot be easily extracted from a single image, a different approach should be adopted for determining penumbra scale factors in a robust way. Estimating the penumbra scale factors by interpolating values (linear or higher order) between the internal shadow scale factor and the null value outside the shadow produces incorrect results and artifacts, as observed also by [21]. Another approach is to analytically model the penumbra and then use this model to calculate a smooth scale factors field, using the subtraction scheme of Equation (4.12). A method for obtaining smooth models of penumbra profiles was suggested in [35], in which the authors assume a symmetric, sigmoidal-like shape of penumbra cross sections. However, as can be seen in Figure 2.4, profiles of penumbra regions do not necessarily follow a particular model such as linear or sigmoidal. In the proposed approach, instead of assuming or modeling a specific penumbra profile, we only assume that scale factors in penumbra regions are locally smooth in the direction tangent to the shadow edge. Furthermore, it is again assumed that the shadow-free intensity surface should behave as a thin plate in the penumbra region. Thus we use the thin plate f estimated above (by either one of the methods described in Sections 4.1 and 4.2) to reconstruct the penumbra region. Assuming that the penumbra region is known (a method for determining the penumbra mask is detailed in Section 6.2), penumbra scale factors are determined as follows.
First, an initial estimate of the penumbra scale factors is obtained by calculating the difference between the smooth thin plate and the penumbra pixels in the original image:

    Ĉ(x, y) = f(x, y) − I(x, y)    (4.13)

Note that if the intensity surface were perfectly smooth, the estimated scale factors Ĉ(x, y) could be used to correctly reconstruct the penumbra

regions, using Equation (3.7). However, for textured surfaces such a correction would eliminate the texture. In these cases, the estimated scale factors Ĉ(x, y) are not smooth and vary according to the surface texture. Hence a smoothing process is required in order to produce a smoothly varying profile of scale factors across the penumbra. Homogeneous smoothing of Ĉ(x, y) produces artifacts in the reconstructed penumbra region in the form of Mach bands (consider Figure 4.10). This is due to the hard-shadow profile of the shadow edge. To overcome this problem we perform directional smoothing rather than homogeneous smoothing.

Figure 4.10: Calculating penumbra scale factors. (a) Shadow image. (b) Mach bands effect due to homogeneous smoothing of the scale factors in the penumbra region. (c) Result obtained when performing directional smoothing of the scale factors.

In our implementation we use the shadow edge information to compute the direction of the smoothing vector. Finally, the umbra scale factors calculated by one of the methods described in Sections 4.1 and 4.2, together with the penumbra scale factors calculated in this section, are combined to form a complete scale factors field, which is smooth in the umbra and locally smooth in the penumbra. The smooth C(x, y) is then used in Equation (3.7) to reconstruct the shadow-free image. As can be seen in Figure 4.10c, the proposed method for handling penumbra regions preserves the textural information well, without assuming a penumbra model.
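Directional smoothing along the shadow edge can be sketched as follows. The text only states that shadow edge information gives the smoothing direction; estimating the tangent as the perpendicular of the gradient of a blurred shadow mask, and the averaging window length, are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage

def directional_smooth(C, shadow_mask, penumbra_mask, half_len=5):
    """Smooth the estimated scale factors C along the direction tangent
    to the shadow edge, inside the penumbra only. The tangent at each
    pixel is the 90-degree rotation of the gradient of a Gaussian-blurred
    shadow mask (the gradient points across the edge, its perpendicular
    runs along it)."""
    m = ndimage.gaussian_filter(shadow_mask.astype(float), sigma=3.0)
    gy, gx = np.gradient(m)
    norm = np.hypot(gx, gy) + 1e-12
    tx, ty = -gy / norm, gx / norm          # unit tangent components

    ys, xs = np.nonzero(penumbra_mask)
    offs = np.arange(-half_len, half_len + 1)
    # Sample C at sub-pixel positions along each pixel's tangent line.
    sy = ys[:, None] + offs[None, :] * ty[ys, xs][:, None]
    sx = xs[:, None] + offs[None, :] * tx[ys, xs][:, None]
    samples = ndimage.map_coordinates(C, [sy.ravel(), sx.ravel()],
                                      order=1, mode='nearest')
    out = C.copy()
    out[ys, xs] = samples.reshape(len(ys), -1).mean(axis=1)
    return out
```

Averaging along the tangent leaves the profile across the edge untouched, which is why the hard-shadow transition survives while texture noise along the edge is suppressed.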

Chapter 5

Shadow-free Region Enhancement

As discussed in Section 2.3, images often undergo various transformations in the acquisition pipeline and in post-processing by imaging software. These transformations often affect shadow images in a manner that is inconsistent with the assumptions of shadow removal algorithms, so that artifacts are introduced in the shadow-free image. An example is shown in Figure 2.9, in which the shadow-free region displays high contrast compared to the non-shadow region. Furthermore, noise (e.g., sensor noise) in shadow regions is often enhanced and emphasized in the shadow-free image (see Figure 5.1a for an example). Thus there is a need for an enhancement algorithm that attenuates the noise effects in the shadow-free regions and attempts to equate their appearance with that of their non-shadow counterparts. A successful algorithm for shadow-free region enhancement should fulfill two basic requirements: it must preserve the original texture in the shadow region, and it should be general enough to handle the various types of transformations a shadow region might undergo, as well as shadow-free regions with noise. In [1] a scheme for shadow removal is suggested in which the mean and variance of pixels in shadow regions are adjusted based on pixels of the corresponding non-shadow surface.

Figure 5.1: Shadow removal from an image containing noise in the shadow region. (a) Shadow-free result of the image in Figure 2.5c obtained by the proposed method. (b) Enhancement using local variance adjustment, similar to [1]. (c) Enhancement using the proposed local histogram specification.

While this approach improves the similarity in appearance of shadow-free regions and their non-shadow counterparts in many images, it fails in very noisy images. An example is given in Figure 5.1b. It can be seen that, although noise is significantly attenuated, the contrast in the shadow-free region is globally reduced, producing unpleasing low contrast in the dark regions between the bricks. In order to achieve better results, an adaptive process should be used. In the example of Figure 5.1 the noisy brick regions should be enhanced, but the dark regions between the bricks should be left untouched. Our proposed method for enhancing shadow-free regions is based on the assumption that two matching patches, one inside the shadow-free region and one outside it, should have similar statistical behavior. Thus, we perform histogram specification [24] independently on each patch within the shadow-free region. Histogram specification was chosen since it allows controlling the pixel statistics of each patch and, more importantly, it is consistent with the shadow monotonicity property discussed in Section 4.2.3. Since surface texture is defined by the relative pixel intensities (and their spatial arrangement), performing a monotonic mapping (e.g.,

histogram specification) on each patch amounts to preserving the original texture. It is worth noting that since this enhancement method is performed on shadow-free images as a post-processing step, it can be used following any shadow removal algorithm. In our implementation, matching of shadow-free patches to corresponding non-shadow patches is performed using fast normalized cross-correlation (NCC) [30]. Figure 5.1c shows an example of enhancing a shadow-free region using the patch-based histogram specification approach. A subtle point in performing the histogram specification is whether to preserve the original mean value of the patches in the shadow-free region. It is assumed that the correct mean values were reconstructed by the shadow removal algorithm. Thus, it is typically desirable to maintain the original mean values of the patches. This is specifically true for shadows cast on curved surfaces, since in these cases pixel intensities vary across the shadow region. Figure 5.2 displays an example of enhancing a shadow-free region on a curved surface when the mean values of shadow-free patches are not preserved.

Figure 5.2: An example of performing shadow-free region enhancement without preserving local mean values inside the shadow-free region.

In the case of shadows cast on flat surfaces, performing histogram specification without preserving the patch mean values may yield equal or better results. In all our examples of shadows on flat surfaces, we witnessed an improvement in the final result, namely the shadow-free regions appeared more similar to their non-shadow counterparts compared to the results obtained when maintaining the original mean values. This can lead to an automatic procedure for deciding whether or not to maintain the original mean values: in cases where the intensity surface is roughly flat in the non-shadow region and in the umbra region, the original mean values should not be preserved; otherwise, that is, when the surface is curved, the mean values of patches in the shadow-free region should be preserved. In our implementation, the decision of whether or not to preserve the mean values of shadow-free patches was set manually based on this criterion. Two examples of shadow-free region enhancement without preserving mean values of shadow patches are given in Figures 7.11 and 7.12.
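A rank-based histogram specification of a shadow-free patch onto a matched non-shadow patch might look like the sketch below. Quantile matching is one standard way to implement histogram specification and is used here for illustration; the thesis does not specify the exact implementation. The mapping is monotonic by construction, so the relative pixel order, and hence the texture, is preserved, and the optional mean preservation mirrors the discussion above.

```python
import numpy as np

def histogram_specification(src, ref, preserve_mean=True):
    """Map the pixel values of patch `src` so that their distribution
    matches that of patch `ref`, via quantile matching. Optionally shift
    the result back to the original mean of `src`."""
    s_flat = src.ravel()
    ranks = np.argsort(np.argsort(s_flat))           # rank of each pixel
    q = (ranks + 0.5) / s_flat.size                  # its empirical quantile
    r_sorted = np.sort(ref.ravel())
    r_q = (np.arange(r_sorted.size) + 0.5) / r_sorted.size
    out = np.interp(q, r_q, r_sorted).reshape(src.shape)
    if preserve_mean:
        out += src.mean() - out.mean()
    return out
```

Applied per patch (after NCC matching), this adapts local statistics while leaving the dark regions between the bricks untouched, since each patch is specified against its own reference.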

Chapter 6

Umbra, Penumbra and Surround Masks Derivation

6.1 User-guided shadow masks derivation

Automatic detection of shadow regions in a single image is a challenging problem, and several approaches have been proposed in recent years (see Section for a survey). Since previous analytical shadow detection methods assume certain illuminants or scene properties, and previous machine-learning based methods require a supervised training phase which usually involves manual annotation of shadow edges in dozens of images, we propose a simple yet effective method for extracting the corresponding masks of shadow regions based on user input. Since in this work we concentrate on acquiring shadow-free images of high quality, automation is not our primary concern but rather the quality of the final output; requiring the user to supply initial cues for the system therefore seems reasonable and may fit well into photo-editing software. Shadow mask derivation is performed by region growing [6], using a Support Vector Machine (SVM) [10] for pixel classification. The RGB color space is used as the feature space for the SVM. Given a shadow image, the

user is asked to supply the coordinates of pixels (e.g., by mouse clicking) in different shadow and non-shadow surfaces. The corresponding color vectors are regarded as shadow and non-shadow observations. As opposed to the SVM-based method described in [29], our method trains the SVM on the same image on which the shadow detection is performed, thus producing a classifier specifically trained to classify shadow and non-shadow regions in the image at hand. After the SVM training phase, a region growing phase is initiated in which the coordinates of the shadow observations supplied by the user are used as seeds. In each region growing iteration, new pixels which are in the 4-neighborhood of pixels already labeled as shadow are sent to the SVM for classification. If the SVM classifies a pixel as a shadow pixel, the pixel is added to the shadow mask, i.e. the shadow mask is updated in the corresponding location. The region growing iterations continue until no more updates to the shadow mask occur. The reason why the SVM approach taken in our study works well can be illustrated intuitively by considering the examples in Figure 6.1. The top row contains shadow images, and 3D plots of the RGB color vectors of the images are given in the middle row. Note that the points corresponding to shadow pixels in each image (marked by red circles) are close to each other in the 3D space and can be separated from the other pixels rather easily. This property is well captured by the SVM and allows it to classify shadow pixels accurately. A polynomial kernel of degree 3 was used for the SVM, and for each user-selected coordinate we took the 3x3 neighboring pixels as the observation vectors. On most of our test images, five coordinates or fewer for each shadow/non-shadow region were sufficient to obtain an accurate shadow mask.
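The train-then-grow scheme can be sketched as follows, using scikit-learn's SVC as a stand-in SVM. For efficiency the sketch classifies all pixels once up front instead of querying the SVM per neighbor during growing; the resulting mask is the same, since growing still only admits pixels 4-connected to a seed. The RGB normalization is an assumption of this sketch.

```python
import numpy as np
from collections import deque
from sklearn.svm import SVC

def grow_shadow_mask(img, shadow_seeds, nonshadow_seeds):
    """Train an SVM (polynomial kernel, degree 3) on RGB samples from
    3x3 neighborhoods around user clicks, then region-grow a shadow
    mask from the shadow seeds via 4-connectivity."""
    h, w, _ = img.shape

    def observations(coords):
        obs = []
        for y, x in coords:
            patch = img[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            obs.append(patch.reshape(-1, 3).astype(float) / 255.0)
        return np.vstack(obs)

    Xs, Xn = observations(shadow_seeds), observations(nonshadow_seeds)
    X = np.vstack([Xs, Xn])
    y = np.r_[np.ones(len(Xs)), np.zeros(len(Xn))]
    clf = SVC(kernel='poly', degree=3).fit(X, y)

    # Classify every pixel once; growing then restricts the mask to the
    # connected component(s) of the seeds.
    rgb = img.reshape(-1, 3).astype(float) / 255.0
    is_shadow = clf.predict(rgb).reshape(h, w) == 1.0

    mask = np.zeros((h, w), dtype=bool)
    queue = deque(shadow_seeds)
    for sy, sx in shadow_seeds:
        mask[sy, sx] = True
    while queue:
        cy, cx = queue.popleft()
        for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] and is_shadow[ny, nx]:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```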
The shadow mask is then used to derive the surround mask (M_r in Section 4.2.2) by expanding a wide band along the shadow mask boundaries (e.g., using Gaussian smoothing and thresholding). In cases where shadow

Figure 6.1: Examples of user-guided extraction of shadow masks based on region growing and SVM. Top row: Shadow images. The circles signify the supplied shadow and non-shadow observations. Middle row: 3D plots of the RGB color vectors in the images. The red circles mark points corresponding to shadow regions. Bottom row: Shadow masks extracted using region growing and SVM.

boundaries coincide with object boundaries (e.g., as occurs in Figure 7.4), an additional object mask is derived in a similar manner and used for refining the derived shadow and surround masks.

6.2 Labeling penumbra pixels

In order to obtain an accurate estimate of the shadow scale factors using the methods suggested in this study, it is imperative to correctly determine the penumbra regions. Incorrect labeling of penumbra pixels as non-shadow pixels or as internal shadow pixels may produce an incorrect scale factor estimate. This is illustrated in Figure 6.2: when computing the thin-plate splines in the method described in Section 4.1, the fitting term (Equation (4.3)) of the energy functional forces the spline (dashed line in Figure 6.2) to fit the incorrectly labeled penumbra pixels (depicted as circles on the solid line in Figure 6.2), thus resulting in an attenuated scale factor. A similar effect may occur with the method described in Section 4.2.

Figure 6.2: The effect of incorrect labeling of penumbra pixels. Solid line: the original shadow signal. Dashed line: reconstructed spline using incorrectly labeled penumbra pixels (marked by small circles), resulting in a smaller scale factor than expected. Dotted line: correct spline reconstruction when the penumbra width is correctly determined.

In an attempt to correctly label penumbra pixels, we consider image gradients. It has been shown [42, 48] that natural images are characterized by the sparsity of their gradient magnitude fields. This implies that for any image pixel, the probability of it being an edge pixel, and in particular a shadow edge pixel, is relatively low. Based on this insight, we introduce a method to accurately label penumbra pixels. Consider an image I of size N × M and its gradient magnitude field |∇I|. The gradient magnitude distribution image P^∇I is calculated as follows:

    P^∇I_xy = Pr(|∇I(x, y)|)    (6.1)
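Equation (6.1) amounts to a histogram lookup over the image's own gradient magnitudes; a minimal sketch (the bin count is an assumption):

```python
import numpy as np

def gradient_probability(I, bins=256):
    """P_grad(x, y) = Pr(|grad I(x, y)|), estimated from the histogram of
    gradient magnitudes of the whole image (Eq. 6.1). Because natural
    gradient fields are sparse, edge pixels fall in sparsely populated
    bins and so receive low probability."""
    gy, gx = np.gradient(I.astype(float))
    mag = np.hypot(gx, gy)
    hist, edges = np.histogram(mag, bins=bins)
    prob = hist / mag.size                            # empirical probability per bin
    which = np.clip(np.digitize(mag, edges) - 1, 0, bins - 1)
    return prob[which]
```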

where Pr(|∇I(x, y)|) is the probability of the gradient magnitude at pixel (x, y) in the image, calculated using the histogram of |∇I|. At this stage, simply labeling penumbra pixels by direct thresholding of P^∇I produces many false positives and misses. Instead, our suggested approach exploits pixels with strong evidence of being edge pixels, and propagates this evidence to their neighboring pixels. Thus low-evidence pixels are supported by neighboring edge pixels. We implement this scheme using a Markov Random Field (MRF) [31] over the image P^∇I, such that a unique random variable is associated with each pixel in P^∇I. The random variables are defined over the domain {1, 0}, denoting whether the underlying pixel should be labeled as an edge pixel or not, respectively. Thus, the probability of a pixel being labeled as an edge pixel depends not only on its probability in P^∇I but on the probabilities of its neighboring pixels as well. Let F_xy be the MRF random variable at location (x, y). We define the posterior energy [31] of the field F as follows:

    Σ_{x,y} (1 − F_xy)[(1 − P^∇I_xy) + Σ_{F_x'y' ∈ N_xy} ψ(F_x'y', F_xy)] + α F_xy    (6.2)

where N_xy represents the 4-neighborhood of pixel (x, y). The term (1 − P^∇I_xy) is the prior energy related to the probability of a pixel being an edge pixel. ψ(F_x'y', F_xy) is the likelihood energy of a pixel, which depends on its neighboring pixels:

    ψ(F_x'y', F_xy) = 1 if (F_x'y' ≠ F_xy and |P^∇I_x'y' − P^∇I_xy| ≤ t1)
                      1 if (F_x'y' = F_xy and |P^∇I_x'y' − P^∇I_xy| ≥ t2)
                      0 otherwise    (6.3)

The likelihood energy penalizes neighboring pixels differing in label but with similar edge probabilities, and neighboring pixels of similar label but differing edge probabilities. The defined MRF is parameterized by α, t1 and t2. Parameter α bounds the local energy of a pixel when labeled as an

edge pixel. Parameters t1 and t2 are thresholds, t1 < t2. Although numerous methods exist for automatically estimating optimal parameter values [31], we set α, t1 and t2 manually: t1 = 0.05 and t2 = (max P^∇I − min P^∇I) − t1; α was likewise fixed by hand. These values produced good results on all our test images. A description of the labeling algorithm is given in Appendix A. Given the minimizing F, we extract the penumbra pixels by exploiting shadow edge information, namely by finding binary connected components in F originating from pixels that appear both in the shadow edge image and in F. Figure 6.3 contains several examples of penumbra pixel labeling using the proposed MRF process.
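The thesis's labeling algorithm is given in its Appendix A and is not reproduced here; as a stand-in, the energy of Equation (6.2) can be minimized with iterated conditional modes (ICM), a simple greedy scheme. The value α = 0.25 is an assumption of this sketch (the text sets α manually but its value is not given here), and t2 defaults to the (max − min) − t1 rule above.

```python
import numpy as np

def label_edges_icm(P, alpha=0.25, t1=0.05, t2=None, n_iter=5):
    """Greedy (ICM) minimization of the posterior energy of Eq. (6.2)
    over binary labels F (1 = edge pixel). alpha is an assumed value."""
    if t2 is None:
        t2 = (P.max() - P.min()) - t1
    h, w = P.shape

    def psi(f_n, f, p_n, p):
        # Eq. (6.3): penalize label disagreement with similar evidence,
        # and label agreement with very different evidence.
        if f_n != f and abs(p_n - p) <= t1:
            return 1.0
        if f_n == f and abs(p_n - p) >= t2:
            return 1.0
        return 0.0

    F = np.zeros((h, w), dtype=int)
    for _ in range(n_iter):
        for y in range(h):
            for x in range(w):
                nbrs = [(yy, xx) for yy, xx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                        if 0 <= yy < h and 0 <= xx < w]

                def energy(f):
                    # Own term of Eq. (6.2) for candidate label f ...
                    own = alpha if f == 1 else (1.0 - P[y, x]) + sum(
                        psi(F[yy, xx], f, P[yy, xx], P[y, x]) for yy, xx in nbrs)
                    # ... plus the coupling this pixel contributes to the
                    # terms of neighbors currently labeled 0.
                    back = sum(psi(f, 0, P[y, x], P[yy, xx])
                               for yy, xx in nbrs if F[yy, xx] == 0)
                    return own + back

                F[y, x] = 1 if energy(1) < energy(0) else 0
    return F
```

Pixels with rare gradient magnitudes (low P) pay a large (1 − P) penalty when labeled 0 and therefore flip to 1, while the ψ term stops isolated flips in flat regions, which is the evidence-propagation behavior described above.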

Figure 6.3: Penumbra pixel labeling using the MRF. Left column: Shadow images. Middle column: Gradient distribution images. Right column: Penumbra masks produced by the suggested MRF process.

Chapter 7

Experimental Results

In this chapter we give example results of our shadow removal approach. We start by displaying results demonstrating the ability of the proposed method to remove uniform shadows (Section 4.1). Then results of our non-uniform shadow removal method (Section 4.2) are shown. Results demonstrating the benefit of the shadow-free region enhancement process described in Chapter 5 are also given, along with a comparison of our method with two gradient-based methods.

7.1 Uniform Shadow results

Several real shadow images and the resulting shadow-free images produced by the algorithm described in Section 4.1 are shown in Figure 7.1. Figures 7.1a-b contain images of shadows cast on curved surfaces. It can be seen in the corresponding results that the shadows are completely removed and that photographic-quality results are obtained. Despite the fact that the shadows in these images have wide penumbras, the correct scale factors are determined and, furthermore, the textural information is preserved. The results in Figures 7.1c-d show additional examples demonstrating the

ability of our algorithm to remove shadows while preserving textural information in penumbra regions. This can be further appreciated by examining the shadow-free text image in Figure 7.1c, where the text remains visually intact despite the significant width of the penumbra.

7.2 Non-uniform Shadow results

In this section we demonstrate the ability of the algorithm described in Section 4.2 to remove non-uniform shadows. Figure 7.2 contains a non-uniform shadow cast on a flat surface exhibiting little texture. A high-quality shadow-free result is obtained using the proposed method. Another example of a non-uniform shadow cast on a flat surface is given in Figure 7.3. The surface in this example exhibits high-frequency texture. Figure 4.5 (top image) contains a shadow-free result for this image obtained assuming a uniform shadow, i.e. using a global scale factor in the umbra. It can be seen at the bottom of the image that the shadow is not removed completely. The result of our proposed method is given in Figure 7.3b. Two more examples of shadows cast on flat surfaces are given in Figures 7.4 and 7.5. Figure 7.4 contains a non-uniform shadow with soft regions (the basketball net shadow). Figure 7.5 contains a complex non-uniform shadow with soft regions, especially in the right part of the image. These examples demonstrate the ability of our proposed method to handle complex non-uniform shadows with soft shadow regions. Next we examine the ability of our method to handle non-uniform shadows cast on curved surfaces. Figures 7.6 and 7.7 contain non-uniform shadows cast on curved surfaces. Note that the shadow in Figure 7.6 has a wide penumbra and that the shadow in Figure 7.7 exhibits high non-uniformity (see the bottom row of Figure 4.5). Nevertheless, due to the high-order model proposed in our method, the correct geometry of the curved surfaces is obtained, yielding high-quality results.

Figure 7.1: Experimental results of the method described in Section 4.1. (a-b) Shadow images of curved surfaces. (c-e) Shadow images of highly textured surfaces.

In Figure 7.8 we show that the proposed method can be used to remove highlight regions in images. Despite the fact that the highlight region exhibits high non-uniformity, the proposed method can be used without any changes to recover a highlight-free image.

7.3 Enhancement results and Comparison

Figures 7.9 and 7.10 further emphasize the benefit of using the shadow-free region enhancement suggested in Chapter 5. Figure 7.9a contains a non-uniform shadow cast on a textured surface. The shadow-free region in Figure 7.9b has high contrast, as illustrated in Figure 2.9. A more pleasing result is obtained using the proposed enhancement process, as can be seen in Figure 7.9c. Figure 7.10 demonstrates the ability of the enhancement process to compensate for lack of self-shading. It can be seen in the shadow-free image of Figure 7.10b that the stone texture in the shadow-free region appears flat since self-shading is absent. The shadow-free enhancement process can yield a more natural shadow-free image, as demonstrated in Figure 7.10c. In Figures 7.11 and 7.12 we compare our method to the ones suggested

Figure 7.2: Shadow removal using the method suggested in Section

Figure 7.3: Shadow removal from a textured surface. Compare with Figure 4.5.

Figure 7.4: Removing a non-uniform shadow cast on a flat surface. Note the non-uniform shadows of the player and the basketball net.

Figure 7.5: Shadow removal example. Complex non-uniform shadow with soft shadow regions.

Figure 7.6: Removing a non-uniform shadow cast on a curved surface. Note the wide penumbra.

Figure 7.7: Removing a non-uniform shadow cast on a curved surface. Compare with Figure 4.5.

Figure 7.8: Highlight removal example. Highlight removed using our proposed method.

in [52] and in [35], respectively. Since the methods of [52] and [35] operate in the gradient domain and a global Poisson equation is solved during the reconstruction phase, the color balance and global smoothness of the reconstructed image are affected, as can be seen in Figures 7.11b and 7.12b (taken from [52] and [35], respectively). Figures 7.11c and 7.12c display the results of our proposed algorithm, using the shadow-free region enhancement process while preserving the original mean values in shadow-free patches. Results of applying the enhancement process without preserving the mean values of shadow patches are shown in Figures 7.11d and 7.12d. It can be seen that since in these images the shadows are cast on flat surfaces, applying the enhancement process without preserving the original mean values does not modify the perceived geometry of the surface, and better results are obtained than with mean value preservation.

Figure 7.9: Shadow-free region enhancement example: (a) Non-uniform shadow cast on a textured surface. (b) Shadow-free image. (c) Shadow-free image following post-processing enhancement.

Figure 7.10: Shadow-free region enhancement example: compensating for self-shading in shadow-free regions. (a) Shadow image. (b) Shadow-free image in which the stones in the shadow-free region appear flat due to absence of self-shading. (c) Applying the shadow-free region enhancement process produces a more natural shadow-free image.

Figure 7.11: Comparison of the proposed method with the method of [52]. (a) and (b) Shadow and shadow-free images taken from [52]. (c) Shadow-free image produced by the proposed method, without changing mean values in the enhancement process. (d) Applying the post-processing enhancement when mean values of shadow-free patches are allowed to change.

Figure 7.12: Comparison of the proposed method with the method in [35]. (a) and (b) Shadow and shadow-free images taken from [35]. (c) Shadow-free image produced by the proposed method, without changing mean values in the enhancement process. (d) Applying the post-processing enhancement when mean values of shadow-free patches are allowed to change.


Computational Photography and Video: Intrinsic Images. Prof. Marc Pollefeys Dr. Gabriel Brostow Computational Photography and Video: Intrinsic Images Prof. Marc Pollefeys Dr. Gabriel Brostow Last Week Schedule Computational Photography and Video Exercises 18 Feb Introduction to Computational Photography

More information

Shadows. COMP 575/770 Spring 2013

Shadows. COMP 575/770 Spring 2013 Shadows COMP 575/770 Spring 2013 Shadows in Ray Tracing Shadows are important for realism Basic idea: figure out whether a point on an object is illuminated by a light source Easy for ray tracers Just

More information

CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS

CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS 130 CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS A mass is defined as a space-occupying lesion seen in more than one projection and it is described by its shapes and margin

More information

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier Computer Vision 2 SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung Computer Vision 2 Dr. Benjamin Guthier 3. HIGH DYNAMIC RANGE Computer Vision 2 Dr. Benjamin Guthier Pixel Value Content of this

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

Why study Computer Vision?

Why study Computer Vision? Why study Computer Vision? Images and movies are everywhere Fast-growing collection of useful applications building representations of the 3D world from pictures automated surveillance (who s doing what)

More information

INTRODUCTION TO IMAGE PROCESSING (COMPUTER VISION)

INTRODUCTION TO IMAGE PROCESSING (COMPUTER VISION) INTRODUCTION TO IMAGE PROCESSING (COMPUTER VISION) Revision: 1.4, dated: November 10, 2005 Tomáš Svoboda Czech Technical University, Faculty of Electrical Engineering Center for Machine Perception, Prague,

More information

5. Feature Extraction from Images

5. Feature Extraction from Images 5. Feature Extraction from Images Aim of this Chapter: Learn the Basic Feature Extraction Methods for Images Main features: Color Texture Edges Wie funktioniert ein Mustererkennungssystem Test Data x i

More information

Rotation Invariant Image Registration using Robust Shape Matching

Rotation Invariant Image Registration using Robust Shape Matching International Journal of Electronic and Electrical Engineering. ISSN 0974-2174 Volume 7, Number 2 (2014), pp. 125-132 International Research Publication House http://www.irphouse.com Rotation Invariant

More information

Character Recognition

Character Recognition Character Recognition 5.1 INTRODUCTION Recognition is one of the important steps in image processing. There are different methods such as Histogram method, Hough transformation, Neural computing approaches

More information

Haresh D. Chande #, Zankhana H. Shah *

Haresh D. Chande #, Zankhana H. Shah * Illumination Invariant Face Recognition System Haresh D. Chande #, Zankhana H. Shah * # Computer Engineering Department, Birla Vishvakarma Mahavidyalaya, Gujarat Technological University, India * Information

More information

[2006] IEEE. Reprinted, with permission, from [Wenjing Jia, Huaifeng Zhang, Xiangjian He, and Qiang Wu, A Comparison on Histogram Based Image

[2006] IEEE. Reprinted, with permission, from [Wenjing Jia, Huaifeng Zhang, Xiangjian He, and Qiang Wu, A Comparison on Histogram Based Image [6] IEEE. Reprinted, with permission, from [Wenjing Jia, Huaifeng Zhang, Xiangjian He, and Qiang Wu, A Comparison on Histogram Based Image Matching Methods, Video and Signal Based Surveillance, 6. AVSS

More information

Specular Reflection Separation using Dark Channel Prior

Specular Reflection Separation using Dark Channel Prior 2013 IEEE Conference on Computer Vision and Pattern Recognition Specular Reflection Separation using Dark Channel Prior Hyeongwoo Kim KAIST hyeongwoo.kim@kaist.ac.kr Hailin Jin Adobe Research hljin@adobe.com

More information

Critique: Efficient Iris Recognition by Characterizing Key Local Variations

Critique: Efficient Iris Recognition by Characterizing Key Local Variations Critique: Efficient Iris Recognition by Characterizing Key Local Variations Authors: L. Ma, T. Tan, Y. Wang, D. Zhang Published: IEEE Transactions on Image Processing, Vol. 13, No. 6 Critique By: Christopher

More information

Analysis of Image and Video Using Color, Texture and Shape Features for Object Identification

Analysis of Image and Video Using Color, Texture and Shape Features for Object Identification IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661,p-ISSN: 2278-8727, Volume 16, Issue 6, Ver. VI (Nov Dec. 2014), PP 29-33 Analysis of Image and Video Using Color, Texture and Shape Features

More information

Colour Reading: Chapter 6. Black body radiators

Colour Reading: Chapter 6. Black body radiators Colour Reading: Chapter 6 Light is produced in different amounts at different wavelengths by each light source Light is differentially reflected at each wavelength, which gives objects their natural colours

More information

Color Space Invariance for Various Edge Types in Simple Images. Geoffrey Hollinger and Dr. Bruce Maxwell Swarthmore College Summer 2003

Color Space Invariance for Various Edge Types in Simple Images. Geoffrey Hollinger and Dr. Bruce Maxwell Swarthmore College Summer 2003 Color Space Invariance for Various Edge Types in Simple Images Geoffrey Hollinger and Dr. Bruce Maxwell Swarthmore College Summer 2003 Abstract This paper describes a study done to determine the color

More information

Model-based segmentation and recognition from range data

Model-based segmentation and recognition from range data Model-based segmentation and recognition from range data Jan Boehm Institute for Photogrammetry Universität Stuttgart Germany Keywords: range image, segmentation, object recognition, CAD ABSTRACT This

More information

1.Some Basic Gray Level Transformations

1.Some Basic Gray Level Transformations 1.Some Basic Gray Level Transformations We begin the study of image enhancement techniques by discussing gray-level transformation functions.these are among the simplest of all image enhancement techniques.the

More information

Simultaneous surface texture classification and illumination tilt angle prediction

Simultaneous surface texture classification and illumination tilt angle prediction Simultaneous surface texture classification and illumination tilt angle prediction X. Lladó, A. Oliver, M. Petrou, J. Freixenet, and J. Martí Computer Vision and Robotics Group - IIiA. University of Girona

More information

Real-Time Detection of Road Markings for Driving Assistance Applications

Real-Time Detection of Road Markings for Driving Assistance Applications Real-Time Detection of Road Markings for Driving Assistance Applications Ioana Maria Chira, Ancuta Chibulcutean Students, Faculty of Automation and Computer Science Technical University of Cluj-Napoca

More information

CS 534: Computer Vision Segmentation and Perceptual Grouping

CS 534: Computer Vision Segmentation and Perceptual Grouping CS 534: Computer Vision Segmentation and Perceptual Grouping Ahmed Elgammal Dept of Computer Science CS 534 Segmentation - 1 Outlines Mid-level vision What is segmentation Perceptual Grouping Segmentation

More information

Automatic Colorization of Grayscale Images

Automatic Colorization of Grayscale Images Automatic Colorization of Grayscale Images Austin Sousa Rasoul Kabirzadeh Patrick Blaes Department of Electrical Engineering, Stanford University 1 Introduction ere exists a wealth of photographic images,

More information

Color and Shading. Color. Shapiro and Stockman, Chapter 6. Color and Machine Vision. Color and Perception

Color and Shading. Color. Shapiro and Stockman, Chapter 6. Color and Machine Vision. Color and Perception Color and Shading Color Shapiro and Stockman, Chapter 6 Color is an important factor for for human perception for object and material identification, even time of day. Color perception depends upon both

More information

International Journal of Computer Engineering and Applications, Volume XII, Issue I, Jan. 18, ISSN

International Journal of Computer Engineering and Applications, Volume XII, Issue I, Jan. 18,   ISSN International Journal of Computer Engineering and Applications, Volume XII, Issue I, Jan. 18, www.ijcea.com ISSN 2321-3469 SURVEY ON OBJECT TRACKING IN REAL TIME EMBEDDED SYSTEM USING IMAGE PROCESSING

More information

Motivation. Gray Levels

Motivation. Gray Levels Motivation Image Intensity and Point Operations Dr. Edmund Lam Department of Electrical and Electronic Engineering The University of Hong ong A digital image is a matrix of numbers, each corresponding

More information

An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant

An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant Sivalogeswaran Ratnasingam and Steve Collins Department of Engineering Science, University of Oxford, OX1 3PJ, Oxford, United Kingdom

More information

Applying Catastrophe Theory to Image Segmentation

Applying Catastrophe Theory to Image Segmentation Applying Catastrophe Theory to Image Segmentation Mohamad Raad, Majd Ghareeb, Ali Bazzi Department of computer and communications engineering Lebanese International University Beirut, Lebanon Abstract

More information

Digital Image Processing. Prof. P.K. Biswas. Department of Electronics & Electrical Communication Engineering

Digital Image Processing. Prof. P.K. Biswas. Department of Electronics & Electrical Communication Engineering Digital Image Processing Prof. P.K. Biswas Department of Electronics & Electrical Communication Engineering Indian Institute of Technology, Kharagpur Image Segmentation - III Lecture - 31 Hello, welcome

More information

2D image segmentation based on spatial coherence

2D image segmentation based on spatial coherence 2D image segmentation based on spatial coherence Václav Hlaváč Czech Technical University in Prague Center for Machine Perception (bridging groups of the) Czech Institute of Informatics, Robotics and Cybernetics

More information

Chapter 7. Conclusions and Future Work

Chapter 7. Conclusions and Future Work Chapter 7 Conclusions and Future Work In this dissertation, we have presented a new way of analyzing a basic building block in computer graphics rendering algorithms the computational interaction between

More information

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images 1 Introduction - Steve Chuang and Eric Shan - Determining object orientation in images is a well-established topic

More information

Fingerprint Classification Using Orientation Field Flow Curves

Fingerprint Classification Using Orientation Field Flow Curves Fingerprint Classification Using Orientation Field Flow Curves Sarat C. Dass Michigan State University sdass@msu.edu Anil K. Jain Michigan State University ain@msu.edu Abstract Manual fingerprint classification

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

Physics-based Vision: an Introduction

Physics-based Vision: an Introduction Physics-based Vision: an Introduction Robby Tan ANU/NICTA (Vision Science, Technology and Applications) PhD from The University of Tokyo, 2004 1 What is Physics-based? An approach that is principally concerned

More information

CHAPTER 2 TEXTURE CLASSIFICATION METHODS GRAY LEVEL CO-OCCURRENCE MATRIX AND TEXTURE UNIT

CHAPTER 2 TEXTURE CLASSIFICATION METHODS GRAY LEVEL CO-OCCURRENCE MATRIX AND TEXTURE UNIT CHAPTER 2 TEXTURE CLASSIFICATION METHODS GRAY LEVEL CO-OCCURRENCE MATRIX AND TEXTURE UNIT 2.1 BRIEF OUTLINE The classification of digital imagery is to extract useful thematic information which is one

More information

Overcompressing JPEG images with Evolution Algorithms

Overcompressing JPEG images with Evolution Algorithms Author manuscript, published in "EvoIASP2007, Valencia : Spain (2007)" Overcompressing JPEG images with Evolution Algorithms Jacques Lévy Véhel 1, Franklin Mendivil 2 and Evelyne Lutton 1 1 Inria, Complex

More information

Vivekananda. Collegee of Engineering & Technology. Question and Answers on 10CS762 /10IS762 UNIT- 5 : IMAGE ENHANCEMENT.

Vivekananda. Collegee of Engineering & Technology. Question and Answers on 10CS762 /10IS762 UNIT- 5 : IMAGE ENHANCEMENT. Vivekananda Collegee of Engineering & Technology Question and Answers on 10CS762 /10IS762 UNIT- 5 : IMAGE ENHANCEMENT Dept. Prepared by Harivinod N Assistant Professor, of Computer Science and Engineering,

More information

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009 Learning and Inferring Depth from Monocular Images Jiyan Pan April 1, 2009 Traditional ways of inferring depth Binocular disparity Structure from motion Defocus Given a single monocular image, how to infer

More information

Human Motion Detection and Tracking for Video Surveillance

Human Motion Detection and Tracking for Video Surveillance Human Motion Detection and Tracking for Video Surveillance Prithviraj Banerjee and Somnath Sengupta Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur,

More information

Lecture 4: Spatial Domain Transformations

Lecture 4: Spatial Domain Transformations # Lecture 4: Spatial Domain Transformations Saad J Bedros sbedros@umn.edu Reminder 2 nd Quiz on the manipulator Part is this Fri, April 7 205, :5 AM to :0 PM Open Book, Open Notes, Focus on the material

More information

Notes 9: Optical Flow

Notes 9: Optical Flow Course 049064: Variational Methods in Image Processing Notes 9: Optical Flow Guy Gilboa 1 Basic Model 1.1 Background Optical flow is a fundamental problem in computer vision. The general goal is to find

More information

CS 664 Segmentation. Daniel Huttenlocher

CS 664 Segmentation. Daniel Huttenlocher CS 664 Segmentation Daniel Huttenlocher Grouping Perceptual Organization Structural relationships between tokens Parallelism, symmetry, alignment Similarity of token properties Often strong psychophysical

More information

Texture Segmentation by Windowed Projection

Texture Segmentation by Windowed Projection Texture Segmentation by Windowed Projection 1, 2 Fan-Chen Tseng, 2 Ching-Chi Hsu, 2 Chiou-Shann Fuh 1 Department of Electronic Engineering National I-Lan Institute of Technology e-mail : fctseng@ccmail.ilantech.edu.tw

More information

CHAPTER 4. Numerical Models. descriptions of the boundary conditions, element types, validation, and the force

CHAPTER 4. Numerical Models. descriptions of the boundary conditions, element types, validation, and the force CHAPTER 4 Numerical Models This chapter presents the development of numerical models for sandwich beams/plates subjected to four-point bending and the hydromat test system. Detailed descriptions of the

More information

Local Image preprocessing (cont d)

Local Image preprocessing (cont d) Local Image preprocessing (cont d) 1 Outline - Edge detectors - Corner detectors - Reading: textbook 5.3.1-5.3.5 and 5.3.10 2 What are edges? Edges correspond to relevant features in the image. An edge

More information

Region-based Segmentation

Region-based Segmentation Region-based Segmentation Image Segmentation Group similar components (such as, pixels in an image, image frames in a video) to obtain a compact representation. Applications: Finding tumors, veins, etc.

More information

HOW USEFUL ARE COLOUR INVARIANTS FOR IMAGE RETRIEVAL?

HOW USEFUL ARE COLOUR INVARIANTS FOR IMAGE RETRIEVAL? HOW USEFUL ARE COLOUR INVARIANTS FOR IMAGE RETRIEVAL? Gerald Schaefer School of Computing and Technology Nottingham Trent University Nottingham, U.K. Gerald.Schaefer@ntu.ac.uk Abstract Keywords: The images

More information

Topic 4 Image Segmentation

Topic 4 Image Segmentation Topic 4 Image Segmentation What is Segmentation? Why? Segmentation important contributing factor to the success of an automated image analysis process What is Image Analysis: Processing images to derive

More information

Detecting motion by means of 2D and 3D information

Detecting motion by means of 2D and 3D information Detecting motion by means of 2D and 3D information Federico Tombari Stefano Mattoccia Luigi Di Stefano Fabio Tonelli Department of Electronics Computer Science and Systems (DEIS) Viale Risorgimento 2,

More information

CHAPTER 6 PERCEPTUAL ORGANIZATION BASED ON TEMPORAL DYNAMICS

CHAPTER 6 PERCEPTUAL ORGANIZATION BASED ON TEMPORAL DYNAMICS CHAPTER 6 PERCEPTUAL ORGANIZATION BASED ON TEMPORAL DYNAMICS This chapter presents a computational model for perceptual organization. A figure-ground segregation network is proposed based on a novel boundary

More information

Digital Makeup Face Generation

Digital Makeup Face Generation Digital Makeup Face Generation Wut Yee Oo Mechanical Engineering Stanford University wutyee@stanford.edu Abstract Make up applications offer photoshop tools to get users inputs in generating a make up

More information

MR IMAGE SEGMENTATION

MR IMAGE SEGMENTATION MR IMAGE SEGMENTATION Prepared by : Monil Shah What is Segmentation? Partitioning a region or regions of interest in images such that each region corresponds to one or more anatomic structures Classification

More information

Image Processing. Filtering. Slide 1

Image Processing. Filtering. Slide 1 Image Processing Filtering Slide 1 Preliminary Image generation Original Noise Image restoration Result Slide 2 Preliminary Classic application: denoising However: Denoising is much more than a simple

More information

Introduction to Digital Image Processing

Introduction to Digital Image Processing Fall 2005 Image Enhancement in the Spatial Domain: Histograms, Arithmetic/Logic Operators, Basics of Spatial Filtering, Smoothing Spatial Filters Tuesday, February 7 2006, Overview (1): Before We Begin

More information

Supervised Sementation: Pixel Classification

Supervised Sementation: Pixel Classification Supervised Sementation: Pixel Classification Example: A Classification Problem Categorize images of fish say, Atlantic salmon vs. Pacific salmon Use features such as length, width, lightness, fin shape

More information

CHAPTER 3 FACE DETECTION AND PRE-PROCESSING

CHAPTER 3 FACE DETECTION AND PRE-PROCESSING 59 CHAPTER 3 FACE DETECTION AND PRE-PROCESSING 3.1 INTRODUCTION Detecting human faces automatically is becoming a very important task in many applications, such as security access control systems or contentbased

More information

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality

More information

EE 701 ROBOT VISION. Segmentation

EE 701 ROBOT VISION. Segmentation EE 701 ROBOT VISION Regions and Image Segmentation Histogram-based Segmentation Automatic Thresholding K-means Clustering Spatial Coherence Merging and Splitting Graph Theoretic Segmentation Region Growing

More information

An Approach for Reduction of Rain Streaks from a Single Image

An Approach for Reduction of Rain Streaks from a Single Image An Approach for Reduction of Rain Streaks from a Single Image Vijayakumar Majjagi 1, Netravati U M 2 1 4 th Semester, M. Tech, Digital Electronics, Department of Electronics and Communication G M Institute

More information

AN ALGORITHM FOR BLIND RESTORATION OF BLURRED AND NOISY IMAGES

AN ALGORITHM FOR BLIND RESTORATION OF BLURRED AND NOISY IMAGES AN ALGORITHM FOR BLIND RESTORATION OF BLURRED AND NOISY IMAGES Nader Moayeri and Konstantinos Konstantinides Hewlett-Packard Laboratories 1501 Page Mill Road Palo Alto, CA 94304-1120 moayeri,konstant@hpl.hp.com

More information

Motion Estimation for Video Coding Standards

Motion Estimation for Video Coding Standards Motion Estimation for Video Coding Standards Prof. Ja-Ling Wu Department of Computer Science and Information Engineering National Taiwan University Introduction of Motion Estimation The goal of video compression

More information

CHAPTER 3 IMAGE ENHANCEMENT IN THE SPATIAL DOMAIN

CHAPTER 3 IMAGE ENHANCEMENT IN THE SPATIAL DOMAIN CHAPTER 3 IMAGE ENHANCEMENT IN THE SPATIAL DOMAIN CHAPTER 3: IMAGE ENHANCEMENT IN THE SPATIAL DOMAIN Principal objective: to process an image so that the result is more suitable than the original image

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

An Introduction to Content Based Image Retrieval

An Introduction to Content Based Image Retrieval CHAPTER -1 An Introduction to Content Based Image Retrieval 1.1 Introduction With the advancement in internet and multimedia technologies, a huge amount of multimedia data in the form of audio, video and

More information

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION Panca Mudjirahardjo, Rahmadwati, Nanang Sulistiyanto and R. Arief Setyawan Department of Electrical Engineering, Faculty of

More information

A Simple Vision System

A Simple Vision System Chapter 1 A Simple Vision System 1.1 Introduction In 1966, Seymour Papert wrote a proposal for building a vision system as a summer project [4]. The abstract of the proposal starts stating a simple goal:

More information

Texture Image Segmentation using FCM

Texture Image Segmentation using FCM Proceedings of 2012 4th International Conference on Machine Learning and Computing IPCSIT vol. 25 (2012) (2012) IACSIT Press, Singapore Texture Image Segmentation using FCM Kanchan S. Deshmukh + M.G.M

More information

Digital Image Processing. Prof. P. K. Biswas. Department of Electronic & Electrical Communication Engineering

Digital Image Processing. Prof. P. K. Biswas. Department of Electronic & Electrical Communication Engineering Digital Image Processing Prof. P. K. Biswas Department of Electronic & Electrical Communication Engineering Indian Institute of Technology, Kharagpur Lecture - 21 Image Enhancement Frequency Domain Processing

More information

Shadows in the graphics pipeline

Shadows in the graphics pipeline Shadows in the graphics pipeline Steve Marschner Cornell University CS 569 Spring 2008, 19 February There are a number of visual cues that help let the viewer know about the 3D relationships between objects

More information

Isophote-Based Interpolation

Isophote-Based Interpolation Isophote-Based Interpolation Bryan S. Morse and Duane Schwartzwald Department of Computer Science, Brigham Young University 3361 TMCB, Provo, UT 84602 {morse,duane}@cs.byu.edu Abstract Standard methods

More information

Lecture 6: Edge Detection

Lecture 6: Edge Detection #1 Lecture 6: Edge Detection Saad J Bedros sbedros@umn.edu Review From Last Lecture Options for Image Representation Introduced the concept of different representation or transformation Fourier Transform

More information

Motion Estimation. There are three main types (or applications) of motion estimation:

Motion Estimation. There are three main types (or applications) of motion estimation: Members: D91922016 朱威達 R93922010 林聖凱 R93922044 謝俊瑋 Motion Estimation There are three main types (or applications) of motion estimation: Parametric motion (image alignment) The main idea of parametric motion

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Lecture # 4 Digital Image Fundamentals - II ALI JAVED Lecturer SOFTWARE ENGINEERING DEPARTMENT U.E.T TAXILA Email:: ali.javed@uettaxila.edu.pk Office Room #:: 7 Presentation Outline

More information

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into 2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into the viewport of the current application window. A pixel

More information

Real-Time Human Detection using Relational Depth Similarity Features

Real-Time Human Detection using Relational Depth Similarity Features Real-Time Human Detection using Relational Depth Similarity Features Sho Ikemura, Hironobu Fujiyoshi Dept. of Computer Science, Chubu University. Matsumoto 1200, Kasugai, Aichi, 487-8501 Japan. si@vision.cs.chubu.ac.jp,

More information

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level

More information

Comment on Numerical shape from shading and occluding boundaries

Comment on Numerical shape from shading and occluding boundaries Artificial Intelligence 59 (1993) 89-94 Elsevier 89 ARTINT 1001 Comment on Numerical shape from shading and occluding boundaries K. Ikeuchi School of Compurer Science. Carnegie Mellon dniversity. Pirrsburgh.

More information

An ICA based Approach for Complex Color Scene Text Binarization

An ICA based Approach for Complex Color Scene Text Binarization An ICA based Approach for Complex Color Scene Text Binarization Siddharth Kherada IIIT-Hyderabad, India siddharth.kherada@research.iiit.ac.in Anoop M. Namboodiri IIIT-Hyderabad, India anoop@iiit.ac.in

More information

DECOMPOSING and editing the illumination of a photograph

DECOMPOSING and editing the illumination of a photograph IEEE TRANSACTIONS ON IMAGE PROCESSING, 2017 1 Illumination Decomposition for Photograph with Multiple Light Sources Ling Zhang, Qingan Yan, Zheng Liu, Hua Zou, and Chunxia Xiao, Member, IEEE Abstract Illumination

More information

5LSH0 Advanced Topics Video & Analysis

5LSH0 Advanced Topics Video & Analysis 1 Multiview 3D video / Outline 2 Advanced Topics Multimedia Video (5LSH0), Module 02 3D Geometry, 3D Multiview Video Coding & Rendering Peter H.N. de With, Sveta Zinger & Y. Morvan ( p.h.n.de.with@tue.nl

More information

Lecture 4 Image Enhancement in Spatial Domain

Lecture 4 Image Enhancement in Spatial Domain Digital Image Processing Lecture 4 Image Enhancement in Spatial Domain Fall 2010 2 domains Spatial Domain : (image plane) Techniques are based on direct manipulation of pixels in an image Frequency Domain

More information

Effects Of Shadow On Canny Edge Detection through a camera

Effects Of Shadow On Canny Edge Detection through a camera 1523 Effects Of Shadow On Canny Edge Detection through a camera Srajit Mehrotra Shadow causes errors in computer vision as it is difficult to detect objects that are under the influence of shadows. Shadow

More information

Spam Filtering Using Visual Features

Spam Filtering Using Visual Features Spam Filtering Using Visual Features Sirnam Swetha Computer Science Engineering sirnam.swetha@research.iiit.ac.in Sharvani Chandu Electronics and Communication Engineering sharvani.chandu@students.iiit.ac.in

More information

Sampling and Reconstruction

Sampling and Reconstruction Sampling and Reconstruction Sampling and Reconstruction Sampling and Spatial Resolution Spatial Aliasing Problem: Spatial aliasing is insufficient sampling of data along the space axis, which occurs because

More information

Novel Lossy Compression Algorithms with Stacked Autoencoders

Novel Lossy Compression Algorithms with Stacked Autoencoders Novel Lossy Compression Algorithms with Stacked Autoencoders Anand Atreya and Daniel O Shea {aatreya, djoshea}@stanford.edu 11 December 2009 1. Introduction 1.1. Lossy compression Lossy compression is

More information

Level lines based disocclusion

Level lines based disocclusion Level lines based disocclusion Simon Masnou Jean-Michel Morel CEREMADE CMLA Université Paris-IX Dauphine Ecole Normale Supérieure de Cachan 75775 Paris Cedex 16, France 94235 Cachan Cedex, France Abstract

More information