Inpainting Problem Based on Evolutionary Algorithms


Ministry of Higher Education & Scientific Research
University of Baghdad
College of Science
Department of Computer Science

Inpainting Problem Based on Evolutionary Algorithms

A Thesis Submitted to the Department of Computer Science, College of Science, University of Baghdad in Partial Fulfillment of the Requirements for the Degree of Master of Science in Computer Science

By
Zayneb Raid Ahmed Al_Rubaie
B.Sc. University of Baghdad

Supervised by
Dr. Mayada Faisl Abdul Halim

November Shawal

Supervisor Certification

I certify that this thesis was prepared under my supervision at the Department of Computer Science, College of Science, University of Baghdad, by Zayneb Raid, in partial fulfillment of the requirements for the degree of Master of Science in Computer Science.

Signature:
Name: Dr. Mayada F. Abdul Halim
Title: Assistant Professor
Date: / /2005

Certification of the Head of the Department

In view of the available recommendation, I forward this thesis for debate by the examination committee.

Signature:
Name: Makia K. Hamad
Title: Assistant Professor
Date: / /2005

Examining Committee Certification

We certify that we have read this thesis and, as an examining committee, examined the student in its content and in what is related to it, and that in our opinion it meets the standard of a thesis for the degree of Master of Science in Computer Science.

Signature:
Name: Dr. Saleh Mahdi Ali
Title: Professor (Chairman)
Date: / /2005

Signature:
Name: Dr. Loay K. Abood
Title: Assistant Professor (Member)
Date: / /2005

Signature:
Name: Dr. Sawsan Kamal Al_Ani
Title: Lecturer (Member)
Date: / /2005

Approved by the Council of the College of Science

Signature:
Name: Prof. A.M. Taleb
Title: Dean of the College of Science, University of Baghdad
Date: / /2005

Acknowledgement

I am indebted to ALLAH for keeping my faith that I could achieve this project successfully. I am grateful to my supervisor, Dr. Mayada Faisl Abdul Halim, for her continuous support and for her care throughout the period of this research. I wish to thank Assistant Professor Makia K. Hamad, the head of the Department of Computer Science, and all the staff and employees, who have always been willing to provide assistance, give advice, and be friendly. I wish to thank all my friends, who always encouraged me and brought a smile whenever I felt depressed. Above all, my deepest gratitude and love go to my family, especially to my father and mother, who have made many sacrifices for me to accomplish this work. Special thanks also to my grandmother for her prayers for me.

Zayneb

Table of Contents

Chapter 1: An Overview
1.1 Introduction
1.2 The Fundamentals of Inpainting
1.3 The Fundamentals of Digital Inpainting
1.4 Motivation
1.5 Thesis Layout

Chapter 2: Disocclusion and Constrained Texture Synthesis
2.1 Introduction
2.2 Literature Survey
    Disocclusion and Digital Inpainting Approaches
    Texture Synthesis Approaches
    Combined Inpainting and Texture Synthesis Approaches

Chapter 3: Evolutionary Algorithm for Filling in Missing Parts
3.1 Introduction
    Evolutionary Algorithm: An Overview
    Outline of the Proposed Evolutionary Algorithm
    Representation (Definition of Individuals)
    Evaluation Function
    Population and Initialization
    Parent Selection Mechanism
    Asexual Reproduction Operator
    Replacement Mechanism
    Termination Criteria

Chapter 4: Results
4.1 Introduction
    Asexual Evolutionary Inpainting Software
    The Selection of the Region
    Applying the Asexual Inpainting Evolutionary Algorithm
    Test Cases
    Removing Small Arbitrary Regions
    Removing Large Arbitrary Regions
    More Results

Chapter 5: Conclusions and Future Work
5.1 Conclusion
5.2 Future Work

References

Dedication

To My Beloved Father
To The Light in The Darkness... My Mother
To My Dearest Grandmother
To All My Family

Zayneb

Abstract

Inpainting, the technique of modifying an image in a visually undetectable form, is as ancient as art itself. Digital inpainting performs inpainting digitally through image processing. The goals and applications of digital inpainting are numerous, ranging from the restoration of damaged paintings and photographs to the removal or replacement of selected objects. In this thesis, a new algorithm is introduced for solving the inpainting problem. The present algorithm utilizes the search capabilities of evolutionary algorithms (EAs) for finding the appropriate pixels to inpaint large as well as small selected regions. The main search operator in the proposed algorithm is an asexual reproduction operator that perturbs evolutionary individuals and offers diversity in the population. After the region to be inpainted is selected manually, the algorithm automatically fills in the selected region from the promising EA pixels (those located around the missing region). The proposed evolutionary algorithm is applied iteratively to fill in the selected region in raster scan order (from top to bottom, left to right). Many experiments have been performed to test the applicability of the proposed evolutionary algorithm, and the results are compared with several current state-of-the-art inpainting algorithms. Among the 29 images (with small or large removal/replacement regions) used for filling in selected regions, subjective evaluation showed that 25 images were reconstructed successfully; in other words, the success rate was 86%.

Chapter 1
An Overview

1.1 Introduction

Today, digital photo cameras have established themselves on both the consumer and professional markets. Apart from the immediate availability of photos for viewing and/or electronic transfer to an editorial office, digital cameras have the big advantage of producing electronic images that can easily be stored and copied without loss of quality for decades to come. Although these advantages may sound great, one has to consider that the number of analog cameras sold worldwide each year is still a multiple of the number of corresponding digital camera sales: the quality and resolution of analog images is still hard to achieve even for high-end (and high-priced) digital cameras [1]. As a result, the amount of analog images that have to be digitized in order to be preserved is still growing. In addition, many photographs from the pre-digital era still need to be digitized to prevent them from decay. Unfortunately, this material often exhibits defects such as scratches or blotches. Equally disturbing artefacts are, for instance, subtitles, logos, and physical objects such as wires and microphones, which should be removed from the image. A region of an image is defined as a sub-image of that image. The process of manipulating an image or a region of an image is referred to as image retouching [2]. Image retouching ranges from the

restoration of paintings, scratched photographs or films to the removal or replacement of arbitrary objects in images. Retouching can furthermore be used to create special effects (e.g., in movies). Ultimately, retouching should be carried out in such a way that, when viewing the end result, it is impossible, or at least very hard, for an arbitrary observer to determine that the image has been manipulated or altered. Artists and conservators use the analogous term inpainting. Inpainting is a term used in the conservation of paintings or objects for the toning or imitative matching of an area of paint loss, without obscuring any original paint [3]. Figure (1.1) shows an infamous inpainting example, "The Commissar Vanishes": Nikolai Yezhov, the man standing to the right next to Stalin, was conveniently removed from the photograph after being shot. The manual inpainting was presumably performed by Stalin's marionettes.

Figure 1.1: Stalin with and without Nikolai Yezhov [4] (a) Before manual inpainting (b) After manual inpainting

Inpainting can also be done digitally by using image-editing software. Virtually all inpainting cases require some kind of interaction by the user. Often the user needs to put in much effort in order to get a good result.

Digital inpainting is a term introduced by Bertalmio et al. in 2000 [5]. It refers to performing inpainting digitally through image processing, thereby automating the process and reducing the interaction required by the user. Ultimately, the only interaction required by the user is the selection of the region of the image to be inpainted. Figure (1.2) shows a digitized scratched photograph before and after digital inpainting. The digital inpainting was performed using the algorithm of Bertalmio et al. [5].

Figure 1.2: A digitized scratched photograph [5] (a) Before digital inpainting (b) After digital inpainting

1.2 The Fundamentals of Inpainting

The manual work of inpainting is most often a very time-consuming process. The number of inpainting approaches is most likely equal to the number of inpainting artists [5]. The process has been applied (presumably) since the beginning of time, or at least since the first deteriorated image and the notion of restoring it appeared [3]. Bertalmio et al., the authors of "Image Inpainting", have been in contact with professional inpainting artists, and it has been confirmed that inpainting is a very subjective procedure: there is no single "right" way to solve the problem. But the basic process of inpainting is as follows:

1. The global picture determines how to fill in the gap, the purpose of inpainting being to restore the unity of the work.
2. The structure of the surroundings of the gap is continued into the gap; contour lines are drawn via the prolongation of those arriving at the boundary of the gap.
3. The different regions inside the gap, as defined by the contour lines, are filled with colors matching those of the boundary of the gap.
4. The small details are painted (e.g., little white spots on an otherwise uniformly blue sky): in other words, texture is added.

The human brain, in cooperation with the human eye, is in some sense able to fill in gaps and remove occlusions in all kinds of images [3]. For example, imagine a field of view covering a natural scene consisting of a red cottage with white cottage corners and some trees and bushes or other objects in the foreground partly covering the cottage. Despite the fact that it is not possible to see the entire cottage, one is usually sure that the cottage really is a complete cottage and is not only made up of the parts that are visible. This process is referred to as amodal completion [6][7]. Amodal completion means that the human brain is in some sense performing interpolation of the missing parts. This interpolation is done according to some geometric conditions [3]. When observing Figure (1.3), most observers see two black squares lying below or behind the occluding white parts. Figure (1.4) shows another example of amodal completion.

Figure 1.3: Disocclusion via amodal completion [6]

Figure 1.4: The Kanizsa square [8]

Clearly, inpainting and the removal of occlusions depend on how the boundaries and the edges of the objects are prolonged. Restored edges need to be as smooth and straight as possible, and restored shapes as convex as possible [6]. Most of the information in an image can be classified into two categories: structure and texture [9]. Ideally, a purely structural image would consist of disjoint sets of constant intensity. Here the important information about the sets is their shapes, i.e., the edges of the image. Besides edge information, the human visual system is also sensitive to texture. Texture is an intuitive concept. Every child knows that leopards have spots but tigers have stripes, that curly hair looks different from straight hair, etc. In all these examples there are variations of intensity and color which form certain repeated patterns called texels [10]. The patterns can be the result of physical surface properties such as roughness or oriented strands, which often have a tactile quality, or they could be the result of reflectance differences such as the color on a surface. Even though the concept of texture is intuitive (texture was

recognized when seen), a precise definition of texture has proven difficult to formulate [10][11]. Despite the lack of a universally accepted definition of texture, all researchers agree on two points: (1) within a texture there is significant variation in intensity levels between nearby pixels; that is, at the limit of resolution, there is non-homogeneity; and (2) texture is a homogeneous property at some spatial scale larger than the resolution of the image [10]. In other words, texture can be defined as an image that exhibits two properties: locality and stationarity [12]. Here, locality means that individual pixels are related only to a small set of neighbors, while stationarity means that different regions look similar. Figure (1.5) shows how textures differ from general images. To clarify the difference between a texture and a general image, a movable window at two different positions is drawn as black squares in Figure (1.5) (a) and (b). As shown, the different regions of a texture are always perceived to be similar, which is not the case for a general image. In addition, each pixel in Figure (1.5) (a) is only related to a small set of neighboring pixels.
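Stationarity can be illustrated with a toy experiment: summary statistics of two windows taken from a texture agree, while those of a general image depend on where the window sits. The following sketch (a simplification of my own using window means on synthetic data, not an example from the thesis) shows the idea.

```python
def window_stats(img, x0, y0, size):
    """Mean intensity of a size x size window; a crude summary statistic."""
    vals = [img[y][x] for y in range(y0, y0 + size)
                      for x in range(x0, x0 + size)]
    return sum(vals) / len(vals)

# A checkerboard "texture": any two windows of the same size look alike.
texture = [[(x + y) % 2 for x in range(8)] for y in range(8)]
# A gradient "general image": window statistics depend on position.
gradient = [[x for x in range(8)] for y in range(8)]

t_diff = abs(window_stats(texture, 0, 0, 4) - window_stats(texture, 4, 4, 4))
g_diff = abs(window_stats(gradient, 0, 0, 4) - window_stats(gradient, 4, 4, 4))
```

For the checkerboard the two window means coincide (stationarity), while for the gradient image they differ by the full offset between the two window positions.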

Figure 1.5: General image vs. texture [12] (a) is a texture; (b) is a general image; (a1, a2) two different portions in (a); (b1, b2) two different portions in (b); (c) texture with the a1 and a2 portions swapped; (d) image with the b1 and b2 portions swapped

Textures are in general divided into two categories: deterministic and stochastic [13]. A deterministic texture is characterized by a set of texels and a placement rule. For example, a brick wall texture is generated by tiling up bricks (texels) in a layered fashion (the placement rule). A stochastic texture (e.g., a sand beach, granite, bark, stone, ground, water, wood), on the other hand, does not have easily identifiable texels. Many real-world textures are semi-structured, as they have some mixture of these two characteristics (e.g., woven fabric, wood grain, and plowed fields). See Figure (1.6).

Figure 1.6: Types of textures [28] (a), (b) and (c) are deterministic (structured) textures; (d), (e) and (f) are stochastic textures; (g), (h) and (i) are semi-structured textures.

1.3 The Fundamentals of Digital Inpainting

Digital inpainting refers, as already mentioned, to inpainting through some sort of image processing. The digital inpainting process can be looked upon as a linear or non-linear transformation [14][15][2]. Let I0 be the original image and I be the transformed image (i.e., the digitally inpainted image).

This is also how the concept of image processing is described in general [16][2]. The image processor can be looked upon as a function f as follows:

f : I0 → I   (1.1)

that is,

I = f(I0)   (1.2)

Now, let Ω denote the set of pixels (the region) of the image I0 to be inpainted, and let ∂Ω denote the pixels surrounding Ω, as shown in Figure (1.7), so that:

Ω ⊂ I0 = {the set of pixels of I0 to be inpainted}, and
∂Ω ⊂ I0 = {the boundary pixels of Ω}.

Figure 1.7: The image I0, the region Ω to be inpainted and its boundary ∂Ω

Note that the case Ω = I0 is in theory possible. However, the practical solution for this case can only be considered trivial. For how would one inpaint a complete image without any information whatsoever regarding its overall contents? Inpainting Ω with an arbitrary color or with a random noise pattern are two examples of trivial solutions. Thus, further discussion of this case is not needed [3]. The following concise pseudo-code describes the general solution to the problem [3]:

1. SPECIFY Ω
2. ∂Ω = THE BOUNDARY OF Ω
3. INITIALIZE Ω
4. FOR ALL PIXELS P(X, Y) IN Ω
5.     INPAINT P(X, Y) BASED ON INFORMATION IN ∂Ω

The pseudo-code works as follows: the first step lets the user specify the region to be inpainted. The second computes the boundary of the region. The third initializes the region by, for example, clearing existing color information. Finally, the for-loop simply inpaints the region based on information from its surroundings.

Three main categories of work can be found in the literature on filling in missing parts of images by digital inpainting [5]. The first deals with the restoration of films, the second is related to disocclusion, and the third is related to constrained texture synthesis. This thesis deals only with works related to disocclusion and constrained texture synthesis.
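The five pseudo-code steps above can be turned into a runnable sketch. The fill rule below (mean of the already-known 4-neighbours) is only a placeholder of my own for step 5, not the thesis algorithm; for brevity, an image is stored as a dict from (x, y) to intensity.

```python
def boundary(omega, image):
    """Step 2: the boundary dOmega is every known pixel 4-adjacent to Omega."""
    d_omega = set()
    for (x, y) in omega:
        for n in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if n in image and n not in omega:
                d_omega.add(n)
    return d_omega

def naive_inpaint(image, omega):
    """Steps 3-5: clear Omega, then fill it in raster order using the mean
    of the already-known 4-neighbours (a stand-in for a real fill rule)."""
    img = dict(image)
    for p in omega:                      # step 3: initialize Omega
        img[p] = None
    for (x, y) in sorted(omega, key=lambda p: (p[1], p[0])):  # raster order
        known = [img[n] for n in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
                 if n in img and img[n] is not None]
        img[(x, y)] = sum(known) / len(known) if known else 0
    return img
```

On a constant image the placeholder rule recovers the region exactly; any smarter rule (diffusion, texture synthesis, or an evolutionary operator) can be dropped into the loop body without changing the surrounding scaffold.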

1.5 Thesis Layout

The rest of this thesis is organized as follows. Chapter 2 gives an overview of texture synthesis, with emphasis on disocclusion and constrained texture synthesis work. Chapter 3 illustrates the design steps of the proposed evolutionary algorithm. Experimental results are presented in Chapter 4, which tests the performance of the suggested evolutionary algorithm in filling in missing and selected regions; comparisons are made between the proposed evolutionary algorithm and many different inpainting algorithms. Finally, Chapter 5 draws conclusions and suggests avenues for further research.

Chapter 2
Disocclusion and Constrained Texture Synthesis

2.1 Introduction

Throughout the last decade, there has been an increasing trend of traditional and non-traditional image-processing tasks being performed by automated computer systems rather than manually by human beings. One such area which has gained focus during the last few years is the development of automated techniques for filling in missing parts of an image. The manual equivalents of such techniques have been well known to artists for a long time, and have successfully been used to restore missing parts of paintings and photographs, often with hardly any noticeable artefacts [17]. In the past, the digital inpainting problem has been addressed by two classes of algorithms: (1) digital inpainting techniques for filling in small image gaps, and (2) texture synthesis algorithms for generating large image regions from sample textures. The former focus on linear structures which can be thought of as one-dimensional patterns, such as lines and object contours [18]; the latter work well for textures: repeating two-dimensional patterns with some stochasticity. Recently, researchers have tried to combine the advantages of these two approaches into an efficient algorithm that can handle structured missing regions as well as textured missing regions. In the following subsections, work related to these approaches is presented.

2.2 Literature Survey

Inpainting can be approached in three different ways: disocclusion and digital inpainting algorithms, texture synthesis algorithms, and combined inpainting and texture synthesis algorithms. These three approaches are presented in the following subsections.

2.2.1 Disocclusion and Digital Inpainting Approaches

Image inpainting techniques are, in a way, complementary to texture synthesis. Pioneered by Bertalmio et al. [5], approaches have been presented that propagate information from the surroundings of masked areas into their interior. Unlike texture synthesis, image inpainting handles color/intensity gradients correctly, but fails to reconstruct areas that should contain textures with fine detail [1]. Disocclusion, the removal of occluding objects in digital images, is a closely related area. Digital inpainting is in fact analogous to disocclusion, since Ω may be viewed as an occluding object [19]. The following paragraphs review some digital inpainting algorithms.

In 2000 and 2001, Bertalmio et al. [5][19] described two methods for solving the digital inpainting problem. Their succeeding method is virtually the same as their first one; the differences are in the underlying mathematics [3]. The basic idea of the pioneering inpainting algorithm is as follows: for each isophote (a line of equal gray values) arriving at ∂Ω, the method connects the isophotes having the same intensity and orientation. The connection is performed using geodesic curves (i.e., lines following the shortest possible path between two points), inferring that the connecting lines never intersect each other. A straightforward property of isophotes is that an even number of isophotes always arrives at ∂Ω. By taking advantage of these

two facts, the solution of the problem of matching and connecting the isophote pairs is quite trivial, and dynamic programming may be used to compute the optimal set [6]. Each color channel (red, green and blue) of the image is treated as a separate gray-level image. To estimate the variation of the intensity, a discrete two-dimensional Laplacian is used. This estimate is propagated in the isophote direction in order to maintain smooth intensity changes. To conclude the method: in order to get a visually pleasing result it is important to propagate both the geometry (the gradient direction) and the photometry (the intensity) [3].

Oliveira et al. in 2001 presented a method that is specifically designed for inpainting relatively small regions [20]. When focusing on small regions, simpler models can be used. Instead of relying on any specific mathematical geometrical theory, this method takes into account a constraint of the sampling theorem [21][16]. Advantage is taken of the fact that the sampling theorem limits the spatial frequency content that may be automatically restored [20]. This in turn implies that only an approximate solution of the inpainting problem is possible, i.e., an exact reconstruction of Ω is not possible [3]. The inpainting is performed by repeatedly convolving Ω using isotropic diffusion (i.e., using the linear heat equation). The diffusion process propagates information from ∂Ω into Ω. The number of iterations may be predefined or determined by looking at the value of each pixel belonging to Ω: the process is stopped if the changes of the values lie within a certain threshold [3]. When using the weighted sum of a pixel's neighborhood, noticeable blurring may be introduced. It is easy to realize that this implies that edges may be broken. The blurring is especially noticeable when the neighborhood, or parts of the neighborhood, of a pixel is made up of significant contrast

changes. To solve this problem, a diffusion barrier may be introduced to stop the algorithm from performing inpainting where the contrast changes are significant.

2.2.2 Texture Synthesis Approaches

Texture synthesis has a variety of applications in computer vision, graphics, and image processing. Texture images usually come from scanned photographs, and the available photographs may be too small to cover the entire object surface. In this situation, a simple tiling will introduce unacceptable artefacts in the form of visible repetition and seams. Texture synthesis solves this problem by generating textures of the desired sizes. Other applications of texture synthesis include various image processing tasks such as occlusion fill-in and image/video compression [22].

The problem of texture synthesis can be formulated as follows: let us define texture as some visual pattern on an infinite 2-D plane which, at some scale, has a stationary distribution. Given a finite sample from some texture image, the goal is to synthesize other samples from the same texture. Without additional assumptions this problem is clearly ill-posed, since a given texture sample could have been drawn from an infinite number of different textures. The usual assumption is that the sample is large enough that it somehow captures the stationarity of the texture and that the (approximate) scale of the texture elements (i.e., texels) is known [23].

In general, there are two basic approaches to texture synthesis [24][25][26]: patch-based texture synthesis and pixel-based texture synthesis. Digital inpainting researchers utilize both approaches. The following subsections clarify these approaches.

1. Pixel-based Texture Synthesis

Some texture synthesis approaches generate the output texture pixel by pixel using Markov Random Field theory [23]. The large texture is produced in scan-line order, where each pixel is set after comparing its neighborhood to all similarly shaped neighborhoods in the sample texture, which is both stationary and local. This comparison leads to a distance function, which corresponds to the probability needed to choose the best-fitting pixel (the most similar one) [26]. Formally, as in patch-based methods, I0 represents an image that is synthesized and Ireal is an infinite texture from which pixels are sampled. Let P ∈ I0 be a pixel and w(p) ⊂ I0 be a neighborhood around P. The approach consists in estimating all sources of P in Ireal. This is done by considering the stochastic dependencies in the Markov Random Field (MRF) on the basis of comparing the pixel neighborhoods [24]. The size and shape of the neighborhood w(p) are the main parameters that determine the quality of the synthesized texture. The size should be on the scale of the largest regular structure that should be synthesized, in order to sufficiently capture the stochastic constraints of the texture [24]. The shape of the neighborhood can be causal or non-causal, see Figure (2.1) [11]. A causal neighborhood (also known as an L-shaped neighborhood) can only contain those pixels preceding the current output pixel in the raster scan ordering, while a non-causal neighborhood contains the pixels surrounding the current output pixel. Two studies are used here to clarify the use of causal and non-causal neighborhoods, as follows.

Figure 2.1: Causality of the neighborhood shape [11] (a) Causal neighborhood. (b) Non-causal neighborhood.

Efros and Leung [23] initialize the output texture with a 3×3 patch (seed), randomly taken from the input texture I0. Processing is done in layers outward from the already synthesized pixels and/or from the seed. The neighborhood is modeled as a square window. To match the causality criterion, only already processed pixels within this window are considered for the distance calculation, see Figure (2.2).

Figure 2.2: Pixel-based synthesis according to Efros and Leung [23] (a) Pixel p with neighborhood w(p) of size we × we (in this example we = 5), formed as a square window. (b) The output image I is initialized with a seed of 3×3 pixels; afterwards the synthesis is started. (c) The output texture is grown in layers from the seed.

Another approach is used by Wei and Levoy [27]. First, the output image is totally initialized with white noise. Then the output image is processed in raster scan order (from top to bottom, left to right). For the processing, an L-shaped neighborhood is used, which ensures in general

causality, apart from the edge regions. Because of the initialization with white noise, the neighborhood initially contains noise, which affects the randomness of the output texture. Edges are handled in the following manner: in I0, only those neighborhoods w(p) that lie completely inside I0 are considered. To guarantee causality and tileability, the noise-initialized output image I is expanded toroidally. Only the noise in the last two rows and columns is used; all other pixels are overwritten in the following synthesis process before they are used [24]. For clarity, unused noise pixels are painted black. See Figure (2.3).

Figure 2.3: Pixel-based synthesis according to Wei and Levoy [24]. (a) Pixel P with neighborhood w(p), we = 5, he = 3. (b) Synthesizing a middle pixel. (c) Start of the synthesis process.
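At the heart of both pixel-based methods is the same operation: compare the (partially known) neighbourhood of the current output pixel against candidate neighbourhoods from the sample texture and copy the centre of the closest one. A minimal sketch of that distance search follows; the neighbourhood is flattened to a list with None marking not-yet-synthesized entries, and the function names are illustrative, not taken from the cited papers.

```python
def ssd(target, candidate):
    """Sum-of-squared-differences over the known entries of the target
    neighbourhood; None entries (pixels not yet synthesized) are skipped."""
    return sum((t - c) ** 2 for t, c in zip(target, candidate)
               if t is not None)

def best_fitting_pixel(target, candidates):
    """candidates is a list of (neighbourhood, centre_value) pairs sampled
    from the input texture; return the centre of the closest neighbourhood."""
    return min(candidates, key=lambda nc: ssd(target, nc[0]))[1]
```

In Efros and Leung's formulation the strict minimum is relaxed to a random choice among all candidates within a tolerance of the best distance, which keeps the synthesis stochastic.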

2. Patch-based Texture Synthesis

Currently, the patch-based approach is the most effective method for generating a texture of the desired size from a given sample texture, producing results that are sufficiently different from the given sample yet perceived by humans to be the same texture. The main idea of the patch-based approach is to synthesize a new texture by:
1. Taking different smaller parts of the given texture, i.e., patches, at random.
2. Globally arranging the patches under certain constraints.
3. Tiling them together in a consistent way.
Most patch-based texture synthesis algorithms perform all three steps for one patch after the previous patch is settled [28]. In the patch-based approach, the texture is modelled as an MRF, the brightness value of a pixel being highly correlated with the brightness values of its spatial neighbors. Let I0 be an image that is synthesized from an infinite texture Ireal. Let further R ⊂ I0 be a square patch of pixels with a neighborhood (seam) R(R) ⊂ I0 of width we modelled around R. Finally, let a block B = (R ∪ R(R)) ⊂ I0 of width wB be the combination of patch and seam. Figure (2.4) clarifies the patch, seam, and block definitions [24].

Figure 2.4: Patch-based sampling. Block B of size wB × wB consisting of patch R with surrounding neighborhood R(R) [24]

The patch size is the most critical parameter of the patch-based algorithm. The patch has to capture the statistical constraints of the input texture and transfer them to the output image. A smaller patch size means more randomness in the output image, and vice versa. The patch size should be big enough to capture the biggest regular structure in the texture, but it should not be too big, so that interaction between these structures is left over to the algorithm [24], see Figure (2.5).

Figure 2.5: Comparison of different patch sizes [24] (a) Input texture, grey scale, 8 bit/pixel; size of the characteristic bricks about (width height). (b) Synthesized image I, patch size (5 4): in general, the texture structure is not reproduced correctly. (c) Synthesized image I, patch size (25 4): horizontal structure is recognizable. (d) Synthesized image I, patch size (45 4): horizontal and vertical structures are reproduced. (e) Synthesized image I, patch size (100 4): structures are reproduced, but only little interaction between these structures is left over to the algorithm.

The seam size should also be big enough to capture statistical constraints across patch boundaries. A large we captures strong statistical constraints, which forces a natural transition of texture features across boundaries. The width of the seam is largely independent of the other parameters; depending on the texture, good results can be reached with a very small seam (e.g., for very smooth textures). The blending also has a large effect on the visual fidelity of the output image and must not be neglected. It is important not to make the seam too big, to avoid introduced errors and a loss of sharpness because of the blending, and to reduce the computational effort [24].

The patch-based algorithm uses texture patches of the input sample texture I0 as the building blocks for constructing the synthesized texture I. In each step, a patch Bk of the input sample texture I0 is pasted into the synthesized texture I. To avoid mismatching features across patch boundaries, Bk is carefully selected based on the patches already pasted in I, {B0, ..., Bk-1}. The texture patches are pasted in the order shown in Figure (2.6); for simplicity, square patches of a prescribed size are used [22].

Figure 2.6: Arrangement of blocks in the output image. The blocks are copied with their neighbourhood in such a manner that the seam regions overlap [24]
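The selection of Bk can be sketched as a search over candidate blocks for the one whose seam best matches the pixels already pasted into the output. This is an illustrative reduction of my own (1-D seam vectors, SSD cost), not the exact scheme of [22] or [24]:

```python
def seam_cost(pasted_seam, candidate_seam):
    """SSD between the overlap region already present in the output image
    and the corresponding seam pixels of a candidate block."""
    return sum((a - b) ** 2 for a, b in zip(pasted_seam, candidate_seam))

def pick_block(pasted_seam, blocks):
    """blocks is a list of dicts {'seam': [...], 'patch': ...} sampled from
    the input texture; return the block with the cheapest seam mismatch."""
    return min(blocks, key=lambda b: seam_cost(pasted_seam, b["seam"]))
```

After the best block is chosen, its seam is typically blended with the existing pixels rather than overwritten, which is the blending step whose strength the text warns about.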

3. Constrained Texture Synthesis

The basic recipe of any texture synthesis algorithm is as follows: for each output pixel, perform an exhaustive search in order to find the best match of an adaptive neighborhood, and replace the current output pixel with this best match. Usually a causal neighborhood is used. The idea may be extended to inpaint a region Ω. This is referred to as constrained texture synthesis. The synthesized region needs to look like its surroundings and it needs to blend seamlessly with the existing texture at Ω (i.e., the boundary between the synthesized and the original regions must be invisible) [3].

In 1997, Igehy and Pereira presented a method based on the pyramid-based texture analysis/synthesis algorithm [29]. The inputs to the pyramid-based texture analysis/synthesis algorithm are a target texture and a noise image (e.g., white noise, fractal noise). By the end of the algorithm, the noise will have been converted into a synthetic texture. First, Match-Histogram is used to force the intensity distribution of the noise to match the intensity distribution of the target. Then an analysis pyramid is constructed from the target texture. The pyramid representation can be chosen to capture features of various sizes (by using a Laplacian pyramid) as well as features of various orientations (by using a steerable pyramid). Then, the noise is modified in an iterative manner. At each iteration, a synthesis pyramid is constructed from the noise, and Match-Histogram is applied to each of the sub-bands of the synthesis and analysis pyramids. The modified synthesis pyramid is then collapsed back into the noise image, and a Match-Histogram is done between it and the target texture. After several iterations, the noise texture converges to have the same distribution of features as the target image; thus, a synthetic texture is created [29].
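The Match-Histogram step can be illustrated with a small rank-based histogram-matching routine. This is a sketch, not Igehy and Pereira's implementation; the function name is ours, and it operates on plain arrays rather than pyramid sub-bands.

```python
import numpy as np

def match_histogram(source, target):
    """Remap `source` intensities so their distribution matches `target`:
    the k-th smallest source value receives the k-th smallest target value."""
    s = source.ravel()
    order = np.argsort(s, kind="stable")        # rank every source pixel
    ranks = np.linspace(0, target.size - 1, s.size).round().astype(int)
    matched = np.empty_like(s)
    matched[order] = np.sort(target.ravel())[ranks]
    return matched.reshape(source.shape)
```

When source and target have the same number of pixels, the output has exactly the target's value histogram while preserving the spatial rank ordering of the source.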
The Igehy and Pereira method is based on pyramid-based texture synthesis, with the difference that the focus is on constrained texture synthesis. The algorithm is basically the same. However, in order to synthesize Ω a

composition step is added to the algorithm. The composition is done in such a way that blurring and aliasing are avoided [3].

In 2000, Wei presented another texture synthesis method [11]. Starting from an image containing white random noise, a new image is synthesized based on the original texture. A raster scan ordering (i.e., top to bottom and left to right) is performed over all output pixels in the noisy image. The noise is only used when generating the first few rows and is then ignored. The spatial neighborhood of each output pixel is compared to all possible neighborhoods in the original texture. The author draws an analogy with putting together a jigsaw puzzle, where the pieces are the individual pixels and the fit between these pieces is determined by the colors of the surrounding neighborhood pixels. The appearance of the synthesized image depends on the size and the shape of the neighborhood that is used. The neighborhood only contains the causal pixels with regard to the current output pixel. Single-resolution as well as multi-resolution versions may be used [3].

The method suffers from a few limitations. Global features such as perspective, lighting and shadow cannot be synthesized. However, this is typical of most texture synthesis algorithms, since it results from the use of a local neighborhood for each output pixel [3]. For the constrained texture synthesis approach the neighborhood needs to be non-causal and symmetric; otherwise a visible border may appear at the right and bottom parts of Ω. The raster scan synthesizing is replaced with a spiral synthesizing, i.e., filling in from the outside to the inside [3].

Later, in 2003, Drori, Cohen-Or and Yeshurun introduced a method that focuses on constrained texture synthesis. In the algorithm, Ω is represented by an inverse matte (as illustrated in Figure (2.7)) [30]. A smooth approximation of Ω is made. A confidence value is then calculated for the approximated pixels.
The confidence value is based on each pixel's proximity

to the known pixels. Ω is then synthesized in a clear and intuitive manner by using the remaining parts of the image as a training set. Adaptive image fragments are put together in Ω using an iterative scheme. The algorithm uses a multi-resolution image pyramid, iterating from coarse to fine scale. This is important in order to capture details at several scales; it is also computationally more efficient. A coarse scale corresponds to using a range of relatively large fragments. The size of these fragments gradually decreases at every scale, and finer details may then be added. Each scale is up-sampled by using bi-cubic interpolation [30].

Figure 2.7: An image and its inverse matte [30]. (a) The original image, the gray square being Ω. (b) The inverse matte of the image in (a).

A great disadvantage may be the computation time needed for the algorithm to run. The computation time depends on the size of the image (it is quadratic in the number of pixels) [30].

In 2004, Zhang, Xiao and Shah proposed a method for filling in stains and undesired objects covering significant portions of images [31]. The approach achieves completion in three steps. First, a spatial-range model is determined to establish the search order of the target patch. Second, a source patch is selected by measuring the adjusted appearance of the source patch against the target patch and restricting the search area to the neighborhood around the previous source patch. Third, a graph-cut patch updating algorithm is designed to ensure non-blurring updating.
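Returning to Wei's per-pixel scheme described above, the search for the best-matching causal neighborhood can be sketched as follows. This is a single-resolution grayscale sketch; the brute-force scan and the names are illustrative simplifications of the method in [11].

```python
import numpy as np

def best_match_pixel(texture, out, y, x, half=2):
    """Compare the causal (already-generated) neighborhood of output pixel
    (y, x) with every interior location of the input texture and return the
    colour whose neighborhood matches best (minimum SSD)."""
    # causal neighborhood: rows above, plus pixels to the left on this row
    offs = [(dy, dx) for dy in range(-half, 1) for dx in range(-half, half + 1)
            if (dy, dx) < (0, 0)]
    h, w = texture.shape
    best_val, best_err = texture[0, 0], np.inf
    for ty in range(half, h - half):
        for tx in range(half, w - half):
            err = sum((float(texture[ty + dy, tx + dx])
                       - float(out[y + dy, x + dx])) ** 2 for dy, dx in offs)
            if err < best_err:
                best_val, best_err = texture[ty, tx], err
    return best_val
```

For constrained synthesis, as noted above, the causal offsets would be replaced by a full symmetric neighborhood and the scan order by a spiral over Ω.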

Combined Inpainting and Texture Synthesis Approaches

A combination of texture synthesis and digital inpainting is perhaps the ultimate solution to get the best possible result for replacing Ω. The purpose is to simultaneously preserve both texture and structure. This section describes two approaches: Simultaneous Structure and Texture Image Inpainting, and Object Removal by Exemplar-Based Inpainting.

In 2003, the Simultaneous Structure and Texture Image Inpainting approach was proposed by Bertalmio et al. [32]. The idea of this approach is to split the original image into two different functions containing different characteristics. The first function represents the underlying image structure, while the second one contains information about texture (and noise). The image is split in such a way that the functions may be summed together in order to recreate the original image. For the function containing the underlying image structure an inpainting algorithm is used; a texture synthesis algorithm is used for the texture function. As the authors suggest, any arbitrary inpainting algorithm or texture synthesis algorithm may be used.

Also in 2003, the Object Removal by Exemplar-Based Inpainting approach was proposed by Criminisi, Pérez and Toyama [18]. This is another combined texture synthesis and inpainting algorithm. This kind of technique implies a computationally cheap and effective way of generating new texture. The texture generation is done by sampling and copying color values from the remaining parts of the image. Exemplar-based texture synthesis is well suited for preserving both texture and structure; thereby a separate mechanism for handling the isophotes is not needed. The algorithm is iterative and fills Ω by treating its boundary as the fill-front. The filling order is important: a best-first filling order based on every pixel's priority value is used.
The pixels are then filled in an order according to their priority values. The priority value is in turn based on a confidence term and a data term. The confidence term is updated at the end of each iteration.

The data term takes into account the isophote direction at the current pixel. The current Ω is filled with the best matching samples, where the best match is based on the Sum of Squared Differences (SSD) [18].

Chapter 3
Evolutionary Algorithm for Filling in Missing Parts

3.1 Introduction

In the 19th century, many people found evolution hard to accept, because it required one to believe that a process as directionless, haphazard and random as evolution could produce structures as complex and subtle as eyes, ears and wings. Every part of every animal seemed not only perfectly designed, but also perfectly designed to work with every other part [33]. These reasons make evolution interesting, but computer scientists have an additional one: it seems that all of this is due to the mindless execution of a few simple instructions many times over, which is exactly what computers are good for. For the simulation of evolution, computers seem perfect; if we are able to simulate the process of evolution in silico, then perhaps we can use computers to bring the problem-solving power of evolution to bear on problems of our own.

Evolutionary Algorithms (also EAs, evolutionary computation, artificial evolution) simulate the process of evolution on a digital computer: for each aspect of evolution in nature (the genome, reproduction, survival of the fittest, etc.) EAs have an artificial analogue. In each EA there is

a collection of structures representing individuals, a fitness function that determines the effectiveness of each individual, and functions that simulate the effects of reproduction and death [33].

Recently, EAs have been successfully applied to a variety of optimisation problems such as wire routing, scheduling, the travelling salesman problem, image processing, engineering design, parameter fitting, computer game playing, knapsack problems, and transportation problems [34].

This chapter is concerned with presenting an evolutionary algorithm for the inpainting problem. Before presenting the proposed evolutionary algorithm that is introduced to solve the inpainting problem, an overview of evolutionary algorithms is given in the following section.

3.2 Evolutionary Algorithms: An Overview

The common underlying idea behind all EA techniques is the same: given a population of individuals, environmental pressure causes natural selection (survival of the fittest), and this causes a rise in the fitness of the population. Given a quality function to be maximized, a set of candidate solutions, i.e., elements of the function's domain, is created randomly, and the quality function is applied as an abstract fitness measure (the higher the better). Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and/or mutation to them. Recombination is an operator applied to two or more selected candidates (the so-called parents) and results in one or more new candidates (the children). Mutation is applied to one candidate and results in one new candidate. Executing recombination and mutation leads to a set of new candidates (the offspring) that compete, based on their fitness (and possibly age), with the old ones for a place in the next generation. This process can be iterated until

a candidate with sufficient quality (a solution) is found or a previously set computational limit is reached [35].

In this process, there are two fundamental forces that form the basis of evolutionary systems. Variation operators (recombination and mutation) create the necessary diversity and thereby facilitate novelty, while selection acts as a force pushing quality. The combined application of variation and selection generally leads to improving fitness values in consecutive populations. It is easy (although somewhat misleading) to see such a process as if the evolution is optimizing, or at least approximating, by approaching optimal values more and more closely over its course. Alternatively, evolution is often seen as a process of adaptation. From this perspective, the fitness is not seen as an objective function to be optimized, but as an expression of environmental requirements. Matching these requirements more closely implies an increased viability, reflected in a higher number of offspring. The evolutionary process makes the population adapt to the environment better and better [35].

Note that many components of such an evolutionary process are stochastic. During selection, fitter individuals have a higher chance to be selected than less fit ones, but typically even the weak individuals have a chance to become a parent or to survive. For recombination of individuals, the choice of which pieces will be recombined is random. Similarly for mutation, the pieces that will be mutated within a candidate solution, and the new pieces replacing them, are chosen randomly. The general scheme of an EA as pseudo-code is given here, and Figure (3.1) shows its diagram [35].

BEGIN
  INITIALIZE population with random candidate solutions;
  EVALUATE each candidate;
  REPEAT UNTIL (TERMINATION CONDITION is satisfied) DO
    1. SELECT parents;
    2. RECOMBINE pairs of parents;
    3. MUTATE the resulting offspring;
    4. EVALUATE new candidates;
    5. SELECT individuals for the next generation;
END

Figure 3.1: The general scheme of an EA as a diagram [35]
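The scheme above can be sketched as a generic loop in Python, with the selection, recombination, mutation and replacement stages passed in as functions. This is a minimal sketch; the parameter names and the termination-by-budget choice are ours.

```python
import random

def evolve(init, fitness, select, recombine, mutate, replace,
           pop_size=25, generations=100):
    """Generic EA skeleton: initialize and evaluate a population, then
    repeatedly select parents, recombine, mutate, evaluate, and choose
    survivors until the generation budget is spent."""
    population = [init() for _ in range(pop_size)]
    for _ in range(generations):
        parents = select(population, fitness)
        offspring = [mutate(child) for child in recombine(parents)]
        population = replace(population, offspring, fitness)
    return max(population, key=fitness)
```

For example, with elitist truncation replacement and a small integer mutation, this skeleton hill-climbs toward the maximum of a simple fitness such as f(x) = -|x - 42|.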

Three models of EAs were developed independently (and almost simultaneously). Their main differences come from the representation of individuals, the operators they use, and in general from the way they implement the three mentioned stages: selection, recombination, and mutation. Table (3.1) lists the three paradigms in EAs¹ [36].

Table 3.1: Paradigms in EAs

Paradigm                      | Created by
Evolution Strategies (ESs)    | I. Rechenberg and H.-P. Schwefel
Genetic Algorithms (GAs)      | J. H. Holland
Evolutionary Programming (EP) | L. J. Fogel, A. J. Owens and M. J. Walsh

¹ Historically, the word evolutionary has been associated with algorithms that use selection and mutation alone, while the term genetic has been associated with algorithms that use selection, recombination and mutation.

ESs are often used to find optima of real-valued functions. They work mainly by applying changes to a candidate solution. The actual application of these changes is self-adaptive [37]. In nature, this is known as asexual reproduction. The first evolution strategies maintained only one candidate solution at a time, but very soon variants emerged maintaining a pool of two or more solutions.

John Holland independently developed GAs in the 1960s [38]. As in evolution strategies, random mutation and selection are used, but an important difference from previous approaches is that information from different individuals is combined using crossover or recombination operators. The biological analogue is sexual reproduction, and biological terminology is often used: two parent strings are combined to form a new child string. Genetic algorithms are often defined on binary strings, so in order to use standard crossover operators a binary encoding has to be found for each

problem. The operators only work on strings of single bits, not on more advanced structures like graphs or trees. Since there is interaction between candidate solutions by means of crossover, the population size and structure are very important in GAs, while they are less important in evolution strategies [37].

EP was described for the evolution of finite state machines to solve prediction tasks. The state transition tables of these machines were modified by uniform random mutations on the corresponding discrete, finite alphabet. Evaluation of fitness took place according to the number of symbols predicted correctly. Each machine in the parent population generated one offspring by means of mutation, and the best half of the parents and offspring were selected to survive [39].

3.3 Outline of the Proposed Evolutionary Algorithm

The evolutionary algorithm presented in this section can be used to restore missing parts of an image and also to remove undesired objects from an image. Hence, the region to be inpainted must be selected by the user, depending on his subjective selection. First, the user indicates the region, Ω, in an image, I0, to be selected. This step creates a mask that covers the selected region completely. Figure (3.2) depicts this process. I0(x, y) represents any pixel of the input image I0 at coordinates x and y. Let us assume that Pr(x1, y1) represents a pixel in the selected region W at coordinates x1 and y1, while Pf(x2, y2) represents a pixel at coordinates x2 and y2 in I0 excluding W, i.e., Pf(x2, y2) ∈ I0 and Pf(x2, y2) ∉ W. Moreover, let us assume that there is a rectangular region Ω̄ that surrounds the masked region W with some preselected width Bw. Three methods are used

for region selection: selection via rectangle, selection via freehand, and selection with the MSPaint brush software. These methods are described in detail in the next chapter.

At this point, the only user involvement required by the algorithm is to mask the region to be inpainted. Here we are concerned with how to fill in the regions to be inpainted, once they have been selected. Therefore, the input to the proposed algorithm is a natural or synthetic image containing an area which is damaged or selected by a mask. The output is an image where the damaged or selected region is filled with synthesized texture based on the information surrounding the selected region Ω. The proposed evolutionary algorithm, coined the asexual evolutionary inpainting algorithm (AEIA), is applied iteratively to fill in the selected region in raster scan order (from top to bottom, left to right). Figure (3.3) shows the outline of the proposed evolutionary algorithm as pseudo-code.

Figure 3.2: The region selection process. (a) The original image. (b) The region to be selected in close view. (c) The mask image.

Asexual Evolutionary Inpainting Algorithm
INPUT: Image I0 with masked region Ω
OUTPUT: Inpainted image I
BEGIN
  Store the pixels in Ω into an array;
  FOR each pixel in the array
    INITIALIZE (determine the initial chromosomes);
    EVALUATE (compute the fitness value of the initial chromosomes);
    REPEAT UNTIL (TERMINATION CONDITION is satisfied) DO
      SELECT (select parents from the population);
      MUTATE (mutate the selected parents);
      EVALUATE (compute the fitness value of the resulting offspring);
      REPLACE (survive the individuals for the next generation);
    Update the value of the current pixel inside the region to be filled with the fittest pixel from the last generation;
  NEXT
END

Figure 3.3: The proposed evolutionary algorithm's pseudo-code

As in ES and EP, AEIA changes the candidate solutions in a self-adaptive manner, i.e., only mutation is applied to the individuals. This is reflected in the name proposed for this type of evolutionary algorithm. A number of components, procedures and operators are used by AEIA:

1. Representation (definition of individuals).
2. Evaluation function (or fitness function).
3. Population.
4. Parent selection mechanism.
5. Mutation operator.
6. Replacement (or survivor selection mechanism).

Each of these components is described in detail in the following. Furthermore, to obtain a running algorithm, the initialization procedure, replacement technique and a termination condition are also defined.

Representation (Definition of Individuals)

The first step in defining an EA is to link the real world to the EA world, that is, to set up a bridge between the inpainting problem context and the problem solving space where evolution will take place. Each pixel within the region Ω has to be coloured with a suitable colour taken from outside Ω but inside the surrounding region Ω̄. The different possible colours form possible solutions for that pixel, and one among these colours can be considered the best one. These possible solutions form phenotypes; their encodings, the individuals within the EA, form the genotypes. For each pixel Pr(x, y) in W, the evolutionary processes are applied. Accordingly, the first design step is to specify a mapping from the

phenotypes onto a set of genotypes that are said to represent these phenotypes. Each target pixel Pr(x, y) in the region W has associated with it a causal L-shaped 12-neighborhood; treating the target pixel and these neighbors as a vector gives Lr, a vector of 13 pixels, as shown in Figure (3.4)(b). For the inpainting problem, the chromosomes are to be selected from the Ω̄ region. The coding of a chromosome represents a pixel Pf(xi, yi) and its L-shaped 12-neighborhood pixels (the genes). Each gene in the chromosome holds two integer values representing the coordinates of the corresponding pixel in the L-shape, together with the red, green, and blue colour components of that pixel. Figure (3.4)(e) depicts the vector representation Lf of a chromosome. After performing the AEIA operations, a solution (a good pixel Pf(xi, yi) from the image I0) is obtained by decoding the best genotype after termination. The obtained solution fills in the current target pixel Pr(x, y) of the selected region.

Figure 3.4: The chromosome representation. (a) The masked region Ω with its surrounding region Ω̄. (b) The causal L-shaped 12-neighborhood of Pr(x, y). (c) The causal L-shaped 12-neighborhood of Pf(xi, yi). (d) The Pr vector, Lr. (e) The Pf vector, Lf(i). Each entry of Lr and Lf(i) holds the pixel coordinates together with its R, G, B components.

Evaluation Function

The role of the evaluation function is to represent the requirements to adapt to. It forms the basis for selection, and thereby it facilitates improvements. More accurately, it defines what improvement means. From the problem solving perspective, it represents the task to be solved in the evolutionary context. Technically, it is a function or procedure that assigns a quality measure to genotypes. The evaluation function is commonly called the fitness function. This might cause a counterintuitive terminology if the original problem requires minimization, because fitness is usually associated with maximization. Mathematically, however, it is trivial to change minimization into maximization and vice versa.

In the inpainting problem, the objective is to fill in each target pixel Pr(x, y) in the selected region with a pixel from the Ω̄ region that has the minimum sum of squared differences (SSD) between the pixels of the L-shaped 12-neighborhoods. Formally speaking, the objective function of the i-th chromosome can be computed by evaluating the SSD between the target L-shaped vector and that chromosome's L-shaped vector as follows:

objective(i) = Σ_{j=1..12} [ (Lf(i,j).R - Lr(j).R)^2 + (Lf(i,j).G - Lr(j).G)^2 + (Lf(i,j).B - Lr(j).B)^2 ]    ... (3.1)

where
  Lf(i, j) is the j-th pixel in the L-shaped neighborhood of Pf(xi, yi),
  Lr(j) is the j-th pixel in the L-shaped neighborhood of the target Pr(x, y),
  R, G, B are the red, green and blue channels.

The fitness function may then be calculated as:

fitness(i) = 0                 if objective(i) = 0
fitness(i) = 1 / objective(i)  otherwise                 ... (3.2)

Note that one fitness value does not necessarily imply only one phenotype, and in turn one phenotype does not necessarily imply only one genotype. The reverse, however, is true: one genotype implies exactly one phenotype and fitness value.

Population and Initialization

The role of the population is to hold (the representation of) possible solutions. A population is a multiset² of genotypes. The population forms the unit of evolution. Individuals are static objects that do not change or adapt; it is the population that does. Given a representation, defining a population can be as simple as specifying how many individuals are in it, that is, setting the population size. In almost all EA applications the population size is constant, not changing during the evolutionary search. In AEIA the population size is constant as well; it is restricted to twenty-five individuals.

In most EA applications, the initialization is kept simple: the population of chromosomes is created randomly by generating the required number of individuals using a random number generator that distributes uniformly over the desired range. Other EAs seed the initial population with some individuals that are known to lie in the surrounding area of the best solution. This approach is only applicable if the nature of the problem is well understood beforehand or if the EA is used in conjunction with a knowledge-based system. Either way, a key idea is that the EA searches from a population, not from a single point.

² A multiset is a set where multiple copies of an element are possible [35]
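Under an assumed chromosome layout of a 13 x 5 array holding (x, y, R, G, B) per row, with row 0 for the pixel itself and rows 1 to 12 for the L-shaped neighbors, Eqs. (3.1) and (3.2) can be computed as follows. The array layout is our assumption for illustration; the equations themselves are from the text above.

```python
import numpy as np

def objective(L_f, L_r):
    """Eq. (3.1): sum of squared R, G, B differences over the 12 L-shaped
    neighbors (rows 1..12, columns 2..4 of the 13x5 [x, y, R, G, B] arrays)."""
    d = L_f[1:, 2:].astype(float) - L_r[1:, 2:].astype(float)
    return float((d ** 2).sum())

def fitness(obj):
    """Eq. (3.2): reciprocal of the objective; 0 when the objective is 0."""
    return 0.0 if obj == 0 else 1.0 / obj
```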

In general, the population in AEIA is initialized with individuals from the area surrounding the selected region, which may contain promising solutions. For each pixel Pr(x, y) in the selected region, the initial population is created randomly and consists of six groups of individuals selected from the Ω̄ region. The first and second groups of individuals are chosen from the left and right sides of a rectangular region intersecting the horizontal line through the pixel Pr(x, y), respectively. The third and fourth groups of individuals are chosen from the top and bottom sides of a rectangular region intersecting the vertical line through Pr(x, y). The reason for selecting these four groups can be traced back to the fact that these regions may contain promising solutions for the current pixel to be inpainted. The fifth group is chosen randomly from Ω̄, depending on the boundary width Bw, which creates a rectangular border surrounding the selected region. The sixth group consists of four chromosomes created from the chromosome of the previously inpainted pixel: the original position of the previously inpainted pixel is shifted one pixel to the left, right, up and down to create these four offspring chromosomes. Figure (3.5) presents a clarifying example of the initialization procedure.

Figure 3.5: The initialization procedure (left: first group; right: second group; up: third group; down: fourth group; random: fifth group; previous pixel: sixth group)
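The six-group initialization can be sketched as below, with each candidate individual represented only by its source position. The group sizes, the strip geometry, and the function names are illustrative assumptions, not the thesis's exact parameters (the thesis restricts the whole population to twenty-five individuals).

```python
import random

def init_population(img_w, img_h, target, prev, bw=10, per_group=4):
    """Six-group initialization sketch: positions drawn from strips to the
    left/right/above/below the target pixel, a random group from the
    Bw-wide border, and four one-pixel shifts of the previous solution."""
    x, y = target
    clamp = lambda px, py: (min(max(px, 0), img_w - 1),
                            min(max(py, 0), img_h - 1))
    pop = []
    for _ in range(per_group):
        pop.append(clamp(random.randrange(0, bw), y))              # left
        pop.append(clamp(random.randrange(img_w - bw, img_w), y))  # right
        pop.append(clamp(x, random.randrange(0, bw)))              # top
        pop.append(clamp(x, random.randrange(img_h - bw, img_h)))  # bottom
        pop.append(clamp(random.randrange(img_w),
                         random.randrange(img_h)))                 # random
    px, py = prev                                                  # sixth group
    pop += [clamp(px - 1, py), clamp(px + 1, py),
            clamp(px, py - 1), clamp(px, py + 1)]
    return pop
```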

Parent Selection Mechanism

The role of parent selection, or mating selection, is to distinguish among individuals based on their quality and, in particular, to allow the better individuals to become the parents of the next generation. An individual is a parent if it has been selected to undergo variation in order to create offspring. Together with the replacement mechanism, parent selection is responsible for pushing quality improvements.

In AEIA, tournament selection is used. Recent research has shown tournament selection to be the preferred method of evolutionary selection because it keeps a constant pressure on the population to improve itself [33]. In general, two individuals are chosen at random from the population and their fitness values are compared; the one with the better fitness is selected. On average, the best individuals get two copies, the median individuals get one copy, and the worst individuals get no copy at all.

Asexual Reproduction Operator

In EAs, the asexual reproduction operator is also known as mutation. Mutation is a unary variation operator: it is applied to one genotype and delivers a (slightly) modified mutant, its child or offspring. A mutation operator is always stochastic: its output (the child) depends on the outcomes of a series of random choices. The mutation operator in the proposed EA acts as in EP; it is the one and only variation operator doing the whole search work. It is worth noting that the mutation operator forms the evolutionary implementation of the elementary steps within the search space. Generating a child amounts to stepping to a new point in this space. The simplest way to satisfy this condition is to allow the mutation operator to jump everywhere.

The mutation in AEIA is applied to all selected parents. For each
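The binary tournament described above can be sketched as follows (a minimal sketch; the names and the fitness-callback interface are ours):

```python
import random

def tournament_select(population, fitness, n_parents):
    """Binary tournament: pick two individuals at random and keep the
    fitter one; repeat until enough parents have been chosen."""
    parents = []
    for _ in range(n_parents):
        a, b = random.sample(population, 2)
        parents.append(a if fitness(a) >= fitness(b) else b)
    return parents
```

Because each winner is the better of two random draws, the selected set is biased toward fitter individuals while still giving weak individuals a nonzero chance, exactly the constant selection pressure noted above.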

parent, its L-shape chromosome is shifted up, down, left, and right according to a jump size, Js, which is created randomly. Thus, each mutated parent creates four offspring. The fittest of these, compared with the parent individual, is selected to survive. Figure (3.6) depicts the mutation operation with Js equal to five: Pf(xi, yi) is the parent individual, Pf'(xi - 5, yi) the shift up, Pf'(xi, yi - 5) the shift left, Pf'(xi + 5, yi) the shift down, and Pf'(xi, yi + 5) the shift right; Pr(x, y) is the pixel to be inpainted.

Figure 3.6: The mutation operation (the parent chromosome and its shifts left, right, up and down)
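The shift mutation can be sketched as follows, with each candidate reduced to its source position and the survivor chosen among the four offspring and the parent as described above. This is a sketch; the fitness callback and bounds handling are assumptions of the interface.

```python
def mutate(parent, js, fitness):
    """Shift the parent's source position by jump size `js` in each of the
    four directions, then keep the fittest of the four offspring and the
    parent itself."""
    x, y = parent
    offspring = [(x - js, y), (x + js, y), (x, y - js), (x, y + js)]
    return max(offspring + [parent], key=fitness)
```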

Replacement Mechanism

The replacement mechanism, also known as the survivor selection mechanism, is called after the offspring of the selected parents have been created. As mentioned in section 3.3.4, in AEIA the population size is constant, so a choice has to be made as to which individuals will be allowed into the next generation. This decision is usually based on their fitness values, favoring those with higher quality. As opposed to parent selection, which is typically stochastic, replacement is often deterministic, as mentioned for the mutation operation. In addition to the surviving mutated offspring, the elitist strategy is used, so that the best individual of the current population automatically survives into the next generation. Elitism makes the EA retain the best individual at each generation; without it, the best individual can be lost if it is not selected to reproduce or if it is destroyed by recombination or mutation [40].

Termination Criterion

If one does not know what the optimum value is, or if one is just looking for an acceptable solution, it is often hard to say how many rounds the algorithm should be allowed to run. In theory, the population should converge to one solution and the algorithm should terminate exactly after this has happened. Unfortunately, it is not certain that convergence will happen within reasonable time (or even whether it will happen at all), and in that case the algorithm must be stopped at a certain point [37]. In AEIA, the latter criterion (a preset computational limit) is used as the termination criterion.
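Elitist survivor selection with a constant population size can be sketched as follows (a minimal sketch; the names are ours):

```python
def replace(population, offspring, fitness, size):
    """Elitist survivor selection: the best current individual always
    survives; the rest of the next generation is filled with the fittest
    of the remaining individuals and the offspring."""
    elite = max(population, key=fitness)
    rest = sorted((ind for ind in population + offspring if ind is not elite),
                  key=fitness, reverse=True)
    return [elite] + rest[:size - 1]
```

Guaranteeing the elite a slot is what prevents the best-so-far solution from being lost between generations.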

Chapter 4
Results

4.1 Introduction

In this chapter, the software implementing the proposed algorithm and its testing are presented. The software is intended for research purposes, not as a production system. Many features and components are included for experimental purposes and are not necessarily expected to prove useful. The proposed algorithm has been tested on different hypothetical cases; cases where the algorithm performs well and where it fails are shown. Comparisons to existing approaches are also presented.

4.2 Asexual Evolutionary Inpainting Software

The design of AEIA, discussed in the previous chapter, is implemented using Visual Basic version 6.0 for Windows, which made the picture-handling subroutines simple. The implemented software consists of two main parts. The first part is concerned with the selection of the region Ω. The second part is concerned with applying the asexual evolutionary inpainting algorithm to fill in the selected region Ω. The following subsections clarify these parts separately.

The Selection of the Region

Selection in digital image editing refers to the task of extracting (in some sense) an arbitrary object embedded in an image [41]. This is clearly a user-interface problem and thus mainly a matter for the human-computer interaction field; the following discussion therefore barely scratches the surface of the large body of work in that field. The most common, and seemingly easiest, approach to selecting an object is to use a fixed shape such as a rectangle or an ellipse. In this thesis, a fixed rectangle is used to select a region. Figure (4.1) shows an example of a region selection through a rectangle.

(a) (b)

Figure 4.1: Region selection via a rectangle shape (a) Original image. (b) Image with selected region

However, a fixed shape is not always the most appropriate way of performing selection. Another way to select a region is via freehand drawing, which is straightforward: the user simply draws a curve (not necessarily closed) and the enclosed area represents the selection (see Figure (4.2)). The curve is drawn by holding the mouse button while moving the mouse. The starting point p1 and the ending point p2 of the curve should be virtually the same (p2 should at least be in a close surrounding of p1, in order to automatically decide a good approximation of the missing piece of the curve).

p2 p1

(a) (b)

Figure 4.2: Region selection via freehand (a) the original image (b) the original image with selected region

This method may be inaccurate due to a slippery mouse or other external effects. Hence, for each selected pixel, its diagonal neighbor pixels are selected automatically, as Figure (4.3) clarifies.

i-1, j-1 i, j i+1, j+1

Selected pixel Automatically selected pixel neighbors

Figure 4.3: Selected pixel with the automatically selected pixel neighbors

In order to study the robustness of the algorithm proposed here, and not to be too dependent on the marking of the regions to be inpainted, the regions are marked in a very rough form with any available paintbrush software. Marking these regions in the examples reported in this thesis takes just a few seconds for a non-expert user.
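The two selection mechanisms described above, the fixed rectangle and the freehand stroke with automatic diagonal-neighbor selection, can be sketched as follows. This is a hypothetical Python sketch; the function names and the boolean-mask / coordinate-set representations are assumptions for illustration, not taken from the thesis:

```python
import numpy as np

def rectangle_mask(shape, top_left, bottom_right):
    """Mask for the fixed-rectangle selection: the region Omega is the
    axis-aligned rectangle between the two (row, col) corners, with
    bottom_right exclusive."""
    mask = np.zeros(shape, dtype=bool)
    (r0, c0), (r1, c1) = top_left, bottom_right
    mask[r0:r1, c0:c1] = True
    return mask

def thicken_freehand(selected, shape):
    """For every freehand-selected pixel (i, j), also select its diagonal
    neighbours (i-1, j-1) and (i+1, j+1), as in Figure (4.3), to
    compensate for a slippery mouse. `selected` is a set of (row, col)
    pairs; `shape` gives the image bounds."""
    rows, cols = shape
    out = set(selected)
    for i, j in selected:
        for di, dj in ((-1, -1), (1, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:  # clip at image borders
                out.add((ni, nj))
    return out
```

The bounds check in the second function handles strokes that touch the image border, where one of the two diagonal neighbors falls outside the image.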

Applying the Asexual Evolutionary Inpainting Algorithm

The proposed technique does not require any user intervention once the region to be inpainted has been selected. The algorithm is able to simultaneously fill regions surrounded by different backgrounds, without the user specifying what to put where. No assumptions are made on the topology of the region to be inpainted, or on the simplicity of the image. The AEIA is devised for inpainting structured regions (e.g., regions crossing through boundaries), as well as for reproducing large texture areas.

For the proposed EA, the parameter settings used are as follows. The population size is set to twenty-five, while three generations are enough to achieve good results. The mutation jump size is created randomly, ranging from one to three. The boundary width Bw is set to ten.

4.3 Test Cases

The cases targeted by the implemented algorithm are ubiquitous digital inpainting problems, and they are furthermore considered general; thereby, virtually all imaginable cases are taken into account. Since the quality of the results corresponds to the human perception of the appearance of the completed images, the results and comparisons were demonstrated visually, without giving any quantified measurements. The proposed algorithm was applied to a variety of images, ranging from purely synthetic images to full-color photographs that include complex textures. The testing is divided into two main areas. The first area is concerned with small arbitrary regions, while the second is concerned with large arbitrary regions. The following subsections give the results obtained from applying AEIA to the different cases. Where possible, side-by-side comparisons were made to previously proposed
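The parameter settings listed above, together with the fixed-generation-count termination criterion from Chapter 3, can be summarized in a short hypothetical sketch. The identifiers and the `evolve_one_generation` placeholder are illustrative assumptions, not names from the thesis:

```python
import random

# Parameter settings reported for AEIA in this chapter.
POPULATION_SIZE = 25
GENERATIONS = 3       # termination criterion: a fixed generation count
BOUNDARY_WIDTH = 10   # the boundary width Bw

def mutation_jump_size():
    """The mutation jump size is created randomly, ranging from one to three."""
    return random.randint(1, 3)

def run_aeia(init_population, evolve_one_generation):
    """Skeleton of a run: evolve the population for a fixed number of
    generations. `evolve_one_generation` stands in for the selection,
    mutation and replacement steps described in Chapter 3."""
    population = init_population(POPULATION_SIZE)
    for _ in range(GENERATIONS):
        population = evolve_one_generation(population)
    return population
```

Stopping after a fixed number of generations is the pragmatic termination criterion adopted because convergence within a reasonable time cannot be guaranteed.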

methods. The original pictures are acquired from the original papers or from Adolfsson's thesis [3], in which Adolfsson compares different inpainting algorithms.

Removing Small Arbitrary Regions

In this case, the Einstein image with small selected regions not larger than pixels is used as the test image [3]. The tested regions are interesting since they are made up of structure (straight and curved lines), discontinuous intensity levels and texture. Figure (4.4) shows the original image and the tested mask image.

Figure 4.4: Einstein before inpainting (a) the original image [3] (b) the mask image [3]

Figure (4.5) shows the results for the digital inpainting algorithm [5], the exemplar-based inpainting algorithm [18], the fast digital inpainting algorithm (Figure (4.5)(c) is inpainted using 250 iterations) [20] and AEIA. When comparing the results, the following conclusion may be drawn: each method has its advantages and shortcomings.

(a) (b) (c) (d)

Figure 4.5: Einstein after inpainting (a) the digital inpainting algorithm [3] (b) the exemplar-based inpainting algorithm [3] (c) the fast digital inpainting algorithm with 250 iterations [3] (d) the AEIA inpainting algorithm

Regarding the regions containing lines (e.g., the region in the upper right corner, the region in the middle left and the hair), the AEIA and the digital inpainting algorithm seemingly perform best. The left and right vertical lines are connected, while the fast digital inpainting algorithm has disconnected the right line and introduced blurring on the left line. The exemplar-based algorithm has successfully completed the left line and partially succeeded in connecting the right line.

Looking at the discontinuous regions, such as the left and right shoulders and the right part of the shirt collar, the digital inpainting algorithm and AEIA again seemingly perform best. The AEIA has successfully reconstructed the shape of the shirt collar, while the digital inpainting algorithm has (subjectively) performed well in recreating it. The fast digital inpainting algorithm has introduced a significantly blurred region, and the exemplar-based inpainting algorithm seemingly has chosen wrong substitute patches. As for the shoulder regions, the digital inpainting algorithm introduces a slight blurring on the right part, suggesting that it is not suitable for regions that grow beyond a certain size. The same conclusion can be drawn about the fast digital inpainting algorithm: when the size of the region increases, the blurring becomes more visible. The AEIA completely fails in reconstructing the left shoulder region, while it performs well for the right regions.

Looking at the textured regions, such as the jacket, AEIA is completely successful in reconstructing these regions, while the remaining algorithms perform reasonably well, with the exception of the digital inpainting algorithm and the fast digital inpainting algorithm, which both introduce significant blurring as the size of the region grows (as is clear by observing the images).

Removing Large Arbitrary Regions

The digital inpainting algorithm and the fast digital inpainting algorithm, as well as the Grossauer algorithm [9] and the Chong algorithm [42], fail to produce a visually plausible result when the region grows: a significant blurring, due to the diffusion process, is introduced, and it is not recommended to use these algorithms for regions growing larger than 9 pixels in any direction. Hence, they are not taken into consideration in this section. Figure (4.6) shows the success of the digital inpainting algorithm and AEIA for inpainting a small region, while Figures (4.7) and (4.8) illustrate comparisons of AEIA results with the Grossauer and Chong results, respectively. The comparisons in this section are presented only with algorithms that performed well for large regions: the exemplar-based inpainting algorithm [18] and the region completion algorithm [31]. More results, compared with different inpainting algorithms where possible, are illustrated in the next section.

The removal of arbitrary objects (e.g., persons, animals or road signs) is perhaps the hardest targeted case. It is easy to realize that a large region entails a typically non-trivial solution (i.e., in some sense a harder inpainting problem to solve). Consider a large object, e.g., the elephant in Figure (4.9)(a). The background structure that the elephant occludes is (most likely) the tree line, parts of the sand beach and parts of the water (i.e., a non-trivial background texture in some sense). Figure (4.9)(b) shows the mask, Figures (4.9)(c) and (4.9)(d) show the results after using the exemplar-based inpainting and region completion algorithms, respectively. Finally, Figure (4.9)(e) shows our result after performing AEIA.

(a) (b) (c) (d)

Figure 4.6: The flyer man image before and after inpainting (a) the original image [5] (b) the mask image (c) the result image from the digital inpainting algorithm [5] (d) the result image from AEIA

(a) (b) (c) (d)

Figure 4.7: The freedom image before and after inpainting (a) the original image [9] (b) the mask image (c) the result image from the Grossauer algorithm [9] (d) the result image from AEIA

(a) (b) (c) (d)

Figure 4.8: The church image before and after inpainting (a) the original image [42] (b) the mask image (c) the result image from the Chong algorithm [42] (d) the result image from AEIA

(a) (b) (c) (d) (e)

Figure 4.9: The elephant image before and after inpainting (a) the original image [3] (b) the mask image [3] (c) the result image from the exemplar-based algorithm [3] (d) the result image from the region completion algorithm [31] (e) the result image from AEIA

The results from applying the exemplar-based inpainting algorithm, the region completion algorithm and AEIA are, from a completely subjective point of view, highly visually plausible. However, there are a few minor details worth noticing in the results of the exemplar-based inpainting algorithm and AEIA. In the result of the exemplar-based inpainting algorithm (shown in Figure (4.9)(c)), a small fraction above the tree line in the middle of the image looks slightly wrong: the front trees have been mixed with the background trees. There are also some minor flaws down to the right on the sand beach, where the algorithm seemingly has used the same replacement patch several times. In the result of AEIA, there are some discontinuities of the tree line on the right of the image.

Another case that may be considered is the removal of a window on a house wall. Most likely, the house wall is uniformly colored, i.e., a simple background texture in some sense, and may thereby be considered an easy inpainting problem. Figure (4.10)(a) shows such a case. Figure (4.10)(b) illustrates the house image with several selected regions. Figures (4.10)(c) and (4.10)(d) show the results after performing the exemplar-based algorithm and AEIA, respectively. Both the exemplar-based algorithm and AEIA are completely successful in filling in the selected regions.

(a) (b) (c) (d)

Figure 4.10: The house image before and after inpainting (a) the original image [3] (b) the mask image [3] (c) the result image from the exemplar-based algorithm [3] (d) the result image from AEIA

Figures (4.11) and (4.12) show the removal of a surfer. In Figure (4.11)(b) the wave was left out of the mask, while in Figure (4.12)(b) it was included. It is worth noting that in the result from performing the exemplar-based algorithm in Figure (4.11)(c), a part of the wave is to be found in the middle of the image above the tree line, while Figure (4.12)(c) does

not show the same artifact (since the wave is included in the mask and thus cannot be chosen as a substitute patch). A slightly noticeable discontinuity is also introduced below the tree line in the middle of Figure (4.11)(c). The result from performing AEIA in Figure (4.11)(d) is, from a completely subjective point of view, highly visually plausible.

(a) (b) (c) (d)

Figure 4.11: The surfer image before and after inpainting (a) the original image [3] (b) the mask image without the wave [3] (c) the result image from the exemplar-based algorithm [3] (d) the result image from AEIA

(a) (b) (c) (d)

Figure 4.12: The surfer image before and after inpainting (a) the original image [3] (b) the mask image with the wave [3] (c) the result image from the exemplar-based algorithm [3] (d) the result image from AEIA

Figure (4.13)(a-d) shows the removal of a sign. Note that with the exemplar-based inpainting algorithm the clouds are re-created very well, while the mountain down in the middle has a more obvious touch of manipulation to it (i.e., the mountain is not successfully recreated). The reverse situation occurs when using our algorithm, AEIA. Regarding Figure (4.14), the skier has, from a subjective point of view,

been flawlessly removed. Both the exemplar-based inpainting algorithm and AEIA succeed in filling in the skier image.

(a) (b) (c) (d)

Figure 4.13: The sign image before and after inpainting (a) the original image [3] (b) the mask image [3] (c) the result image from the exemplar-based algorithm [3] (d) the result image from AEIA

(a) (b) (c) (d)

Figure 4.14: The skier image before and after inpainting (a) the original image [3] (b) the mask image [3] (c) the result image from the exemplar-based algorithm [3] (d) the result image from AEIA

Figures (4.15) to (4.18) illustrate the cases where the exemplar-based inpainting algorithm fails in filling in the selected region. In Figures (4.16)(c) and (4.16)(f), the exemplar-based inpainting algorithm fails even when the algorithm is run again on a newly selected region. Figures (4.15) and (4.16) show the complete success of AEIA in filling in the selected region with appropriate pixels. In Figure (4.17)(d), AEIA fails in filling in the mask region, but it almost succeeds when the algorithm is run on the newly selected region, as shown in Figure (4.17)(f). In Figure (4.18)(d), the resulting image shows that AEIA does not perfectly fill in the mask region.

(a) (b) (c) (d)

Figure 4.15: The mountain image before and after inpainting (a) the original image [3] (b) the mask image [3] (c) the result image from the exemplar-based algorithm [3] (d) the result image from AEIA

(a) (b) (c) (d) (e) (f) (g)

Figure 4.16: The cow image before and after inpainting (a) the original image [3] (b) the mask image [3] (c) the result image from the exemplar-based algorithm [3] (d) (f) repeated use of the exemplar-based algorithm [3] (g) the result image from AEIA

(a) (b) (c) (d) (e) (f)

Figure 4.17: The picnic image before and after inpainting (a) the original image [3] (b) the mask image [3] (c) the result image from the exemplar-based algorithm [3] (d) the result image from AEIA (e) (f) repeated use of the AEIA

(a) (b) (c) (d)

Figure 4.18: The flyer man image before and after inpainting (a) the original image [18] (b) the mask image [18] (c) the result image from the exemplar-based algorithm [18] (d) the result image from AEIA

Figures (4.19) and (4.20) illustrate more results from applying the exemplar-based inpainting algorithm and AEIA. The exemplar-based inpainting algorithm produced more visually plausible images than AEIA. In Figure (4.19)(d), AEIA fails to reconstruct the kerb texture correctly; when the AEIA algorithm is run again on a newly selected region, the kerb texture looks plausible if not examined in detail. Looking at Figure (4.20)(g), the AEIA has some flaws in the middle of the image, where the rock is mixed with the river. Figure (4.20)(h) shows that the horizon is not correctly reconstructed as a straight line. From our experiments, the selected region affects the performance of our algorithm: Figure (4.21)(a) is the same original image seen in Figure (4.20)(b), and different selected regions (see Figure (4.20)(d) and Figure (4.21)(b)) imply different results. Figure (4.21)(c) shows that the horizon is correctly reconstructed as a straight line.

(a) (b) (c) (d) (e) (f)

Figure 4.19: The girl image before and after inpainting (a) the original image [18] (b) the mask image (c) the result image from the exemplar-based algorithm [18] (d) the result image from AEIA (e) (f) repeated use of the AEIA

(a) (b) (c) (d) (e) (f) (g) (h)

Figure 4.20: The rock and sea images before and after inpainting (a) (b) the original images [18] (c) (d) the mask images (e) (f) the result images from the exemplar-based algorithm [18] (g) (h) the result images from AEIA

(a) (b) (c)

Figure 4.21: The sea image before and after inpainting (a) the original image (b) the mask image (c) the result image from AEIA

We now turn to some examples used for testing the region completion algorithm, in addition to the elephant example shown in Figure (4.9)(d). The first example (see Figure (4.22)(a)) is an image taken of a university campus, containing a building, a car and a map board. The building in the background has a large, noticeable projective deformation. The region completion algorithm and AEIA can remove the map board and restore the region of the building in the missing area reasonably, as shown in Figures (4.22)(c) and (4.22)(d), respectively. However, there are a few minor details worth noticing in the result of the region completion algorithm. A small fraction of grass in the middle of the image looks slightly wrong: the patch of grass has been combined with the wall of the building. There are also some minor flaws in the middle of the image, where the grass is replaced by a patch that contains kerb.

(a) (b) (c) (d)

Figure 4.22: The car image before and after inpainting (a) the original image [31] (b) the mask image (c) the result image from the region completion algorithm [31] (d) the result image from AEIA

The next test image is one in which a microphone needs to be removed and the texture in the removed area cannot be reconstructed using the digital inpainting algorithm. Figure (4.23)(a) shows the original image, while Figures (4.23)(c) and (4.23)(d) show the results of removing the microphone using the region completion algorithm and AEIA,

respectively. Both algorithms provide detailed and coherent results.

(a) (b) (c) (d)

Figure 4.23: The microphone image before and after inpainting (a) the original image [31] (b) the mask image (c) the result image from the region completion algorithm [31] (d) the result image from AEIA

4.4 More Results

We now turn to some examples from the inpainting literature. All the mask images used in this section are used only with our algorithm to fill the images; the results are then compared with the results obtained from the literature. Figures (4.24) to (4.28) show that our evolutionary algorithm works at least as well as the other tested algorithms, except in Figure (4.28)(d), where there are some flaws in the middle of the image.

(a) (b) (c) (d)

Figure 4.24: The arrow image before and after inpainting (a) the original image [29] (b) the mask image (c) the result image from the image replacement algorithm [29] (d) the result image from AEIA

(a) (b) (c) (d)

Figure 4.25: The donkey image before and after inpainting (a) the original image [29] (b) the mask image (c) the result image from the image replacement algorithm [29] (d) the result image from AEIA

(a) (b) (c) (d)

Figure 4.26: The ball boys image before and after inpainting (a) the original image [29] (b) the mask image (c) the result image from the image replacement algorithm [29] (d) the result image from AEIA

(a) (b) (c) (d) (e) (f) (g) (h)

Figure 4.27: The roof and robot images before and after inpainting (a) (b) the original images [11] (c) (d) the mask images (e) (f) the result images from the texture synthesis algorithm [11] (g) (h) the result images from AEIA

(a) (b) (c) (d) (e) (f)

Figure 4.28: The water image before and after inpainting (a) the original image [11] (b) the mask image (c) the result image from the texture synthesis algorithm [11] (d) the result image from AEIA (e) (f) repeated use of the AEIA

To test the capability of the AEIA algorithm on scratch removal, the toy image and the vinca flower image are chosen as test images. Figure (4.29) shows the good-quality results obtained from performing AEIA.

(a) (b) (c) (d) (e) (f)

Figure 4.29: The toy and flower images before and after inpainting (a) (b) the original images (c) (d) the scratch images (e) (f) the result images from AEIA


More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

Data Term. Michael Bleyer LVA Stereo Vision

Data Term. Michael Bleyer LVA Stereo Vision Data Term Michael Bleyer LVA Stereo Vision What happened last time? We have looked at our energy function: E ( D) = m( p, dp) + p I < p, q > N s( p, q) We have learned about an optimization algorithm that

More information

Texture. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors

Texture. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors Texture The most fundamental question is: How can we measure texture, i.e., how can we quantitatively distinguish between different textures? Of course it is not enough to look at the intensity of individual

More information

THE preceding chapters were all devoted to the analysis of images and signals which

THE preceding chapters were all devoted to the analysis of images and signals which Chapter 5 Segmentation of Color, Texture, and Orientation Images THE preceding chapters were all devoted to the analysis of images and signals which take values in IR. It is often necessary, however, to

More information

03 Vector Graphics. Multimedia Systems. 2D and 3D Graphics, Transformations

03 Vector Graphics. Multimedia Systems. 2D and 3D Graphics, Transformations Multimedia Systems 03 Vector Graphics 2D and 3D Graphics, Transformations Imran Ihsan Assistant Professor, Department of Computer Science Air University, Islamabad, Pakistan www.imranihsan.com Lectures

More information

5. Feature Extraction from Images

5. Feature Extraction from Images 5. Feature Extraction from Images Aim of this Chapter: Learn the Basic Feature Extraction Methods for Images Main features: Color Texture Edges Wie funktioniert ein Mustererkennungssystem Test Data x i

More information

Median filter. Non-linear filtering example. Degraded image. Radius 1 median filter. Today

Median filter. Non-linear filtering example. Degraded image. Radius 1 median filter. Today Today Non-linear filtering example Median filter Replace each pixel by the median over N pixels (5 pixels, for these examples). Generalizes to rank order filters. In: In: 5-pixel neighborhood Out: Out:

More information

Non-linear filtering example

Non-linear filtering example Today Non-linear filtering example Median filter Replace each pixel by the median over N pixels (5 pixels, for these examples). Generalizes to rank order filters. In: In: 5-pixel neighborhood Out: Out:

More information

Color Characterization and Calibration of an External Display

Color Characterization and Calibration of an External Display Color Characterization and Calibration of an External Display Andrew Crocker, Austin Martin, Jon Sandness Department of Math, Statistics, and Computer Science St. Olaf College 1500 St. Olaf Avenue, Northfield,

More information

Tiled Textures What if Miro Had Painted a Sphere

Tiled Textures What if Miro Had Painted a Sphere Tiled Textures What if Miro Had Painted a Sphere ERGUN AKLEMAN, AVNEET KAUR and LORI GREEN Visualization Sciences Program, Department of Architecture Texas A&M University December 26, 2005 Abstract We

More information

Segmentation of Images

Segmentation of Images Segmentation of Images SEGMENTATION If an image has been preprocessed appropriately to remove noise and artifacts, segmentation is often the key step in interpreting the image. Image segmentation is a

More information

Level lines based disocclusion

Level lines based disocclusion Level lines based disocclusion Simon Masnou Jean-Michel Morel CEREMADE CMLA Université Paris-IX Dauphine Ecole Normale Supérieure de Cachan 75775 Paris Cedex 16, France 94235 Cachan Cedex, France Abstract

More information

Introduction to Genetic Algorithms

Introduction to Genetic Algorithms Advanced Topics in Image Analysis and Machine Learning Introduction to Genetic Algorithms Week 3 Faculty of Information Science and Engineering Ritsumeikan University Today s class outline Genetic Algorithms

More information

An Improved Texture Synthesis Algorithm Using Morphological Processing with Image Analogy

An Improved Texture Synthesis Algorithm Using Morphological Processing with Image Analogy An Improved Texture Synthesis Algorithm Using Morphological Processing with Image Analogy Jiang Ni Henry Schneiderman CMU-RI-TR-04-52 October 2004 Robotics Institute Carnegie Mellon University Pittsburgh,

More information

A Genetic Algorithm for Graph Matching using Graph Node Characteristics 1 2

A Genetic Algorithm for Graph Matching using Graph Node Characteristics 1 2 Chapter 5 A Genetic Algorithm for Graph Matching using Graph Node Characteristics 1 2 Graph Matching has attracted the exploration of applying new computing paradigms because of the large number of applications

More information

Experiments with Edge Detection using One-dimensional Surface Fitting

Experiments with Edge Detection using One-dimensional Surface Fitting Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,

More information

Final Review CMSC 733 Fall 2014

Final Review CMSC 733 Fall 2014 Final Review CMSC 733 Fall 2014 We have covered a lot of material in this course. One way to organize this material is around a set of key equations and algorithms. You should be familiar with all of these,

More information

SYMMETRY-BASED COMPLETION

SYMMETRY-BASED COMPLETION SYMMETRY-BASED COMPLETION Thiago Pereira 1 Renato Paes Leme 2 Luiz Velho 1 Thomas Lewiner 3 1 Visgraf, IMPA 2 Computer Science, Cornell 3 Matmidia, PUC Rio Keywords: Abstract: Image completion, Inpainting,

More information

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009 Learning and Inferring Depth from Monocular Images Jiyan Pan April 1, 2009 Traditional ways of inferring depth Binocular disparity Structure from motion Defocus Given a single monocular image, how to infer

More information

ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL

ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL OF THE UNIVERSITY OF MINNESOTA BY BHARAT SIGINAM IN

More information

Active contour: a parallel genetic algorithm approach

Active contour: a parallel genetic algorithm approach id-1 Active contour: a parallel genetic algorithm approach Florence Kussener 1 1 MathWorks, 2 rue de Paris 92196 Meudon Cedex, France Florence.Kussener@mathworks.fr Abstract This paper presents an algorithm

More information

Region-based Segmentation

Region-based Segmentation Region-based Segmentation Image Segmentation Group similar components (such as, pixels in an image, image frames in a video) to obtain a compact representation. Applications: Finding tumors, veins, etc.

More information

(Refer Slide Time: 00:02:00)

(Refer Slide Time: 00:02:00) Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 18 Polyfill - Scan Conversion of a Polygon Today we will discuss the concepts

More information

Combining Abstract Images using Texture Transfer

Combining Abstract Images using Texture Transfer BRIDGES Mathematical Connections in Art, Music, and Science Combining Abstract Images using Texture Transfer Gary R. Greenfield Department of Mathematics & Computer Science University of Richmond Richmond,

More information

Feature extraction. Bi-Histogram Binarization Entropy. What is texture Texture primitives. Filter banks 2D Fourier Transform Wavlet maxima points

Feature extraction. Bi-Histogram Binarization Entropy. What is texture Texture primitives. Filter banks 2D Fourier Transform Wavlet maxima points Feature extraction Bi-Histogram Binarization Entropy What is texture Texture primitives Filter banks 2D Fourier Transform Wavlet maxima points Edge detection Image gradient Mask operators Feature space

More information

Sobel Edge Detection Algorithm

Sobel Edge Detection Algorithm Sobel Edge Detection Algorithm Samta Gupta 1, Susmita Ghosh Mazumdar 2 1 M. Tech Student, Department of Electronics & Telecom, RCET, CSVTU Bhilai, India 2 Reader, Department of Electronics & Telecom, RCET,

More information

March 19, Heuristics for Optimization. Outline. Problem formulation. Genetic algorithms

March 19, Heuristics for Optimization. Outline. Problem formulation. Genetic algorithms Olga Galinina olga.galinina@tut.fi ELT-53656 Network Analysis and Dimensioning II Department of Electronics and Communications Engineering Tampere University of Technology, Tampere, Finland March 19, 2014

More information

The Genetic Algorithm for finding the maxima of single-variable functions

The Genetic Algorithm for finding the maxima of single-variable functions Research Inventy: International Journal Of Engineering And Science Vol.4, Issue 3(March 2014), PP 46-54 Issn (e): 2278-4721, Issn (p):2319-6483, www.researchinventy.com The Genetic Algorithm for finding

More information

CHAPTER 6 REAL-VALUED GENETIC ALGORITHMS

CHAPTER 6 REAL-VALUED GENETIC ALGORITHMS CHAPTER 6 REAL-VALUED GENETIC ALGORITHMS 6.1 Introduction Gradient-based algorithms have some weaknesses relative to engineering optimization. Specifically, it is difficult to use gradient-based algorithms

More information

Vectorization Using Stochastic Local Search

Vectorization Using Stochastic Local Search Vectorization Using Stochastic Local Search Byron Knoll CPSC303, University of British Columbia March 29, 2009 Abstract: Stochastic local search can be used for the process of vectorization. In this project,

More information

Planting the Seeds Exploring Cubic Functions

Planting the Seeds Exploring Cubic Functions 295 Planting the Seeds Exploring Cubic Functions 4.1 LEARNING GOALS In this lesson, you will: Represent cubic functions using words, tables, equations, and graphs. Interpret the key characteristics of

More information

CS443: Digital Imaging and Multimedia Binary Image Analysis. Spring 2008 Ahmed Elgammal Dept. of Computer Science Rutgers University

CS443: Digital Imaging and Multimedia Binary Image Analysis. Spring 2008 Ahmed Elgammal Dept. of Computer Science Rutgers University CS443: Digital Imaging and Multimedia Binary Image Analysis Spring 2008 Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines A Simple Machine Vision System Image segmentation by thresholding

More information

Automatic Tessellation of Images to Produce Seamless Texture Tiles. UC Berkeley CS : Final Project Alex Liu 14 December 2015

Automatic Tessellation of Images to Produce Seamless Texture Tiles. UC Berkeley CS : Final Project Alex Liu 14 December 2015 Automatic Tessellation of Images to Produce Seamless Texture Tiles UC Berkeley CS-194-26: Final Project Alex Liu 14 December 2015 Liu 1 Introduction Textures are one of the most important building blocks

More information

Chapter 14 Global Search Algorithms

Chapter 14 Global Search Algorithms Chapter 14 Global Search Algorithms An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Introduction We discuss various search methods that attempts to search throughout the entire feasible set.

More information

A NON-HIERARCHICAL PROCEDURE FOR RE-SYNTHESIS OF COMPLEX TEXTURES

A NON-HIERARCHICAL PROCEDURE FOR RE-SYNTHESIS OF COMPLEX TEXTURES A NON-HIERARCHICAL PROCEDURE FOR RE-SYNTHESIS OF COMPLEX TEXTURES Paul Harrison School of Computer Science and Software Engineering Monash University Wellington Rd. Clayton, 3800 Melbourne, Australia pfh@yoyo.cc.monash.edu.au

More information

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong)

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) References: [1] http://homepages.inf.ed.ac.uk/rbf/hipr2/index.htm [2] http://www.cs.wisc.edu/~dyer/cs540/notes/vision.html

More information

CHAPTER 4 GENETIC ALGORITHM

CHAPTER 4 GENETIC ALGORITHM 69 CHAPTER 4 GENETIC ALGORITHM 4.1 INTRODUCTION Genetic Algorithms (GAs) were first proposed by John Holland (Holland 1975) whose ideas were applied and expanded on by Goldberg (Goldberg 1989). GAs is

More information

Shape fitting and non convex data analysis

Shape fitting and non convex data analysis Shape fitting and non convex data analysis Petra Surynková, Zbyněk Šír Faculty of Mathematics and Physics, Charles University in Prague Sokolovská 83, 186 7 Praha 8, Czech Republic email: petra.surynkova@mff.cuni.cz,

More information

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images 1 Introduction - Steve Chuang and Eric Shan - Determining object orientation in images is a well-established topic

More information

Image Inpainting By Optimized Exemplar Region Filling Algorithm

Image Inpainting By Optimized Exemplar Region Filling Algorithm Image Inpainting By Optimized Exemplar Region Filling Algorithm Shivali Tyagi, Sachin Singh Abstract This paper discusses removing objects from digital images and fills the hole that is left behind. Here,

More information

morphology on binary images

morphology on binary images morphology on binary images Ole-Johan Skrede 10.05.2017 INF2310 - Digital Image Processing Department of Informatics The Faculty of Mathematics and Natural Sciences University of Oslo After original slides

More information

Why study Computer Vision?

Why study Computer Vision? Why study Computer Vision? Images and movies are everywhere Fast-growing collection of useful applications building representations of the 3D world from pictures automated surveillance (who s doing what)

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

Texture Synthesis by Non-parametric Sampling

Texture Synthesis by Non-parametric Sampling Texture Synthesis by Non-parametric Sampling Alexei A. Efros and Thomas K. Leung Computer Science Division University of California, Berkeley Berkeley, CA 94720-1776, U.S.A. fefros,leungtg@cs.berkeley.edu

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 14 130307 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Stereo Dense Motion Estimation Translational

More information

Scaled representations

Scaled representations Scaled representations Big bars (resp. spots, hands, etc.) and little bars are both interesting Stripes and hairs, say Inefficient to detect big bars with big filters And there is superfluous detail in

More information

Image Processing Via Pixel Permutations

Image Processing Via Pixel Permutations Image Processing Via Pixel Permutations Michael Elad The Computer Science Department The Technion Israel Institute of technology Haifa 32000, Israel Joint work with Idan Ram Israel Cohen The Electrical

More information

Parametric Texture Model based on Joint Statistics

Parametric Texture Model based on Joint Statistics Parametric Texture Model based on Joint Statistics Gowtham Bellala, Kumar Sricharan, Jayanth Srinivasa Department of Electrical Engineering, University of Michigan, Ann Arbor 1. INTRODUCTION Texture images

More information

Introduction to Optimization

Introduction to Optimization Introduction to Optimization Approximation Algorithms and Heuristics November 6, 2015 École Centrale Paris, Châtenay-Malabry, France Dimo Brockhoff INRIA Lille Nord Europe 2 Exercise: The Knapsack Problem

More information

Statistical image models

Statistical image models Chapter 4 Statistical image models 4. Introduction 4.. Visual worlds Figure 4. shows images that belong to different visual worlds. The first world (fig. 4..a) is the world of white noise. It is the world

More information

Topic 4 Image Segmentation

Topic 4 Image Segmentation Topic 4 Image Segmentation What is Segmentation? Why? Segmentation important contributing factor to the success of an automated image analysis process What is Image Analysis: Processing images to derive

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

Lecture 7: Most Common Edge Detectors

Lecture 7: Most Common Edge Detectors #1 Lecture 7: Most Common Edge Detectors Saad Bedros sbedros@umn.edu Edge Detection Goal: Identify sudden changes (discontinuities) in an image Intuitively, most semantic and shape information from the

More information

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University. 3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction

More information

Supervised texture detection in images

Supervised texture detection in images Supervised texture detection in images Branislav Mičušík and Allan Hanbury Pattern Recognition and Image Processing Group, Institute of Computer Aided Automation, Vienna University of Technology Favoritenstraße

More information

Classification of Optimization Problems and the Place of Calculus of Variations in it

Classification of Optimization Problems and the Place of Calculus of Variations in it Lecture 1 Classification of Optimization Problems and the Place of Calculus of Variations in it ME256 Indian Institute of Science G. K. Ananthasuresh Professor, Mechanical Engineering, Indian Institute

More information

Correcting User Guided Image Segmentation

Correcting User Guided Image Segmentation Correcting User Guided Image Segmentation Garrett Bernstein (gsb29) Karen Ho (ksh33) Advanced Machine Learning: CS 6780 Abstract We tackle the problem of segmenting an image into planes given user input.

More information

Artificial Intelligence Application (Genetic Algorithm)

Artificial Intelligence Application (Genetic Algorithm) Babylon University College of Information Technology Software Department Artificial Intelligence Application (Genetic Algorithm) By Dr. Asaad Sabah Hadi 2014-2015 EVOLUTIONARY ALGORITHM The main idea about

More information

Filtering and Enhancing Images

Filtering and Enhancing Images KECE471 Computer Vision Filtering and Enhancing Images Chang-Su Kim Chapter 5, Computer Vision by Shapiro and Stockman Note: Some figures and contents in the lecture notes of Dr. Stockman are used partly.

More information

Computer Vision I - Basics of Image Processing Part 1

Computer Vision I - Basics of Image Processing Part 1 Computer Vision I - Basics of Image Processing Part 1 Carsten Rother 28/10/2014 Computer Vision I: Basics of Image Processing Link to lectures Computer Vision I: Basics of Image Processing 28/10/2014 2

More information

MR IMAGE SEGMENTATION

MR IMAGE SEGMENTATION MR IMAGE SEGMENTATION Prepared by : Monil Shah What is Segmentation? Partitioning a region or regions of interest in images such that each region corresponds to one or more anatomic structures Classification

More information

Modeling Plant Succession with Markov Matrices

Modeling Plant Succession with Markov Matrices Modeling Plant Succession with Markov Matrices 1 Modeling Plant Succession with Markov Matrices Concluding Paper Undergraduate Biology and Math Training Program New Jersey Institute of Technology Catherine

More information

Using Subspace Constraints to Improve Feature Tracking Presented by Bryan Poling. Based on work by Bryan Poling, Gilad Lerman, and Arthur Szlam

Using Subspace Constraints to Improve Feature Tracking Presented by Bryan Poling. Based on work by Bryan Poling, Gilad Lerman, and Arthur Szlam Presented by Based on work by, Gilad Lerman, and Arthur Szlam What is Tracking? Broad Definition Tracking, or Object tracking, is a general term for following some thing through multiple frames of a video

More information