Shape from Texture: Surface Recovery Through Texture-Element Extraction


Vincent Levesque¹

Abstract—Various visual cues are used by humans to recover 3D information from 2D images. One such cue is the distortion of textures due to the projection of the 3D world onto a 2D image plane. Blostein and Ahuja developed a technique that estimates the tilt and slant of a textured plane by analyzing the distortions of texture elements. The technique first locates candidate texture elements with a multiscale region detector. A model of the texture deformations is then used to estimate the likelihood of various surface orientations given the area and position of the candidate texture elements. The search determines the most plausible tilt and slant for the surface. This paper revisits the theory proposed by Blostein and Ahuja, and presents the results of experimentation with a simplified version of their technique.

Index Terms—Shape from texture, shape recovery, surface orientation, texture-element extraction, multiscale, surface estimation.

¹ Vincent Levesque is an M.Eng. student at the Center for Intelligent Machines at McGill University. vleves@cim.mcgill.ca.

I. INTRODUCTION

The image formation process projects an image of the 3D world onto a 2D surface. This process is impossible to invert perfectly since information is invariably discarded by the projection. Our own experience and psychophysical experiments show, however, that the human visual perceptual apparatus is capable of recovering detailed information about the 3D world from 2D images using various visual cues. It is believed that the human visual system deduces information about the 3D world from cues such as shading, contour, texture, stereo and motion.

In this paper we are interested in the recovery of the tilt and slant of a textured surface through the analysis of distortions in the shape and size of texture elements. The slant is an angle between 0° and 90° indicating the angle between the surface and the image plane [1]. The tilt is an angle between 0° and 360° corresponding to the direction in which the surface normal projects in the image [1]. Informally, the tilt indicates the direction in which the distance to the plane increases fastest and the slant indicates how fast the distance increases. Figure 1 illustrates the distortion of texture elements on planes. The four pictures show elliptical dots of various sizes and shapes in a 2D image. Even though we know this to be true, we tend to perceive a surface covered with identical dots at a particular orientation with respect to the camera. Super and Bovik [6] point out that this interpretation is reinforced by visual cues such as stereo vision and motion. The human perceptual system is thus capable of recognizing the distortion that results from the projection of a textured surface onto a 2D image.

Figure 1: Synthetic Textured Planes. A: slant 5, tilt 70; B: slant 50, tilt 90; C: slant 60, tilt 5; D: slant 60, tilt 90. (Source: [1])

Blostein and Ahuja [1] proposed an algorithm that recovers the tilt and slant of a textured plane in a two-step process. First, candidate texture elements (texels) are extracted from the image using a multiscale region detector originally presented in [2]. The region detector uses a set of filters of different sizes to locate circular regions of relatively uniform graylevel. Overlapping regions are combined to form candidate texels. The second step is a search for the most plausible surface orientation given the set of candidate texels and a model of the image formation process. A fitness function is used to evaluate the likelihood of various plane orientations. The plane with the highest fit rating is selected as the most plausible plane. In this paper the theory proposed by Blostein and Ahuja in [1] and [2] is revisited and results of experimentation with a simplified version of their algorithm are presented.
Section II gives a brief overview of shape from texture research over the past decades. Section III discusses in detail the theory proposed by Blostein and Ahuja. Section IV provides some details about the implementation used for the experimentation. Section V presents the experimental results obtained with the method.

II. BACKGROUND

Artists have known for centuries the importance of texture as a visual cue for the recovery of shape information. The first psychological theories of shape from texture are generally attributed to Gibson, who insisted on humans' reliance on gradients of features. Many researchers have since proposed shape from texture techniques using a wide range of assumptions and approaches.

Bajcsy and Lieberman [3] proposed a simple and elegant method to determine the orientation of a receding plane. Their method analyzes texture characteristics using the Fourier transform of different windows. The wavelength observed inside a window is used as a measure of the texel size. A simple projection model is used to infer the relative distance between objects on the receding plane.

Witkin [4] argues that texel localization is not justified and can be avoided. He proposes instead a measure of texture uniformity using the distribution and orientation of edges. Statistical methods are then used to evaluate the likelihood of various plane orientations. The search for the most likely plane is very similar to Blostein and Ahuja's technique. As we will see later, however, Blostein and Ahuja argue that Witkin's technique is bound to fail because of the noise produced by subtexture edges.

Aloimonos [5] presented a novel projection model and various techniques to recover the orientation of a textured plane. It is shown that the plane orientation can be recovered if texels can be found, but also under the less restrictive assumption that texel boundaries can be found.

Super and Bovik [6] propose an approach that doesn't require texel localization. Gabor filters are used to analyze the image in the frequency domain and to determine the most plausible shape.
This technique recovers the shape of a curved surface using closed form solutions and thus requires no surface fitting.

These papers illustrate the variety of approaches to the problem of shape from texture. The various approaches differ mostly in their degree of reliance on texel extraction, their scope and their assumptions.

III. BLOSTEIN AND AHUJA

A. Candidate Texel Extraction

A texture typically consists of a repetition of similar patterns referred to as texture elements. Unsupervised identification and localization of texture elements is a complex task that requires assumptions to be made. The limitations of available texture element extractors have prompted many researchers to explore shape from texture methods that do not necessitate texel extraction. An example of this is Witkin's [4] method based on the distribution of edges.

Blostein and Ahuja argue that texel extraction is an essential part of any shape from texture algorithm. A texture is often formed of a hierarchy of texture elements and subtextures. Subtextures are clearly visible on parts of the plane close to the viewer but are blurred away from the viewer due to the limited resolution of the imaging process. More details are thus visible close to the viewer. Distinguishing texture elements from subtextures is thus impossible for such methods: techniques that rely on the localization of texel boundaries cannot distinguish between texel and subtexture boundaries, and techniques that rely on edges are bound to suffer from the detection of edges within a texel. Blostein and Ahuja thus believe that texel extraction must be an integral part of a robust shape from texture algorithm.

They propose the use of a multiscale region detector developed originally in [2]. The method initially locates circular regions of relatively uniform graylevel at different scales and then groups the disks to form texels. Blostein and Ahuja define an ideal disk as an image for which the graylevel is C within a circular region of diameter D and zero elsewhere. A closed form solution can be found at the center of the disk for the convolution of the image with a ∇²G filter (Laplacian-of-Gaussian) and with a (∂/∂σ)∇²G filter of the same dimension. This result can be used to obtain an expression for the diameter D and contrast C of the disk, as shown in equations 1 and 2.

(1) D = [4σ³ · ((∂/∂σ)∇²G * I) / (∇²G * I) + 8σ²]^(1/2)

(2) C = −(2σ²/(πD²)) · e^(D²/(8σ²)) · (∇²G * I)

The region detector requires the convolution of the image with six pairs of ∇²G and (∂/∂σ)∇²G filters. The filters have standard deviations √2, 2√2, 3√2, 4√2, 5√2 and 6√2. The corresponding diameters are 4, 8, 12, 16, 20 and 24 pixels. The filters are defined by equations 3 and 4. Convolutions were computed in the frequency domain by Blostein and Ahuja but were computed in the spatial domain in the current implementation.

(3) ∇²G(r) = (r²/σ⁴ − 2/σ²) e^(−r²/(2σ²))

(4) (∂/∂σ)∇²G(r) = (r⁴/σ⁷ − 6r²/σ⁵ + 4/σ³) e^(−r²/(2σ²))

The extrema of the response to the ∇²G filters are computed by examining the 8 surrounding pixels. Maxima are considered as candidates for positive contrast disks and minima for negative contrast disks. Equation 1 is used to compute the diameter of the disks. Only disks with a diameter differing from the corresponding filter diameter by at most 2 pixels are accepted. Others are rejected in the hope that a better disk will be found at a different scale or, in other words, with a different filter. The contrast of accepted disks is computed with equation 2. Overlapping disks are then joined together to form a candidate texel.

There is no way to know a priori whether two overlapping disks are part of the same texel or part of the boundary between two adjacent texels. Blostein and Ahuja use a notion of concavity to determine if a texel should be split into multiple texels. If the concavity of two circles along the boundary of a texel falls within a given range, the texel is split in two. Both the single unified texel and the resulting two texels are considered as candidates. A mutual exclusion mechanism is used to prevent the two possibilities from contributing simultaneously to the surface fitting process. As explained in the next section, texel splitting was not fully implemented. The result of this step is a set of candidate texture elements, each with an associated position, area and contrast. The contrast is computed as the weighted average of the contrasts of the disks composing the region.

B. Plane Fitting

The texel extraction step produces a set of candidate texels. The plane fitting step searches through a large number of possible spatial configurations to determine the plane orientation that agrees with as many candidate texels as possible. Blostein and Ahuja derive equations 5 and 6, relating the area of a texel at a given position in the image plane to the tilt and slant of the plane.
In these equations, X and Y indicate the position of the texel in the range from −1 to 1. T indicates the tilt of the plane and S the slant. The r/f ratio is a measure of the field of view of the camera lens (r is the physical width of the film and f is the focal length). A_c denotes the area of the texel at the center of the image and A_i the expected area of the texel under consideration.

(5) θ = tan⁻¹((X cos T + Y sin T)(r/f))

(6) A_i = A_c (1 − tan θ tan S)³

The actual area of the texel and its position are known. The r/f ratio is assumed to be known. The area of the centered texel (A_c) is not known. The plane fitting is thus done by considering a large number of values for the tilt (T: 0°, 20°, …, 280°, 300°), the slant (S: 0°, 5°, …, 75°, 80°) and the area (A_c: 10, 20, 40, 80, 160, 320, 640). The fit rating, as computed with equations 7 and 8, is used to assess the agreement of the spatial configuration with the set of candidate texels. The spatial configuration with the best fit rating is taken to be the most plausible plane orientation.

(7) fit rating = Σ_{all regions} (region area) · (region contrast) · e^(−21 (1 − region fit))

(8) region fit = min(expected area, actual area) / max(expected area, actual area)

IV. IMPLEMENTATION

The algorithm presented by Blostein and Ahuja was implemented in C++ on a personal computer with 128MB of memory and a 450MHz P3 processor. All test images were 512x512 grayscale images (256 graylevels) taken from the electronic copy of [1] or from the VisTex texture database [7]. The multiscale region detector was implemented in the spatial domain with filters of size corresponding to the filter diameter. Execution time is on the order of minutes. The region detector detects disks of both positive and negative contrast. For simplicity, only positive contrast disks were taken into account in this implementation. Texels were formed by grouping overlapping disks together.
Blostein and Ahuja propose a vaguely defined measure of concavity as a criterion for the splitting of texels. The exact way in which this concavity is measured could not be found. An alternate definition involving the tangents to adjacent circles at the point of intersection was devised. Integrating it into the algorithm requires the determination of which disks lie on the exterior boundary of the texel. Blostein and Ahuja also propose to keep both the initial unified texel and the resulting two texels as mutually exclusive candidates. The splitting of texels is an essential part of the algorithm. It does, however, complicate the implementation significantly and could not be implemented for the experimentation shown here. Neglecting to split texels has no impact on images in which the texels are well spaced. It does, however, cause the method to break down on real images in which texels often overlap.

It was observed that low contrast disks often join different true texels of high contrast into a very large texel. A simple heuristic was thus developed to split texels based on this observation. Whenever two disks overlap, a disk with a contrast lower than 30% of that of the other disk is deleted. Once regions are formed, any disk with a contrast lower than 30% of the maximum in the texel is also deleted. This tends to break down large texels without deleting true texels of low contrast.

The plane fitting method was implemented following Blostein and Ahuja's guidelines. The plane fitting was limited to a single search and thus provides a precision of 5 degrees for the slant and 20 degrees for the tilt. The results can be improved by recursively searching in limited ranges close to the current best plane estimate. This was not implemented. Blostein and Ahuja assume that the r/f ratio is known for the test images. In the following experiments the exact r/f ratio is not known. The best plane was thus computed for r/f ratios of 0.1, 0.5, 1, 1.5 and 2.

V. RESULTS

A. Simple dot patterns

The algorithm was initially tested with the simple artificial dot patterns shown in Figure 1. These pictures were taken from the electronic copy of [1]. Figure 2 shows the response of the different filters to the image shown in Figure 1, part D. Each filter captures only part of the original dot pattern. Figure 3 shows the result of combining all the disks together. The resulting image is almost identical to the original image.

Figure 2: Disks of positive contrast (input: fig. 1D). From left to right, top to bottom: σ = √2, 2√2, 3√2, 4√2, 5√2, 6√2.

Figure 3: Detected Regions (original: see fig. 1D)

The texels are formed by joining overlapping disks. Since the dots are well spaced, the true texels do not overlap and the candidate texels correspond to single dots. The plane fitting then determines the most plausible plane orientation. Table 1 shows the result of the plane fitting for the plane shown in Figure 1D. The rating is highest for an r/f ratio of 0.5 and the corresponding tilt and slant are perfect at 90° and 60° respectively. Similarly, the algorithm produces the correct tilt and slant for the three other planes of Figure 1 with an r/f ratio of 0.5. This strongly suggests that the true r/f ratio is close to 0.5 and that the algorithm functions correctly for simple synthetic textures.

Table 1: Result of plane fitting (true plane: T=90, S=60). Columns: r/f, A_c, T, S, rating.

B. Tiles

Figure 4: Tile Images (left: Tile.0000.pgm, right: Tile.0001.pgm)

Figure 4 shows two real pictures of tiled floors. These textures are more challenging than the synthetic dot patterns of Figure 1 but still easier to analyze than natural textures. Notice that the picture on the left has large high contrast tiles whereas the one on the right has low contrast tiles divided in columns. The exact plane orientations are not known but we can assume that the picture on the left is close to the plane shown in Figure 1D (tilt=90°, slant=60°) and the picture on the right to Figure 1B (tilt=90°, slant=50°).

1) Tile.0000.pgm

Figure 5 shows the response of the six filters to the image on the left of Figure 4. The first image of Figure 6 shows the grouping of all disks together. The boundary of high contrast texels is well represented but the middle of the texels is not completely filled and some low contrast texels are found between the high contrast texels. The second image shows the candidate texels. It isn't clear from the pictures, but the background of the first picture is almost entirely covered with low contrast disks. As a result most of the disks are connected to each other and a very large texel occupying the entire image is detected. This problem can be partially solved with the thresholding technique described in the previous section. The third picture of Figure 6 shows the candidate texels after thresholding. Most true texels are now disjoint and some of the true low contrast texels in the back of the plane were saved. The fourth picture shows the texels that contribute to the plane fitness rating for an r/f ratio of 1. The brightness of the texels is proportional to their contribution to the fit rating of the plane.

Figure 5: Disks of Positive Contrast (original: see fig. 4, left). From left to right, top to bottom: σ = √2, 2√2, 3√2, 4√2, 5√2, 6√2.

Figure 6: Result of Algorithm (Tile.0000.pgm). Top left: All disks (graylevel proportional to disk contrast). Top right: Texels (graylevel proportional to texel contrast). Bottom left: Texels after thresholding (graylevel proportional to texel contrast). Bottom right: Contributing texels for r/f=1 (graylevel proportional to fitness).

The search was extended to allow areas of up to 2560 pixels since the tiles are very large. Table 2 shows the result of the algorithm. The first three results, for ratios of 0.1, 0.5 and 1.0, are reasonable. The last two ratios produce inconsistent results. Sadly, the highest fit rating doesn't correspond to the best plane orientation.

Table 2: Result of plane fitting (Tile.0000.pgm). Columns: r/f, A_c, T, S, rating.

2) Tile.0001.pgm

Figure 7 shows the result of the algorithm on the second image of Figure 4. The image on the left shows the candidate texels. Notice that the texels include diamond-shaped high contrast tiles and triangle-shaped low contrast tiles. In this case thresholding is not necessary to split the texels. The picture on the right shows the subset of the texels that contributed to the fit rating of the best plane for an r/f ratio of 1.0. Table 3 shows the results for different r/f ratios. Again the plane orientation varies considerably depending on the r/f ratio. The best plane, corresponding to an r/f ratio of 0.1, is somewhat reasonable with a tilt of 70° and a slant of 80°. Other results are not very good.

Figure 7: Result of Algorithm (Tile.0001.pgm). Left: Candidate texels. Right: Contributing texels (r/f = 1.0).

Table 3: Result of Algorithm (Tile.0001.pgm). Columns: r/f, A_c, T, S, rating.

C. Natural Textures

Figure 8, Figure 9 and Figure 10 show three examples of natural textures: two pictures of the sea and a picture of a grass field. The images on the right are the detected disks for each of the three textures. Numerous disks are detected in each case

and the density of disks is highest for parts of the plane farthest from the viewer. As a result, larger texels form in parts of the plane away from the viewer, where texels should in fact be smaller. The algorithm thus tends to invert the tilt of the plane. Even without this problem, however, the algorithm would not do well. The detected disks are very close to each other and it is thus very difficult to determine which subsets should form texels. The current implementation, based on thresholding with respect to the contrast of the disks, would not be sufficient to break large texels into smaller, true texels. The texel splitting method of Blostein and Ahuja based on concavity would probably do better, although the results might still not be satisfactory.

Figure 8: Water.0000.pgm (left: image; right: disks)

Figure 9: Water.0002.pgm (left: image; right: disks)

Figure 10: GrassLand.0001.pgm (left: image; right: disks)

VI. CONCLUSION

In this paper the shape from texture method proposed by Blostein and Ahuja in [1] was revisited and results of experimentation with a simplified implementation of their algorithm were presented. This shape from texture method differs from other methods that preceded it in its acknowledgement of the importance of texture element extraction. A multiscale texel extraction method is integrated with a surface estimation technique to choose the most plausible plane orientation as well as the true texels out of a set of candidate texels.

The texel extraction implementation was simplified by neglecting to split large texels based on the concavity of their boundary. A thresholding based on the contrast of disks was used instead. This change weakened the algorithm significantly but provided satisfactory results in some restricted cases. The algorithm was shown to produce perfect results when analyzing synthetic textures with simple dot patterns. It was also shown to produce reasonable results with two pictures showing simple patterns of tiles. The algorithm failed on natural textures. The implementation is thus much weaker than the original implementation, which was meant for such natural textures. The assumption that the r/f ratio is known also caused significant problems. The exact ratio was not known for any of the test images and the result of the algorithm varied significantly for ratios in the range 0.1 to 2.0.

The results obtained are sufficient to show the potential of the algorithm on synthetic textures and on simple man-made textures. A faithful implementation of the texture element extraction algorithm should be sufficient to provide good results on natural textures.

ACKNOWLEDGMENT

The author would like to express his gratitude to the authors of the papers discussed in this paper for sharing their results with the academic community.

REFERENCES

[1] D. Blostein and N. Ahuja, "Shape from texture: integrating texture-element extraction and surface estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 12, pp. 1233-1251, December 1989.
[2] D. Blostein and N. Ahuja, "A multiscale region detector," Computer Vision, Graphics and Image Processing, vol. 45, no. 1, pp. 22-41, January 1989.
[3] R. Bajcsy and L. Lieberman, "Texture gradient as a depth cue," Computer Graphics and Image Processing, vol. 5, pp. 52-67, 1976.
[4] A. P. Witkin, "Recovering surface shape and orientation from texture," Artificial Intelligence, vol. 17, pp. 17-45, 1981.
[5] J. Aloimonos, "Shape from texture," Biological Cybernetics, vol. 58, pp. 345-360, 1988.
[6] B. J. Super and A. C. Bovik, "Shape from texture using local spectral moments," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 4, pp. 333-343, April 1995.
[7] VisTex Texture Database, MIT Media Laboratory.

Edge and local feature detection - 2. Importance of edge detection in computer vision

Edge and local feature detection - 2. Importance of edge detection in computer vision Edge and local feature detection Gradient based edge detection Edge detection by function fitting Second derivative edge detectors Edge linking and the construction of the chain graph Edge and local feature

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

Texture Boundary Detection - A Structural Approach

Texture Boundary Detection - A Structural Approach Texture Boundary Detection - A Structural Approach Wen Wen Richard J. Fryer Machine Perception Research Group, Department of Computer Science University of Strathclyde, Glasgow Gl 1XH, United Kingdom Abstract

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

COMPARATIVE STUDY OF IMAGE EDGE DETECTION ALGORITHMS

COMPARATIVE STUDY OF IMAGE EDGE DETECTION ALGORITHMS COMPARATIVE STUDY OF IMAGE EDGE DETECTION ALGORITHMS Shubham Saini 1, Bhavesh Kasliwal 2, Shraey Bhatia 3 1 Student, School of Computing Science and Engineering, Vellore Institute of Technology, India,

More information

Silhouette Coherence for Camera Calibration under Circular Motion

Silhouette Coherence for Camera Calibration under Circular Motion Silhouette Coherence for Camera Calibration under Circular Motion Carlos Hernández, Francis Schmitt and Roberto Cipolla Appendix I 2 I. ERROR ANALYSIS OF THE SILHOUETTE COHERENCE AS A FUNCTION OF SILHOUETTE

More information

Correcting User Guided Image Segmentation

Correcting User Guided Image Segmentation Correcting User Guided Image Segmentation Garrett Bernstein (gsb29) Karen Ho (ksh33) Advanced Machine Learning: CS 6780 Abstract We tackle the problem of segmenting an image into planes given user input.

More information

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant

More information

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth Common Classification Tasks Recognition of individual objects/faces Analyze object-specific features (e.g., key points) Train with images from different viewing angles Recognition of object classes Analyze

More information

Color, Edge and Texture

Color, Edge and Texture EECS 432-Advanced Computer Vision Notes Series 4 Color, Edge and Texture Ying Wu Electrical Engineering & Computer Science Northwestern University Evanston, IL 628 yingwu@ece.northwestern.edu Contents

More information

What is Computer Vision?

What is Computer Vision? Perceptual Grouping in Computer Vision Gérard Medioni University of Southern California What is Computer Vision? Computer Vision Attempt to emulate Human Visual System Perceive visual stimuli with cameras

More information

Computational Foundations of Cognitive Science

Computational Foundations of Cognitive Science Computational Foundations of Cognitive Science Lecture 16: Models of Object Recognition Frank Keller School of Informatics University of Edinburgh keller@inf.ed.ac.uk February 23, 2010 Frank Keller Computational

More information

Outline 7/2/201011/6/

Outline 7/2/201011/6/ Outline Pattern recognition in computer vision Background on the development of SIFT SIFT algorithm and some of its variations Computational considerations (SURF) Potential improvement Summary 01 2 Pattern

More information

Perspective and vanishing points

Perspective and vanishing points Last lecture when I discussed defocus blur and disparities, I said very little about neural computation. Instead I discussed how blur and disparity are related to each other and to depth in particular,

More information

Chapter 4. Clustering Core Atoms by Location

Chapter 4. Clustering Core Atoms by Location Chapter 4. Clustering Core Atoms by Location In this chapter, a process for sampling core atoms in space is developed, so that the analytic techniques in section 3C can be applied to local collections

More information

Final Exam Study Guide

Final Exam Study Guide Final Exam Study Guide Exam Window: 28th April, 12:00am EST to 30th April, 11:59pm EST Description As indicated in class the goal of the exam is to encourage you to review the material from the course.

More information

Feature Detectors - Canny Edge Detector

Feature Detectors - Canny Edge Detector Feature Detectors - Canny Edge Detector 04/12/2006 07:00 PM Canny Edge Detector Common Names: Canny edge detector Brief Description The Canny operator was designed to be an optimal edge detector (according

More information

cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry

cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry Steven Scher December 2, 2004 Steven Scher SteveScher@alumni.princeton.edu Abstract Three-dimensional

More information

Motion and Tracking. Andrea Torsello DAIS Università Ca Foscari via Torino 155, Mestre (VE)


Reference: D. Blostein and N. Ahuja, "Shape from Texture: Integrating Texture-Element Extraction and Surface Estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 12, p. 1233, Dec. 1989.
