Stereo-Foveation for Anaglyph Imaging


Arzu Çöltekin
Helsinki University of Technology, Institute of Photogrammetry and Remote Sensing, FIN-02015 Espoo, Finland
Arzu.Coltekin@hut.fi

ABSTRACT

For 1:1 displays and networked visualization in stereo imaging, we suggest that foveation is a feasible and efficient compression method which also provides a good basis for level of detail (LOD) control. Highest resolution in the area(s) of interest is a generally desirable feature, but it is particularly important for photogrammetric 3D modelling, where high precision may be required depending on the project. For 1:1 stereo viewing on large displays such as panoramic screens or caves in particular, the actual area of interest is much smaller than the whole screen. Instead of loading the whole image pair, we foveate both images and project them after the stereo-foveation. In principle this gives the best possible resolution in the area of interest while still providing a good overview of the neighbouring areas, so that the viewer can navigate and locate other areas of interest, much as the human eyes do across the non-uniform 3D image. We test the idea on an anaglyph pair and create a hybrid model by combining algorithms for anaglyph imaging, disparity maps and foveation, and we define an LOD function for resolution control along the z axis.

Keywords: foveation, space-variant imaging, stereo imaging, anaglyph, visualization, level of detail management, photogrammetry.

1. INTRODUCTION

Foveation is a widely used, biologically motivated compression technique. It is often applied to two-dimensional images and videos, particularly when transmitting over a network in real time. It also attracts interest from the camera research and robotics fields, where the third dimension often matters as well. The method relies on the way the human visual system (HVS) works: the eyes and the brain process visual input in a space-variant manner. When an object of interest is located, the eyes accommodate on it; the fovea keeps the object in focus while the rest of the scene is reconstructed gradually more blurry towards the edges in all dimensions, including depth. In other words, (most) people foveate in stereo.

Photogrammetry uses stereo vision to recover depth from images; it deals with 3D modelling and measurement. For most of its tasks, high-resolution images are the main input into a photogrammetric system. Video can be used as well; the term videogrammetry refers to this kind of videographic photogrammetry. Even though photogrammetry has traditionally been aerial and recently largely space-borne (remote sensing), and mainly aimed at map production, close-range photogrammetry (also referred to as non-topographic photogrammetry) has always existed alongside it. This branch of the field works with terrestrial images, at scales ranging from city and building models down to microscopic imaging. In any case, the field has an extensive interest in image geometry (less in radiometry, though it is not excluded) and in the accuracy of the measured 3D coordinates, as dictated by project requirements.

All of the above said, photogrammetry and foveation do not seem to have met yet. Very little literature takes a closer look at the needs of this field and at how stereo-foveation would fit in. There is an obvious need to manage large images and the level of detail in the scene being inspected.
Foveation lets humans successfully navigate with the peripheral vision while providing the area of interest in full resolution, and it does the same on a display, which should be helpful in photogrammetric tasks as well. This is particularly true in 1:1 visualizations like the one in Figure 1.

Figure 1: An illustration of the display used for full-scale stereo visualizations at Helsinki University of Technology's Institute of Photogrammetry and Remote Sensing, called The Stereodrome by the institute members. (Illustration by Henrik Haggrén.)

In this type of display, as in panorama rooms and caves where the user is spatially immersed in the system, a large part of the scene is in any case lost to natural foveation. After all, one should keep in mind that "a stereoscopic display is an optical system whose final component is the human mind" (Lipton, 1997). Rendering the naturally foveated parts in full resolution would be a clear waste of resources. In this paper we present a conceptual stereo-foveation model that brings the usual benefits of binocular foveation to photogrammetry, together with an implementation of it for anaglyph imaging. Anaglyph imaging was chosen for its simplicity, availability, low cost and platform independence.

2. BACKGROUND

"If we can understand how the perception works, our knowledge can be transferred into rules for displaying information. Following perception based rules, we can present our data in such a way that the important and informative patterns stand out" (Ware, 2000). The points in this statement are well taken by researchers in several fields: the perceived world is recorded, modelled and reconstructed based on our perceptions and beyond. The following terms, met in the literature on human-vision-based level of detail (LOD) management, may serve as an indicator (Çöltekin, 2004):

- Gaze directed
- Gaze contingent
- Perceptually driven
- Eccentricity
- Foveated

All of these terms refer to ways of capturing and simulating the human visual system and are widely met in the literature. The research partly presented in this paper also draws its inspiration from the human visual system.

2D foveation

2D foveation, also called eccentricity LOD, takes visual acuity into account in the lateral plane. Figure 2 illustrates the concept and its caption briefly explains it; a minimal sketch of the pyramid-based composition follows at the end of this section.

Figure 2: A rough illustration of the concept of 2D foveation. The image is segmented into several layers and an image pyramid is created; the resolution gradually decreases towards the periphery, and the image is then re-composed into one space-variant result.

Stereo-foveation (also referred to as binocular foveation or 3D foveation)

In the virtual reality literature, the very same concept, the imitation of the human visual system taking stereo acuity into account, is called depth-of-focus simulation. The robotics literature more often uses the term binocular foveation. There is a very large body of literature on the topic that we will not cover here; we mention only the most popular theory inspired by the human visual system. Stereo depth perception is said to be possible within Panum's fusional area, where the input received from the two eyes overlaps.

Figure 3: Panum's fusional area. The angles α and β in the figure are formed at the viewer's eyes with respect to the screen, and disparity = α − β. After Ware, C., Chapter 8 of Information Visualization: Perception for Design, Morgan Kaufmann, San Francisco, 2000.

There are a few different approaches to recover the depth that falls into this area; space limitations prevent us from introducing each of them here. Please refer to Ware (2000), Luebke et al. and Oshima et al. (1996) for more on these methods.
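To make the pyramid-based composition of Figure 2 concrete, here is a minimal sketch in Python/NumPy. The helper names and the linear eccentricity-to-level mapping are our own assumptions, not the paper's code: the pyramid is built by repeated halving, and every pixel samples the level given by its lateral distance from the fixation point.

import numpy as np

def build_pyramid(img, levels):
    """Image pyramid: each level is half the size of the previous one
    (plain 2x2 block averaging on a single-channel image)."""
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        prev = pyr[-1]
        h, w = prev.shape[0] // 2, prev.shape[1] // 2
        pyr.append(prev[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3)))
    return pyr

def foveate_2d(img, fx, fy, levels=5):
    """Space-variant re-composition: full resolution at the fixation point
    (fx, fy), coarser pyramid levels towards the periphery."""
    pyr = build_pyramid(img, levels)
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - fx, yy - fy)
    # Lateral eccentricity mapped linearly onto the available pyramid levels.
    lod = np.minimum((dist / dist.max() * levels).astype(int), levels - 1)
    out = np.empty((h, w), dtype=np.float64)
    for level in range(levels):
        mask = lod == level
        if not mask.any():
            continue
        # Nearest-neighbour lookup into the coarser level (2**level times smaller).
        yi = np.minimum(yy[mask] >> level, pyr[level].shape[0] - 1)
        xi = np.minimum(xx[mask] >> level, pyr[level].shape[1] - 1)
        out[mask] = pyr[level][yi, xi]
    return out

# Example: foveate a synthetic 480x640 grayscale image around (320, 240).
# foveated = foveate_2d(np.random.rand(480, 640), fx=320, fy=240, levels=5)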

3. IMPLEMENTATION

The conceptual model of the implementation aims at real-time 3D foveation, which presumes a precise binocular eye tracking system at hand. Under the current conditions we cannot test this with an eye tracker owing to hardware limitations, so the pilot implementation takes a stereo pair as input and foveates the anaglyph image based on user interaction. With a precise binocular eye tracker, the system would know exactly where the user is looking, and that location would be taken as the area of interest. However, "unfortunately in present-day computer graphics systems, particularly those that allow for real-time interaction, depth of focus is never simulated" (Ware, 2000). We therefore simulate what it would be like if a functioning eye tracker were present. In this implementation, the area of interest is selected interactively: the user specifies a point by its image coordinates, this point is accepted as the centre of foveation, and the image is segmented using it as one of the parameters. A level-of-detail-aware compression in all of the x, y and z directions is then applied (stereo-foveation).

Foveaglyph

Since it is a foveation application for anaglyph imaging, the program is named foveaglyph. It operates in the following order:

1) The camera is calibrated (pre-calculation).
2) The input images are corrected; lens distortion and affinity are removed (pre-calculation).
3) The disparity map is calculated (pre-calculation).
4) Based on the normal case of stereo geometry, the z value for each pixel is calculated.
5) The anaglyph image is created and foveation is applied based on a simple geometric model (a generic red-cyan composition is sketched below).

Even though the 3D foveation is applied directly to the anaglyph result, the program also allows the user to foveate a single image in 2D.
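Foveaglyph's own anaglyph routine is not spelled out in the paper; as a point of reference, a conventional red-cyan anaglyph simply takes the red channel from the left image and the green and blue channels from the right one. The sketch below (Python/NumPy; the function name is ours, not foveaglyph's) shows that composition for a rectified pair.

import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Conventional red-cyan anaglyph for a rectified stereo pair:
    red channel from the left image, green and blue from the right.
    Both inputs are HxWx3 arrays of identical size."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]    # red   <- left image
    anaglyph[..., 1] = right_rgb[..., 1]   # green <- right image
    anaglyph[..., 2] = right_rgb[..., 2]   # blue  <- right image
    return anaglyph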

Algorithms

Foveaglyph uses original algorithms for making the anaglyph and for the 2D and 3D foveations, but relies on Depth Discontinuities by Pixel-to-Pixel Stereo (p2p) by Stan Birchfield and Carlo Tomasi for image matching and for generating the disparity map. Before the stereo pair enters the foveation process, the camera is calibrated and the lens distortions and affinity are corrected using Petteri Pöntinen's software, developed for Helsinki University of Technology's Institute of Photogrammetry and Remote Sensing.

Image Matching and Disparity Map Calculation: Depth Discontinuities by Pixel-to-Pixel Stereo

Referred to in short as p2p, this is an algorithm and a piece of code that its authors, Stan Birchfield and Carlo Tomasi, distribute on a web site (Birchfield et al.). The algorithm is described in a Stanford University technical report, STAN-CS-TR, July 1996 (Birchfield et al., 1996). In brief, as stated by Birchfield et al. (1996), its features can be described as follows:

Part I. "Our stereo algorithm explicitly matches the pixels in the two images, leaving occluded pixels unpaired. Matching is based upon intensity alone without utilizing windows. Since the algorithm prefers piecewise constant disparity maps, it sacrifices depth accuracy for the sake of crisp boundaries, leading to precise localization of the depth discontinuities. Three features of the algorithm are worth noting: (1) unlike most stereo algorithms, it does not require texture throughout the images, making it useful in unmodified indoor settings, (2) it uses a measure of pixel dissimilarity that is provably insensitive to sampling, and (3) it prunes bad nodes during the search, resulting in a running time that is faster than that of standard dynamic programming."

Part II. "After the scan lines are processed independently, the disparity map is post-processed, leading to more accurate disparities and depth discontinuities. Both the algorithm and the postprocessor are fast, producing a dense disparity map in about 1.5 microseconds per pixel per disparity on a workstation."

Calculating the Z Values: the Normal Case of Stereo

Once we have the disparity map, we have the parallax value for each pixel, and from this it is possible to recover the Z values. The geometry is simplest in the so-called normal case of stereo, sometimes referred to as the ideal case. The normal-case geometry must be met while taking the pictures, as in Figure 4: the optical axes of the two cameras are parallel, their image planes are coincident, and there is no vertical parallax. Several similar triangles can then be formed, and the following formula gives the Z value (a per-pixel sketch follows after Figure 4):

Z = B · c / p_x

where
B: base distance between the two cameras,
c: camera constant (focal length after calibration),
p_x: x-parallax (see the appendix on the use of the terms parallax and disparity).

Figure 4: A bird's-eye-view graphic of the normal case of stereo. In this setting the optical axes of the two cameras are parallel and their image planes are coincident. The geometry is simple and suits anaglyph imaging well, even though it is restricted to a careful image acquisition setting and is not likely to be achieved free-hand.
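Since both c and p_x are expressed in pixels, Z comes out in the units of the base B. A minimal sketch of this per-pixel conversion (Python/NumPy; variable names are ours, not foveaglyph's):

import numpy as np

def depth_from_disparity(parallax_px, base, camera_const_px, min_parallax=1e-6):
    """Normal case of stereo: Z = B * c / p_x for every pixel.
    parallax_px      -- x-parallax (disparity) map in pixels
    base             -- base distance B between the two cameras (e.g. metres)
    camera_const_px  -- calibrated camera constant c, in pixels
    Pixels with (near-)zero parallax lie at infinity and are returned as inf."""
    p = np.asarray(parallax_px, dtype=np.float64)
    z = np.full(p.shape, np.inf)
    valid = np.abs(p) > min_parallax
    z[valid] = base * camera_const_px / p[valid]
    return z

# Example: a 2x2 parallax map, 0.3 m base, camera constant of 1500 px.
# depth_from_disparity([[30.0, 15.0], [10.0, 0.0]], base=0.3, camera_const_px=1500)
# -> [[15., 30.], [45., inf]]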

Forming the Image Pyramid

A simple geometric model, still under development, is used to calculate the image pyramid. The two most remote points are searched for (also taking the maximum and minimum disparities into account for the maximum depth), and the distance between these two is set as the maximum distance. With the point of interest denoted a and each visited pixel denoted b, the distance between them follows from the normal-case object coordinates:

D = sqrt( ((B/p_b)·x_b − (B/p_a)·x_a)² + ((B/p_b)·y_b − (B/p_a)·y_a)² + ((B·c/p_b) − (B·c/p_a))² )

where p_a and p_b are the parallaxes and (x, y) the image coordinates of a and b; D_max is the largest such distance in the image.

Once the maximum distance is known, the image is segmented into levels, taking this and the user-specified level of detail into account. The image pyramid is formed as in mip-mapping: the first derived image is half the size of the original, the next one half the size of that, and so on. Every pixel is then visited and a decision is made as to which element of the pyramid it should take its resolution from. This can basically be expressed as (see also the sketch after Figure 5):

l = d · l_max / D_max

where d is the distance between the point of interest and the pixel to be determined, l is that pixel's LOD, l_max is the maximum number of levels possible, and D_max is the maximum distance. A more detailed description of this algorithm and of further developments will follow in future publications.

A Nikon D100 was used to capture the images, with the focus set to infinity and the aperture (f-stop) set to 5.6. The camera was calibrated by Petteri Pöntinen, yielding the intrinsic parameters (camera constant and principal point) at sub-pixel precision.

Some graphic results from foveaglyph follow.

Figure 5: The image pyramid of 5 levels.
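As an illustration of the LOD rule above, the following sketch (Python/NumPy; the variable names and the use of plain pixel indices for x and y are our simplifying assumptions, not foveaglyph's code) derives normal-case object coordinates from the disparity map and maps each pixel's 3D distance to the point of interest onto a pyramid level.

import numpy as np

def lod_map(parallax_px, poi_xy, base, camera_const_px, l_max):
    """Per-pixel level of detail from the rule  l = d * l_max / D_max.
    d is the 3D distance of each pixel to the point of interest (PoI);
    object coordinates follow the normal case, X = B*x/p, Y = B*y/p, Z = B*c/p.
    For simplicity x, y are plain pixel indices here; strictly they should be
    image coordinates relative to the principal point."""
    p = np.asarray(parallax_px, dtype=np.float64)
    p = np.where(np.abs(p) < 1e-6, 1e-6, p)        # guard against zero parallax
    h, w = p.shape
    yy, xx = np.mgrid[0:h, 0:w]
    X = base * xx / p
    Y = base * yy / p
    Z = base * camera_const_px / p
    ax, ay = poi_xy                                # PoI given as (x, y)
    d = np.sqrt((X - X[ay, ax]) ** 2 + (Y - Y[ay, ax]) ** 2 + (Z - Z[ay, ax]) ** 2)
    # Normalize by the largest distance in the image and clip to the top level.
    return np.minimum((d * l_max / d.max()).astype(int), l_max - 1)

# The stereo-foveated anaglyph is then composed exactly as in the 2D sketch of
# Section 2, except that the pyramid level for each pixel comes from this map
# instead of the lateral distance alone.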

Figure 6: Stereo-foveated image, with the approximate point of interest marked.

4. FUTURE WORK

We are working on a more perceptually driven geometric model for our stereo-foveation LOD function. The next step in this work will be to compare the performance of our method with alternative approaches. Another piece of future work will be to study the cognitive effects through a user survey: the intention is to determine whether the interference of artificial stereo-foveation with the natural foveation performed by the human eyes creates a disturbance or strain in the viewer.

5. ACKNOWLEDGEMENTS

The programming of foveaglyph would not have been possible without Çağrı Çöltekin of RIPE NCC. I also owe thanks to Petteri Pöntinen for letting me use his calibration software and for valuable practical advice. Last but not least, Henrik Haggrén has given inspiration and been at the centre of the discussions that shaped the ideas presented in this paper.

APPENDIX

On the use of the terms disparity and parallax: the two words are often used interchangeably. Lipton (1997) suggests that disparity is retinal (biological) while parallax is what happens on the screen; the rest of the field does not seem to differentiate. In photogrammetric jargon the word parallax is more common and may refer either to the retinal shift or to the shift after projection onto the image plane. Here is what Lipton says on the issue:

"Parallax and disparity are similar entities. Parallax is measured at the display screen, and disparity is measured at the retinae. When wearing our eyewear, parallax becomes retinal disparity. It is parallax which produces retinal disparity, and disparity in turn produces stereopsis. Parallax may also be given in terms of angular measure, which relates it to disparity by taking into account the viewer's distance from the display screen." (Lipton, 1997)

REFERENCES

Birchfield, S., Tomasi, C., Depth Discontinuities by Pixel-to-Pixel Stereo (p2p software, distributed on the authors' web site).

Birchfield, S., Tomasi, C., Depth Discontinuities by Pixel-to-Pixel Stereo, Stanford University Technical Report STAN-CS-TR, July 1996.

Çöltekin, A., Foveation Support and Current Photogrammetric Software, ISPRS Congress Proceedings, Istanbul, 2004.

Lipton, L., Stereographics Handbook, 1997.

Luebke, D., Reddy, M., Cohen, J.D., Varshney, A., Watson, B., Huebner, R., Chapter 8 of Level of Detail for 3D Graphics, Morgan Kaufmann Series in Computer Graphics and Geometric Modeling.

Oshima, T., Yamamoto, H., Tamura, H., Gaze-Directed Adaptive Rendering for Interacting with Virtual Space, Proceedings of VRAIS '96, 1996.

Ware, C., Chapter 8 of Information Visualization: Perception for Design, Morgan Kaufmann, San Francisco, 2000.
