Texture Generation for the Computer Representation of the Upper Gastrointestinal System


A. Gastélum, P. Delmas, J. Márquez, "Texture Generation for the Computer Representation of the Upper Gastrointestinal System", Proceedings of Image and Vision Computing New Zealand 2007, pp. –, Hamilton, New Zealand, December 2007.

Texture Generation for the Computer Representation of the Upper Gastrointestinal System

Alfonso Gastélum 1, Patrice Delmas 1 and Jorge Márquez 2

1 University of Auckland, Tamaki Campus, New Zealand
2 Laboratory of Analysis of Images and Visualization, CCADET, UNAM, México

valdenar@gmail.com, agas012@ec.auckland.ac.nz

Abstract

We previously introduced a system to obtain textures using the lens system of a video-endoscope; in this work we present our most recent advances. Our goal is to build a system which obtains textures for our 3D model of the oesophagus and, in a later stage, to build new meshes from individual video-endoscopies. The importance of obtaining a database of textures for our model lies in the necessity of training new specialists in endoscopy for the detection of diseases characterised by abnormal colour patterns. The colour extraction is based on the physical properties of the endoscope's lens and associated illumination system. We obtain a group of 3-dimensional coordinates with RGB values that will be related to the actual 3D model. To relate the texture as captured from endoscopy videos with actual depth (the z coordinate), we used a second camera to record the external view of the endoscope insertion.

Keywords: textures, mesh colour disease representation, mesh building.

1 Introduction

Computer training in endoscopic procedures allows the specialist to interact with a virtual model and provides different points of view of the anatomical area of interest. Such enriched navigation permits the specialist to have a better understanding of the whole anatomical volume.
To complete a computational training system with our upper gastrointestinal model and navigator, reported in [1], we developed a navigation environment that allows a user to explore the model and train in anomaly detection. Computer models for endoscopies must provide a realistic environment [4], one important part being the texture. Endoscopic procedures are strongly related to optical inspection, and the availability of a library of common disease textures will help the training of new specialists. The goal of this work is to present a method which obtains the above-mentioned textures and corresponding depth information and maps them to our actual 3D model. The navigation system may then challenge the specialist with various near-real case scenarios.

2 Problem

The original texture of our model was obtained from the Visible Human Project (VHP) database [2]. As the colour of the images in the database was altered by the post-mortem condition, it does not offer a realistic texture representation of the oesophagus (as shown in Fig. 1).

Figure 1: a) Real endoscopy image; b) model of the oesophagus mapped with the colours extracted from the VHP database.

To overcome this, we replaced the original texture with more realistic ones computed from video-endoscopies of the oesophagus.

The procedure comprises four stages:

1. The preliminary depth values are obtained using the properties of the lens system.
2. In order to obtain a better depth value, a procedure relating the luminance of a pixel to its depth has been developed.
3. The images are weighted in order to decide whether they present useful information.
4. The results are processed to obtain a final RGB value for our textures.

3 Procedure

Since our model is built to serve as a training system, the navigation must provide the closest possible realistic environment. One improvement is to present textures that resemble the real ones. The video camera at the tip of the endoscope records the inner walls of the oesophagus, but there is no built-in way to record the camera's depth inside the patient. It was therefore necessary to use an external video camera to capture the depth values as inscribed on the side of the endoscope tube.

The arrangement of the cameras is shown in Figure 2. The external webcam is placed above the head of the patient, so it can record the whole procedure without being obstructed by the specialist or the patient. This camera provides a reading of the amount of endoscopic tube inserted in the patient, as shown in Fig. 3.

Figure 2: Endoscopic experimental arrangement. (a) is the external video camera.

Figure 3: Image from the external video taken with a webcam. Point (a) shows the depth marks on the endoscope tube.

The video from the endoscope camera provides the internal view of the organ (Fig. 4). The z-coordinate assigned to each frame is the one obtained from the external video camera.

Figure 4: Image of the internal walls of the oesophagus.

3.1 Optical System

The first step consists in the synchronised acquisition of the video-endoscopies and the external support video. After obtaining the endoscopy and endoscope-tube depth videos, we built a timetable that relates them frame by frame: the external camera provides the z-coordinate for each frame, while the video-endoscopies provide the RGB information. The depth value and the video-endoscopies are used to obtain a collection of concentric isolines, which depend on the local illumination and the lens distortion of each frame. The isolines are used to obtain the final colour map.

Figure 5 shows an example (similar to the endoscopic case) of a 3D surface plot obtained with ImageJ and the ImageJ3D plugin. The surface plot takes the intensity value as a representation of depth.

3.2 3D coordinates projections

In order to obtain a texture from the collection of 2D frames, we use the optical properties of the endoscope's lens and the light-propagation phenomena to obtain luminance isolines that represent the depth in each individual frame. The objective is to obtain from each frame a collection of points that gives the (x, y, z) position coordinates and an associated RGB value. The z coordinate is constructed from the initial depth (as given by the external video camera) and a depth-dependent classification of the endoscopic images' luminance values.
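The synchronisation step of section 3.1 can be sketched as a nearest-in-time lookup between the two recordings. This is only a minimal sketch: the function name `build_timetable`, the frame rate, timestamps and depth readings below are all invented for illustration and are not part of the authors' system.

```python
# Sketch (hypothetical data): build the frame-by-frame "timetable" that pairs
# each endoscopic frame with the tube-depth reading closest to it in time.

def build_timetable(endo_fps, endo_n_frames, depth_samples):
    """Map each endoscopy frame index to the nearest external depth reading.

    depth_samples: list of (timestamp_seconds, depth_mm) pairs read off the
    marks inscribed on the endoscope tube by the external webcam.
    """
    timetable = {}
    for i in range(endo_n_frames):
        t = i / endo_fps  # acquisition time of endoscopic frame i
        # pick the depth sample nearest in time
        _ts, depth = min(depth_samples, key=lambda s: abs(s[0] - t))
        timetable[i] = depth
    return timetable

# Example: a 25 fps endoscope video, depth sampled once per second.
samples = [(0.0, 0.0), (1.0, 12.0), (2.0, 25.0), (3.0, 33.0)]
table = build_timetable(endo_fps=25.0, endo_n_frames=100, depth_samples=samples)
```

In a real setting the timestamps would come from the two video streams themselves; the nearest-in-time rule is the simplest way to relate cameras running at different frame rates.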

Figure 5: Example of a 3D surface built using the intensity values of a 2D image.

3.3 Endoscope Lens Optical System

The properties of the field of view of a wide-angle lens ([3], [5]) are used as one of the parameters to map 2D image vertices to a 3D point representation. We consider the ideal case of a cylinder, where the greater the angle of view, the closer the object is to the lens along the Z axis (Fig. 6):

    z = a / θ    (1)

where θ is the angle of view and a is a scalar factor from the lens.

Figure 6: a) Ideal representation of the oesophagus; the sphere represents the lens position. b) Side view of the cylinder, showing the relation of the angle of view with the depth.

This gives us a first approximation of the z-coordinate for each pixel. Later on (see section 3.4) we will use the luminance value of the endoscopic images to improve the z-coordinate estimate for any given pixel of the oesophagus.

There are important differences between a real oesophagus (images shown in figure 4) and a cylindrical model. Three problems occur when analysing real endoscopy images:

1. The centre of the anatomical structure (oesophagus) and the centre of the lens are not always the same. To correct this, we need to find a new relation for the centre, using a correcting term x.
2. Since the centre of the endoscope and that of the oesophagus are not the same, we lose the symmetry shown in the cylinder model (Fig. 6b).
3. The presence of tissue folds is the most difficult problem related to segmentation, because we cannot characterise 3D folds using only one image, and occluded regions will appear as black shadows in our model.

To map the colour values from the images to the vertex table, the distance from the corrected centre to the pixel of interest is calculated. All the pixels at the same distance belong to one contour, and each of these contours has a single depth value.

Figure 7: Contour formed by pixels at the same distance d from the centre.

From each representative distance we obtain a list of pixels; the number N of pixels that belong to a given contour varies. The number of contours N_c obtained depends on the range of angle-of-view values used and the separation between angle values. Each contour belongs to a specific angle-of-view value:

    N_c = (Max(θ) - Min(θ)) / m    (2)

where m is the number of quantization steps.

As explained in point 1 of our list of problems, the lens may not be positioned at the centre of the oesophagus, or the shape of the oesophagus may not be cylindrical and symmetrical. To account for these cases we have to obtain a new centre for the image, which helps us reinstate the cylindrical-symmetry paradigm of the oesophagus. Taking advantage of the cylindrical symmetry of the endoscopic image intensities, we traced plot profiles from the centre of the image at different angles, as shown in figure 8. From the plot (see Fig. 8d) we obtain the distribution of the zone with the lowest intensity values, and the centre of this zone becomes the new centre of the image.
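Equations (1) and (2) can be sketched as a small binning routine: each pixel's distance to the (re-estimated) centre selects a contour, each contour gets an angle of view, and the angle gives a relative depth. This is a sketch under stated assumptions: the linear distance-to-angle mapping and the inverse form z = a/θ are our reading of the reconstructed equation (1), and all numeric values are illustrative.

```python
import math

# Sketch: quantize centre-to-pixel distances into m concentric contours and
# derive a relative depth per contour (ideal cylinder model, invented numbers).

def contour_index(dist, max_dist, m):
    """Quantize a centre-to-pixel distance into one of m contour bins."""
    i = int(dist / max_dist * m)
    return min(i, m - 1)  # clamp the outermost pixel into the last bin

def angle_of_view(idx, m, theta_min, theta_max):
    """Angle of view for contour idx; contours near the centre (small idx)
    see far-away wall sections, i.e. small angles."""
    step = (theta_max - theta_min) / m       # angular separation, cf. eq. (2)
    return theta_min + (idx + 0.5) * step    # bin-centre angle

def depth_from_angle(theta, a=1.0):
    """Eq. (1) as reconstructed: the larger the angle of view, the closer
    the wall, so z decreases with theta."""
    return a / theta

# Example: 8 contours over a 10-80 degree field of view.
m, tmin, tmax = 8, math.radians(10), math.radians(80)
idx = contour_index(dist=30.0, max_dist=120.0, m=m)
theta = angle_of_view(idx, m, tmin, tmax)
z = depth_from_angle(theta, a=1.0)
```

The scalar a would in practice be fitted per lens; here it is left at 1 so z is only a relative ordering of the contours.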

Figure 8: a) Original image; b) Gaussian-filtered images; c) Profile lines; d) Plotted profiles resulting from different angles.

3.4 Lightness colour Space

Next, we study the colour properties of the endoscopic frames in the Hue, Saturation and Lightness (HSL) colour space. We first transform from the red, green, blue (RGB) colour space into the HSL colour space. The only sources of illumination inside the oesophagus are the two light-guide lenses located at the tip of the endoscope (Fig. 9).

Figure 9: Endoscope distal tip.

Since the illumination system can be seen as two point sources of light, we can evaluate the decay of the light intensity to characterise our 2D images. Considering reflection-free images, the intensity value at different distances is described in figure 10. The values were obtained from the manufacturer (Moritex) of the light-guide lens.

Figure 10: Cone representation of the endoscope light system.

Given this behaviour, we can assume that the intensity of the light at each pixel predominantly depends only on the distance between the light source and the object at the moment of recording. We next use the information in the L channel to obtain a better representation of the depth properties of the image.

Figure 11: Process for obtaining the L channel from each frame of the video-endoscopies. From left to right: a) original RGB image; b) image showing the L channel; c) same image with a different LUT (ImageJ "fire").

From each resulting L-channel image we extract iso-contours having the same L-value. To obtain a better result we first apply a Gaussian filter to the images to remove small changes in lightness. Next, a z-value is assigned to each pixel depending on its L-value. Figure 12 shows a schematic view of the use of the L-value: the coordinates forming the surfaces are at identical (x, y) image coordinates, while z is given by their L-value. Finally, each group of coordinates is transformed to a polar coordinate system.

Figure 12: Depth field map obtained from the image L channel. From left to right: a) isolines representing the light changes; b) resulting surface; c) surface with the RGB colour as texture; d) different angle of view.
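The lightness-based depth cue of section 3.4 can be sketched as follows: compute the HSL L channel per pixel, group pixels into iso-lightness bins, and turn L into a relative depth. The inverse-square decay model, the constants and the helper names are illustrative assumptions; the paper's actual calibration comes from the manufacturer's decay curve.

```python
# Sketch: HSL lightness per pixel, iso-lightness grouping, and a point-source
# depth model (darker = farther from the light guides). Illustrative only.

def lightness(r, g, b):
    """HSL lightness of an 8-bit RGB pixel, scaled to [0, 1]."""
    mx, mn = max(r, g, b), min(r, g, b)
    return (mx + mn) / 2.0 / 255.0

def depth_from_lightness(L, k=1.0, eps=1e-6):
    """Point-source model (assumption): intensity decays ~1/d^2, so d ~ 1/sqrt(L)."""
    return k / ((L + eps) ** 0.5)

def iso_bins(pixels, n_bins=4):
    """Group pixels of one frame into iso-lightness contours.

    pixels: list of ((x, y), (r, g, b)). Returns {bin_index: [(x, y), ...]}.
    """
    bins = {}
    for (x, y), rgb in pixels:
        L = lightness(*rgb)
        idx = min(int(L * n_bins), n_bins - 1)  # clamp L == 1.0 into last bin
        bins.setdefault(idx, []).append((x, y))
    return bins

# A tiny hypothetical frame: a mid-lit pixel, a dark pixel, a saturated pixel.
frame = [((0, 0), (200, 150, 150)), ((1, 0), (40, 20, 20)), ((2, 0), (255, 255, 255))]
grouped = iso_bins(frame, n_bins=4)
```

In the paper the images are Gaussian-filtered before binning; that smoothing step is omitted here for brevity.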

3.5 Image Classification

For the system to obtain isolines that classify the depth at which a pixel lies in a frame, the frame must fulfil some properties. The selection of frames is important so that the process does not consider frames that would not add new texture information to the averaged RGB colour. First, the algorithm checks whether it is possible to obtain a centre for the image: the image must exhibit a darker area, which ensures depth. Next, we check whether the image has sufficient isolines to differentiate enough depth levels. To do so, the algorithm computes the L-channel histogram and its standard deviation. To be kept, the histogram of an image must present a Gaussian-like distribution.

Figure 13: Endoscopy images and corresponding histograms. From top to bottom: a) rejected frame; b) accepted frame.

Finally, after re-centring the images, the isolines must fulfil the following heuristics:

1) The isolines with a bigger radius must also have the highest lightness value.
2) If an isoline has a small radius and a high lightness value, then the isoline must belong to the corners of the image.

Figure 14: Endoscopy images and corresponding isolines. From top to bottom: a) original image; b) re-centred image.

Figure 15: a) Top: original image of a frame where the lens is too close to the oesophagus wall; bottom: L-channel; right: isolines. b) Top: original, highly asymmetric image; bottom: L-channel; right: isolines. c) Top: original image where saturation of light in the corner might resemble symmetry, but the frame is discarded; bottom: L-channel; right: isolines.

3.6 Texture model

We now have a system to obtain isolines from our images, and two different ways to calculate the z-values for each frame of the video-endoscopies. The next step is to build the texture model. First, we apply the procedures described in sections 3.3 and 3.4 to all the frames in the video-endoscopies. For each frame we obtain a group of pixels with angle and depth values, one from the lens analysis and one from the lightness.

We build a data table containing the angle and the total depth value (the initial depth of the endoscope distal tip plus the depth obtained for each contour of the image). At this point we average the depths from the two techniques, and then compute a final average over all the points that have the same total depth. In the end we have a group of points with angle, depth and RGB values.

The 3D triangle mesh model can be segmented into 2D contours; we select the contour of interest using the initial depth given by the endoscope tube values plus each of the d values from the images. We project each pixel onto our 3D model using the angle of the pixel and the radius of the 2D contour. The coordinates of the pixels in the 3D model are:

    p = R_c cos(θ)
    q = R_c sin(θ)    (4)

The vertex positions that are not mapped from the pixels obtain their RGB values from the colour values of their neighbours by bilinear interpolation. Figure 16 shows different stages of the new texture being mapped onto the model.

Figure 16: Different stages of the colour mapping over the model.

After obtaining a unique RGB value for each z-coordinate and angle, we need to eliminate the effect of differences in illumination. We transform the initial RGB value of each pixel to its HSL value and normalise all L values to a single value. Our 3D navigator simulates the lighting as emitted from the endoscope tip; we combine the original Hue and Saturation values of each pixel with the L value obtained from the simulated light.

Figure 17: a) Real endoscopy image; b) model of the oesophagus mapped with the colour from a video-endoscopy.

4 Conclusions

We described current work to incorporate disease-related or healthy colour textures into our computer model of the oesophagus. This information is obtained from actual video-endoscopies, with the goal of providing the user with a more realistic experience when using our model. Another advantage is that, in real procedures, it is very important for the specialist to be able to distinguish the colour patterns of diseases such as GERD [6]. Building such a library will allow the user to train in the detection of different anatomical conditions.

The frame classification described in section 3.5 allowed us to better separate suitable and unsuitable images for texture-retrieval purposes. This improved the detection and discarding of images that give no information related to the RGB value, due to light saturation (losing anatomy) or lack of symmetry.

The next step is to build a comprehensive library of different diseases represented by the colour textures obtained from different video-endoscopies and map them to our system.

5 Acknowledgements

We would like to thank Dr Jose Luis Mosso for providing the video-endoscopies.

References

[1] A. Gastélum and J. Márquez, "Construction of a model of the upper gastrointestinal system for the simulation of gastroesophagoendoscopic procedures". VIII Mexican Symposium on Medical Physics, American Institute of Physics 724.

[2] V. Spitzer, M. J. Ackerman, A. L. Scherzinger, and D. Whitlock, "The Visible Human Male: A Technical Report". J. of the Am. Medical Informatics Assoc. 3(2).

[3] R. Y. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses". IEEE J. Robotics and Automation 3.

[4] M. Bro-Nielsen, J. L. Tasto, R. Cunningham, and G. L. Merril, "PreOp endoscopic simulator: a PC-based immersive training system for bronchoscopy". Studies in Health Technology and Informatics 62.

[5] F. Devernay and O. Faugeras, "Straight lines have to be straight: automatic calibration and removal of distortion from scenes of structured environments". Machine Vision and Applications.

[6] ASGE Publication, "The role of endoscopy in the management of GERD". Gastrointest. Endosc. 49(6).
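As a closing illustration, the texture-model step of section 3.6 can be sketched end to end: combine the two per-pixel depth estimates, project a pixel onto its mesh contour with equation (4), and fill unmapped vertices from neighbours. The contour data, radii and colours are invented, and ring-neighbour averaging stands in for the paper's bilinear interpolation; this is a sketch, not the authors' implementation.

```python
import math

# Sketch of the section 3.6 pipeline with invented data.

def total_depth(tube_depth, z_lens, z_light):
    """Initial tube depth plus the average of the two image-based z estimates."""
    return tube_depth + (z_lens + z_light) / 2.0

def project(theta, radius):
    """Equation (4): p = R_c cos(theta), q = R_c sin(theta)."""
    return radius * math.cos(theta), radius * math.sin(theta)

def fill_unmapped(colours):
    """colours: RGB tuples or None (unmapped), ordered around one contour ring.

    Unmapped vertices take the channel-wise mean of their mapped ring
    neighbours (a 1D stand-in for the bilinear interpolation in the paper).
    """
    n = len(colours)
    out = list(colours)
    for i, c in enumerate(colours):
        if c is None:
            nb = [colours[(i - 1) % n], colours[(i + 1) % n]]
            nb = [v for v in nb if v is not None]
            if nb:
                out[i] = tuple(sum(ch) / len(nb) for ch in zip(*nb))
    return out

z = total_depth(tube_depth=120.0, z_lens=4.0, z_light=6.0)   # combined depth
p, q = project(theta=math.pi / 2, radius=10.0)               # point on contour
ring = [(200, 100, 100), None, (100, 50, 50)]                # one vertex unmapped
filled = fill_unmapped(ring)
```

The same three operations, applied per contour and per frame and followed by the L-value normalisation described above, yield the final textured mesh.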


More information

4. Refraction. glass, air, Perspex and water.

4. Refraction. glass, air, Perspex and water. Mr. C. Grima 11 1. Rays and Beams A ray of light is a narrow beam of parallel light, which can be represented by a line with an arrow on it, in diagrams. A group of rays makes up a beam of light. In laboratory

More information

Horus: Object Orientation and Id without Additional Markers

Horus: Object Orientation and Id without Additional Markers Computer Science Department of The University of Auckland CITR at Tamaki Campus (http://www.citr.auckland.ac.nz) CITR-TR-74 November 2000 Horus: Object Orientation and Id without Additional Markers Jacky

More information

Pop Quiz 1 [10 mins]

Pop Quiz 1 [10 mins] Pop Quiz 1 [10 mins] 1. An audio signal makes 250 cycles in its span (or has a frequency of 250Hz). How many samples do you need, at a minimum, to sample it correctly? [1] 2. If the number of bits is reduced,

More information

EXPERIENCE THE POWER OF LIGHT

EXPERIENCE THE POWER OF LIGHT EXPERIENCE THE POWER OF LIGHT GASTROENTEROLOGY MAKING YOUR DAILY WORK EASIER Fujifilm is a pioneer in diagnostic imaging and information systems for healthcare facilities. Today, Fujifilm is also engaging

More information

HISTOGRAMS OF ORIENTATIO N GRADIENTS

HISTOGRAMS OF ORIENTATIO N GRADIENTS HISTOGRAMS OF ORIENTATIO N GRADIENTS Histograms of Orientation Gradients Objective: object recognition Basic idea Local shape information often well described by the distribution of intensity gradients

More information

Vision-Based Technologies for Security in Logistics. Alberto Isasi

Vision-Based Technologies for Security in Logistics. Alberto Isasi Vision-Based Technologies for Security in Logistics Alberto Isasi aisasi@robotiker.es INFOTECH is the Unit of ROBOTIKER-TECNALIA specialised in Research, Development and Application of Information and

More information

Non-axially-symmetric Lens with extended depth of focus for Machine Vision applications

Non-axially-symmetric Lens with extended depth of focus for Machine Vision applications Non-axially-symmetric Lens with extended depth of focus for Machine Vision applications Category: Sensors & Measuring Techniques Reference: TDI0040 Broker Company Name: D Appolonia Broker Name: Tanya Scalia

More information

then assume that we are given the image of one of these textures captured by a camera at a different (longer) distance and with unknown direction of i

then assume that we are given the image of one of these textures captured by a camera at a different (longer) distance and with unknown direction of i Image Texture Prediction using Colour Photometric Stereo Xavier Lladó 1, Joan Mart 1, and Maria Petrou 2 1 Institute of Informatics and Applications, University of Girona, 1771, Girona, Spain fllado,joanmg@eia.udg.es

More information

Accurate 3D Face and Body Modeling from a Single Fixed Kinect

Accurate 3D Face and Body Modeling from a Single Fixed Kinect Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this

More information

Image Analysis - Lecture 1

Image Analysis - Lecture 1 General Research Image models Repetition Image Analysis - Lecture 1 Magnus Oskarsson General Research Image models Repetition Lecture 1 Administrative things What is image analysis? Examples of image analysis

More information

Product information. Hi-Tech Electronics Pte Ltd

Product information. Hi-Tech Electronics Pte Ltd Product information Introduction TEMA Motion is the world leading software for advanced motion analysis. Starting with digital image sequences the operator uses TEMA Motion to track objects in images,

More information

Representing and Computing Polarized Light in a Ray Tracer

Representing and Computing Polarized Light in a Ray Tracer Representing and Computing Polarized Light in a Ray Tracer A Technical Report in STS 4600 Presented to the Faculty of the School of Engineering and Applied Science University of Virginia in Partial Fulfillment

More information

Homework 4 Computer Vision CS 4731, Fall 2011 Due Date: Nov. 15, 2011 Total Points: 40

Homework 4 Computer Vision CS 4731, Fall 2011 Due Date: Nov. 15, 2011 Total Points: 40 Homework 4 Computer Vision CS 4731, Fall 2011 Due Date: Nov. 15, 2011 Total Points: 40 Note 1: Both the analytical problems and the programming assignments are due at the beginning of class on Nov 15,

More information

CS770/870 Spring 2017 Color and Shading

CS770/870 Spring 2017 Color and Shading Preview CS770/870 Spring 2017 Color and Shading Related material Cunningham: Ch 5 Hill and Kelley: Ch. 8 Angel 5e: 6.1-6.8 Angel 6e: 5.1-5.5 Making the scene more realistic Color models representing the

More information

Color Image Segmentation

Color Image Segmentation Color Image Segmentation Yining Deng, B. S. Manjunath and Hyundoo Shin* Department of Electrical and Computer Engineering University of California, Santa Barbara, CA 93106-9560 *Samsung Electronics Inc.

More information

Three-Dimensional Computer Vision

Three-Dimensional Computer Vision \bshiaki Shirai Three-Dimensional Computer Vision With 313 Figures ' Springer-Verlag Berlin Heidelberg New York London Paris Tokyo Table of Contents 1 Introduction 1 1.1 Three-Dimensional Computer Vision

More information

03 Vector Graphics. Multimedia Systems. 2D and 3D Graphics, Transformations

03 Vector Graphics. Multimedia Systems. 2D and 3D Graphics, Transformations Multimedia Systems 03 Vector Graphics 2D and 3D Graphics, Transformations Imran Ihsan Assistant Professor, Department of Computer Science Air University, Islamabad, Pakistan www.imranihsan.com Lectures

More information

Available Online through

Available Online through Available Online through www.ijptonline.com ISSN: 0975-766X CODEN: IJPTFI Research Article ANALYSIS OF CT LIVER IMAGES FOR TUMOUR DIAGNOSIS BASED ON CLUSTERING TECHNIQUE AND TEXTURE FEATURES M.Krithika

More information

Mirrored LH Histograms for the Visualization of Material Boundaries

Mirrored LH Histograms for the Visualization of Material Boundaries Mirrored LH Histograms for the Visualization of Material Boundaries Petr Šereda 1, Anna Vilanova 1 and Frans A. Gerritsen 1,2 1 Department of Biomedical Engineering, Technische Universiteit Eindhoven,

More information

Micro-scale Stereo Photogrammetry of Skin Lesions for Depth and Colour Classification

Micro-scale Stereo Photogrammetry of Skin Lesions for Depth and Colour Classification Micro-scale Stereo Photogrammetry of Skin Lesions for Depth and Colour Classification Tim Lukins Institute of Perception, Action and Behaviour 1 Introduction The classification of melanoma has traditionally

More information

Keywords: Thresholding, Morphological operations, Image filtering, Adaptive histogram equalization, Ceramic tile.

Keywords: Thresholding, Morphological operations, Image filtering, Adaptive histogram equalization, Ceramic tile. Volume 3, Issue 7, July 2013 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Blobs and Cracks

More information

Essential Physics I. Lecture 13:

Essential Physics I. Lecture 13: Essential Physics I E I Lecture 13: 11-07-16 Reminders No lecture: Monday 18th July (holiday) Essay due: Monday 25th July, 4:30 pm 2 weeks!! Exam: Monday 1st August, 4:30 pm Announcements 250 word essay

More information

Scalar Data. CMPT 467/767 Visualization Torsten Möller. Weiskopf/Machiraju/Möller

Scalar Data. CMPT 467/767 Visualization Torsten Möller. Weiskopf/Machiraju/Möller Scalar Data CMPT 467/767 Visualization Torsten Möller Weiskopf/Machiraju/Möller Overview Basic strategies Function plots and height fields Isolines Color coding Volume visualization (overview) Classification

More information

Robust and Accurate Detection of Object Orientation and ID without Color Segmentation

Robust and Accurate Detection of Object Orientation and ID without Color Segmentation 0 Robust and Accurate Detection of Object Orientation and ID without Color Segmentation Hironobu Fujiyoshi, Tomoyuki Nagahashi and Shoichi Shimizu Chubu University Japan Open Access Database www.i-techonline.com

More information

COS Lecture 10 Autonomous Robot Navigation

COS Lecture 10 Autonomous Robot Navigation COS 495 - Lecture 10 Autonomous Robot Navigation Instructor: Chris Clark Semester: Fall 2011 1 Figures courtesy of Siegwart & Nourbakhsh Control Structure Prior Knowledge Operator Commands Localization

More information

3D Modeling of Objects Using Laser Scanning

3D Modeling of Objects Using Laser Scanning 1 3D Modeling of Objects Using Laser Scanning D. Jaya Deepu, LPU University, Punjab, India Email: Jaideepudadi@gmail.com Abstract: In the last few decades, constructing accurate three-dimensional models

More information

Classification and Detection in Images. D.A. Forsyth

Classification and Detection in Images. D.A. Forsyth Classification and Detection in Images D.A. Forsyth Classifying Images Motivating problems detecting explicit images classifying materials classifying scenes Strategy build appropriate image features train

More information

Investigation of Directional Filter on Kube-Pentland s 3D Surface Reflectance Model using Photometric Stereo

Investigation of Directional Filter on Kube-Pentland s 3D Surface Reflectance Model using Photometric Stereo Investigation of Directional Filter on Kube-Pentland s 3D Surface Reflectance Model using Photometric Stereo Jiahua Wu Silsoe Research Institute Wrest Park, Silsoe Beds, MK45 4HS United Kingdom jerry.wu@bbsrc.ac.uk

More information

A Qualitative Analysis of 3D Display Technology

A Qualitative Analysis of 3D Display Technology A Qualitative Analysis of 3D Display Technology Nicholas Blackhawk, Shane Nelson, and Mary Scaramuzza Computer Science St. Olaf College 1500 St. Olaf Ave Northfield, MN 55057 scaramum@stolaf.edu Abstract

More information

NAME :... Signature :... Desk no. :... Question Answer

NAME :... Signature :... Desk no. :... Question Answer Written test Tuesday 19th of December 2000. Aids allowed : All usual aids Weighting : All questions are equally weighted. NAME :................................................... Signature :...................................................

More information

Introduction to Computer Graphics with WebGL

Introduction to Computer Graphics with WebGL Introduction to Computer Graphics with WebGL Ed Angel Professor Emeritus of Computer Science Founding Director, Arts, Research, Technology and Science Laboratory University of New Mexico Image Formation

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion

More information

Multisensor Coordinate Measuring Machines ZEISS O-INSPECT

Multisensor Coordinate Measuring Machines ZEISS O-INSPECT Multisensor Coordinate Measuring Machines ZEISS O-INSPECT Having all the necessary options for reliable measurements. ZEISS O-INSPECT // RELIABILITY MADE BY ZEISS 2 The O-INSPECT multisensor measuring

More information

Texture. Texture Mapping. Texture Mapping. CS 475 / CS 675 Computer Graphics. Lecture 11 : Texture

Texture. Texture Mapping. Texture Mapping. CS 475 / CS 675 Computer Graphics. Lecture 11 : Texture Texture CS 475 / CS 675 Computer Graphics Add surface detail Paste a photograph over a surface to provide detail. Texture can change surface colour or modulate surface colour. Lecture 11 : Texture http://en.wikipedia.org/wiki/uv_mapping

More information

Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot

Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot Yoichi Nakaguro Sirindhorn International Institute of Technology, Thammasat University P.O. Box 22, Thammasat-Rangsit Post Office,

More information

CS 475 / CS 675 Computer Graphics. Lecture 11 : Texture

CS 475 / CS 675 Computer Graphics. Lecture 11 : Texture CS 475 / CS 675 Computer Graphics Lecture 11 : Texture Texture Add surface detail Paste a photograph over a surface to provide detail. Texture can change surface colour or modulate surface colour. http://en.wikipedia.org/wiki/uv_mapping

More information

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation ÖGAI Journal 24/1 11 Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation Michael Bleyer, Margrit Gelautz, Christoph Rhemann Vienna University of Technology

More information

Volume Illumination. Visualisation Lecture 11. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics

Volume Illumination. Visualisation Lecture 11. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics Volume Illumination Visualisation Lecture 11 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Taku Komura Volume Illumination & Vector Vis. 1 Previously : Volume Rendering

More information

Image Measuring Instrument

Image Measuring Instrument EASY QUICK ACCURATE SAVE Time & Cost Improved efficiency & accuracy L26 All new Image Measuring Instrument Top Series come with new innovative design in structural quality, functionality, and accuracy,

More information

Welcome to: Physics I. I m Dr Alex Pettitt, and I ll be your guide!

Welcome to: Physics I. I m Dr Alex Pettitt, and I ll be your guide! Welcome to: Physics I I m Dr Alex Pettitt, and I ll be your guide! Physics I: x Mirrors and lenses Lecture 13: 6-11-2018 Last lecture: Reflection & Refraction Reflection: Light ray hits surface Ray moves

More information

Introducing Robotics Vision System to a Manufacturing Robotics Course

Introducing Robotics Vision System to a Manufacturing Robotics Course Paper ID #16241 Introducing Robotics Vision System to a Manufacturing Robotics Course Dr. Yuqiu You, Ohio University c American Society for Engineering Education, 2016 Introducing Robotics Vision System

More information

Computer Graphics. Lecture 14 Bump-mapping, Global Illumination (1)

Computer Graphics. Lecture 14 Bump-mapping, Global Illumination (1) Computer Graphics Lecture 14 Bump-mapping, Global Illumination (1) Today - Bump mapping - Displacement mapping - Global Illumination Radiosity Bump Mapping - A method to increase the realism of 3D objects

More information

Alicona Specifications

Alicona Specifications Alicona Specifications The Alicona optical profilometer works using focus variation. Highest Specifications Table 1: Highest specification for optical profilometer parameters. Parameter Specification *Vertical

More information

Computer Graphics and Image Processing Ray Tracing I

Computer Graphics and Image Processing Ray Tracing I Computer Graphics and Image Processing Ray Tracing I Part 1 Lecture 9 1 Today s Outline Introduction to Ray Tracing Ray Casting Intersecting Rays with Primitives Intersecting Rays with Transformed Primitives

More information

Computer Simulation of Prostate Surgery

Computer Simulation of Prostate Surgery Computer Simulation of Prostate Surgery Miguel Angel Padilla, Felipe Altamirano, Fernando Arámbula and Jorge Marquez Image Analysis and Visualization Lab., Centro de Ciencias Aplicadas y Desarrollo Tecnológico

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

Schedule for Rest of Semester

Schedule for Rest of Semester Schedule for Rest of Semester Date Lecture Topic 11/20 24 Texture 11/27 25 Review of Statistics & Linear Algebra, Eigenvectors 11/29 26 Eigenvector expansions, Pattern Recognition 12/4 27 Cameras & calibration

More information

Towards Autonomous Vision Self-Calibration for Soccer Robots

Towards Autonomous Vision Self-Calibration for Soccer Robots Towards Autonomous Vision Self-Calibration for Soccer Robots Gerd Mayer Hans Utz Gerhard Kraetzschmar University of Ulm, James-Franck-Ring, 89069 Ulm, Germany Abstract The ability to autonomously adapt

More information

Large-Scale Traffic Sign Recognition based on Local Features and Color Segmentation

Large-Scale Traffic Sign Recognition based on Local Features and Color Segmentation Large-Scale Traffic Sign Recognition based on Local Features and Color Segmentation M. Blauth, E. Kraft, F. Hirschenberger, M. Böhm Fraunhofer Institute for Industrial Mathematics, Fraunhofer-Platz 1,

More information

Graphics and Interaction Rendering pipeline & object modelling

Graphics and Interaction Rendering pipeline & object modelling 433-324 Graphics and Interaction Rendering pipeline & object modelling Department of Computer Science and Software Engineering The Lecture outline Introduction to Modelling Polygonal geometry The rendering

More information

STEREO VISION AND LASER STRIPERS FOR THREE-DIMENSIONAL SURFACE MEASUREMENTS

STEREO VISION AND LASER STRIPERS FOR THREE-DIMENSIONAL SURFACE MEASUREMENTS XVI CONGRESO INTERNACIONAL DE INGENIERÍA GRÁFICA STEREO VISION AND LASER STRIPERS FOR THREE-DIMENSIONAL SURFACE MEASUREMENTS BARONE, Sandro; BRUNO, Andrea University of Pisa Dipartimento di Ingegneria

More information

Think about film & lighting

Think about film & lighting Quiz 3: Textures Camera and Lighting for Animation Hand back Quiz 2 Amy Gooch CS 395: Intro to Animation Summer 2004 Think about film & lighting Reality! Cameras & Viewpoint Lighting in animation Same

More information

Last update: May 4, Vision. CMSC 421: Chapter 24. CMSC 421: Chapter 24 1

Last update: May 4, Vision. CMSC 421: Chapter 24. CMSC 421: Chapter 24 1 Last update: May 4, 200 Vision CMSC 42: Chapter 24 CMSC 42: Chapter 24 Outline Perception generally Image formation Early vision 2D D Object recognition CMSC 42: Chapter 24 2 Perception generally Stimulus

More information