Color and Range Sensing for Hypermedia and Interactivity in Museums

R. Baribeau and J.M. Taylor
Analytical Research Services, Canadian Conservation Institute
Department of Communications, Ottawa, Canada K1A 0C8

M. Rioux and G. Godin
Autonomous Systems Laboratory, Institute for Information Technology
National Research Council, Ottawa, Canada K1A 0R6

Abstract

A description is given of a laser scanning system for the acquisition of three-dimensional surface data from objects as well as their color properties. The digitized data can be stored in compact data files that can serve for the reconstruction of any view of the object or scene on currently available graphics workstations. Three-dimensional hard reproductions are also possible from the same data. Examples are given of applications in the museum world.

Key words: Computer color vision, computer imaging, range measurements, scientific examination.

Introduction

During the past few years, conservation scientists at the Canadian Conservation Institute have been working with engineers of the National Research Council Canada (NRCC) on the development of a system that would capture the 3-D shape and color appearance of museum artifacts. A prototype of the system now exists and has already been used for specific tasks in art conservation. The results obtained so far prove that the system can serve as a powerful input device in multimedia and telematic technologies (Figure 1), allowing not only the passive 2-D color representation of artifacts, but also complete 3-D color reconstruction, with full depth realism and the ability for the user to select any point of view and illumination conditions.

This paper describes the system currently being developed and some of the results obtained so far.

Background: 2-D and 3-D Imaging

A 2-D camera essentially captures brightness measurements for each visible point in a scene. In early systems that made use of film, the mapping of points in the scene to points of the picture was continuous and so involved an infinity of points. With the advent of computer image processing, images had to be decomposed into a mosaic of small rectangular patches called "pixels" (picture elements). A 2-D digitized image is thus a mapping of brightness values to pixels. In black-and-white imaging, one grey value is assigned to each pixel, while in color imaging, values of brightness in the blue, red and green are assigned to each pixel.

2-D imaging is so widely used mainly because a 2-D image is easily interpreted by humans through visual examination. One weakness of a 2-D image, however, is that it does not accurately define the object it represents and provides no precise dimensional or colorimetric information. For example, objects of different 3-D shapes can lead to the same image by choosing a proper perspective. Also, the same object can produce different brightness values when the lighting conditions are changed. Finally, a 2-D image provides only a static view of the scene, as it stood during acquisition; if N points of view of a scene are needed, then N images must be taken.

A 3-D camera provides a means to acquire the (x,y,z) coordinates of the surface of an object or scene. If an xyz reference frame is defined with the z axis pointing toward the camera, then one can think of a 3-D image, also called a "range image", as a mapping of z values to surface elements ("surfels") of lateral coordinates (x,y). The advantage of the 3-D approach is that the data provided are directly connected to the object and bear some form of device independence. For example, the diameter of a vase can be determined from its range image, irrespective of the attitude and distance of this object with respect to the recording device. For this reason, 3-D systems are finding more and more applications in the areas of industrial inspection, robot vision and computer-aided manufacturing.

Many tools have been designed for the visualisation of 3-D data by humans. Low-cost software for "wire mesh" representation is available on most low-end computers. At the other extreme, graphics work stations are specially designed to synthesize at high speed stereo pairs of any view of a scene, to be observed simultaneously for full depth and color realism.

Many 3-D cameras are currently available (see Besl for a survey). The performance of a system can be measured in terms of the accuracy of the data, the sampling density achieved, and the time for acquisition. The one presented here rates quite high according to these criteria and also offers the possibility of simultaneous color measurements.
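As a concrete illustration of the range-image idea (not taken from the original paper), the short Python/NumPy sketch below converts a grid of z values with assumed equal spacing in x and y into explicit (x,y,z) surfel coordinates, from which a simple dimensional query can be made; the array size, grid spacing and placeholder depth data are all illustrative assumptions.

    import numpy as np

    # Hypothetical 512 x 512 range image: one depth value (z) per surfel,
    # sampled on a rectangular grid with equal spacing in x and y.
    z = np.random.rand(512, 512).astype(np.float32)   # placeholder depth data
    dx = dy = 0.5                                      # assumed grid spacing in mm

    # Recover the lateral coordinates of every surfel from its grid address.
    rows, cols = np.indices(z.shape)
    x = cols * dx
    y = rows * dy
    points = np.stack([x, y, z], axis=-1)              # (512, 512, 3) array of surfels

    # A simple dimensional query: the x extent of one scanned strip,
    # analogous to measuring the diameter of a vase from its range image.
    slice_row = points[256]
    extent_x = slice_row[:, 0].max() - slice_row[:, 0].min()
    print(f"x extent of the scanned strip: {extent_x:.1f} mm")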

Range Measurements

Basically, the NRCC laser sensor works by projecting a laser beam on the object and by measuring the angle of the reflected light, as detected by a solid-state device located near the laser source. The known angle of the returning light and the known projection direction serve to calculate the (x,z) position of the laser spot on the object, with an accuracy as good as 10 micrometers. A mirror scanning arrangement (Figure 2) sweeps the laser spot in the transverse x direction while its position is being sampled at a very high rate.

Two acquisition modes are possible: cartesian and cylindrical. In cartesian mode, the camera is mounted on a translation stage that moves continuously in the y direction, thus allowing the scene to be raster scanned. In cylindrical mode, the camera is stationary, but the object lies on a rotation stage that rotates 360 degrees during measurement, allowing panoramic acquisition of objects (Figure 3). Whatever mode is selected, the time to acquire a 512 by 512 high-definition range image is less than one minute.

Color Reflectance Measurements

The color appearance of an object depends on its spectral reflectance properties as well as on the spectral content of the source that provides illumination. For documentation, analysis and multimedia applications, it is important that the spectral reflectance alone be isolated, since it is an intrinsic object property. In the system described here, the laser beam that serves for range measurements originates from a HeCd laser designed to emit simultaneously at three wavelengths in the red, green and blue parts of the spectrum. The solid-state position sensor, inside the camera, also measures simultaneously the amount of light returned at each of the three wavelengths. Spectral reflectance values can be assigned to surface elements by comparing the brightness signals from these surfels to those from a white standard kept near the border of the field of view during acquisition. The calculation involves a compensation for the orientation effect: surface elements have the tendency to appear darker as the angle between the laser beam and the normal to the surface increases. This phenomenon, called "shading", follows a known mathematical law and can be compensated using the range data provided by the very same sensor.
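The paper does not give the equations of the NRCC synchronized-scanning geometry shown in Figure 2; as a hedged illustration of the general principle described under Range Measurements, the sketch below locates a single laser spot by classical active triangulation from a known projection angle, a known detection angle and an assumed baseline between source and detector. The function name and all numerical values are placeholders, not part of the original system.

    import math

    def triangulate(baseline_mm, alpha_deg, beta_deg):
        """Locate a laser spot by active triangulation.

        baseline_mm : assumed distance between projector and detector along x
        alpha_deg   : projection angle of the laser beam, measured from the baseline
        beta_deg    : detection angle of the returned light, measured from the baseline
        Returns the (x, z) coordinates of the spot in the scanning plane.
        """
        ta = math.tan(math.radians(alpha_deg))
        tb = math.tan(math.radians(beta_deg))
        z = baseline_mm * ta * tb / (ta + tb)   # depth of the spot
        x = z / ta                              # lateral position of the spot
        return x, z

    # Illustrative values only: a 100 mm baseline and two measured angles.
    print(triangulate(100.0, alpha_deg=70.0, beta_deg=60.0))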

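The "known mathematical law" used for the orientation compensation described under Color Reflectance Measurements is not spelled out in the paper; the sketch below assumes a simple Lambertian cosine model, with surface normals estimated from the range image by finite differences, to show how reflectance values might be recovered from the raw brightness and the white-standard signal. Function and parameter names are hypothetical.

    import numpy as np

    def reflectance_from_brightness(brightness, white_brightness, z, dx, dy,
                                    beam_dir=(0.0, 0.0, 1.0)):
        """Estimate spectral reflectance from raw brightness, assuming Lambertian shading.

        brightness       : raw R, G or B signal per surfel (2-D array)
        white_brightness : signal measured on the white reference standard
        z                : range image used to estimate surface normals
        dx, dy           : grid spacing of the range image
        beam_dir         : assumed direction of the laser beam
        """
        # Surface normals from the gradient of the range image.
        dzdy, dzdx = np.gradient(z, dy, dx)
        normals = np.dstack([-dzdx, -dzdy, np.ones_like(z)])
        normals /= np.linalg.norm(normals, axis=2, keepdims=True)

        # Cosine of the angle between the laser beam and the surface normal.
        beam = np.asarray(beam_dir, dtype=float)
        beam /= np.linalg.norm(beam)
        cos_theta = np.clip(normals @ beam, 1e-3, 1.0)   # avoid blow-up at grazing angles

        # Normalize against the white standard and undo the shading factor.
        return (brightness / white_brightness) / cos_theta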
Image Format

The laser sensor captures a collection of (x,y,z) and (R,G,B) values associated with surface elements. This information can be kept in four compact computer files. The first file contains an ordered list of the depth values (z). The other three files contain the corresponding R, G and B values. Because these data refer to the surface as sampled on a rectangular grid with equal spacing in the x and y directions, the (x,y) coordinates of a given surfel are easily recovered from the address of the data in the file. With the current system, each z value is digitized into 12 bits and each R, G or B value is digitized into 8 bits. For a typical 512 by 512 image, 394 kilobytes are needed for the z file, and 263 kilobytes are required for each of the three color channels. The z file thus occupies only 33% of the total data storage. This is a very small price to pay considering that the z file allows the calculation of any view of the scene from any point of view and for any lighting condition. Such small volumes of data can be easily transferred through existing computer communication links. Cataloguing of huge collections of objects is also possible since, after data compression, about 2000 different objects can be recorded on a standard CD-ROM.

Data Display

With the present system, visualisation of the data is done mostly on an IRIS graphics work station, from Silicon Graphics Inc., equipped with a stereo display option that allows depth rendering to users wearing 3-D viewing glasses. Software has been designed to ease data visualisation, and the user is given a menu of options. These can be grouped into three main categories:

1. Geometric display. With this option, only geometric information is displayed. Scenes can be synthesized from the z file alone into point, wire frame, and shaded forms.

2. Intensity mapping. This option is similar to the first one except that the sensed raw color brightness values are mapped onto the corresponding screen pixels.

3. Color shading. Here, the brightness of each screen pixel is calculated using the recorded color reflectance data and taking into account the angles chosen by the user for the observation and source directions (a simple sketch of this calculation is given at the end of this section).

For each category, the user can perform rotations (Figure 4) and translations of the object represented, as well as zooms. All of these are controlled through a "dial-and-button" box, and the computer reacts in real time.

A recent investigation of two paintings by Canadian artist Tom Thomson provides a good example of the power of the 3-D approach. In this particular case, the task was to compare estate stamp impressions on each painting with the metal stamp itself. Three-dimensional data were recorded by the laser sensor from the metal stamp and impressions. The synthesized images of the stamp and impressions were carefully oriented and displayed together on the computer monitor, and the superimposed data could be examined for any discrepancies.
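The shading model actually used on the IRIS work station is not described in the paper; as a minimal sketch of the "color shading" display mode mentioned above, the function below recomputes pixel brightness from the stored reflectance, range-derived normals and a user-chosen light direction, again assuming a Lambertian model. All names are hypothetical.

    import numpy as np

    def relight(reflectance_rgb, normals, light_dir):
        """Render a color-shaded view from stored reflectance and a chosen light direction.

        reflectance_rgb : (H, W, 3) array of recorded R, G, B reflectance values
        normals         : (H, W, 3) unit surface normals derived from the range data
        light_dir       : user-selected illumination direction, e.g. (1, 1, 1)
        """
        light = np.asarray(light_dir, dtype=float)
        light /= np.linalg.norm(light)
        shading = np.clip(normals @ light, 0.0, 1.0)          # Lambertian term, clamped
        return reflectance_rgb * shading[..., np.newaxis]     # brightness of each pixel

    # Example with an assumed light direction from the upper right front:
    # image = relight(reflectance_rgb, normals, light_dir=(1.0, 1.0, 1.0))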

Image Reproduction

Any synthesized view of an object, as displayed on the graphics work station, can be recorded on a 35 mm color film recorder. In the present configuration, this recorder is controlled by a personal computer, to which any image can be transferred through the local computer network. Commercial image processing software also runs on this computer and allows further image enhancement prior to recording.

Solid reconstruction of 3-D objects can also be performed using the range data. So far, three solid reproduction techniques have been tried: numerically controlled machine milling, stereolithography and laser sintering. Accurate reproductions have been obtained in the case of a stone mask (Figure 5) collected at the Tsimshian village of Kitkatla (property of the Canadian Museum of Civilisation), and a lead plaque that marked the burial place of the Jesuit martyr Jean de Brébeuf. With the rapid evolution of color reproduction techniques such as laser color printing and robot painting, it should be possible in the near future to obtain accurate color and shape reproduction of objects at any scale.

Museum Applications

There are a variety of museum applications for the system just described. These include:

- The basic measurement of three-dimensional color artifacts and specimens for documentation and display purposes.

- Sharing of information by telecommunications.

- Comparison of the color and shape of an object at different periods of time, for example before and after conservation treatment, before and after a loan for exhibition, or to investigate weathering, deterioration, or changes over time.

- Replication of objects at any scale, reconstruction of missing parts, fabrication of supports.

- Research related to science, artists' techniques and art history.

Conclusion

A system for the acquisition of the shape and color of objects has been described. This technology offers some advantages compared to the more familiar 2-D imaging approach by giving the user the same flexibility with the data file as with the real objects, scenes or subjects, and even more. Applications of the technology have already been carried out in the museum world.

Literature

R. Baribeau, M. Rioux and G. Godin, "Color reflectance modeling using a polychromatic laser range sensor," to be published in IEEE Trans. Pattern Anal. Mach. Intell.

P. Besl, "Active, optical range imaging sensors," Machine Vision and Applications 1, pp. 127-152, 1988.

P. Boulanger, M. Rioux, J. Taylor and F. Livingstone, "Automatic replication and recording of museum artifacts," Proc. 12th Intern. Symp. on Conserv. and Restor. of Cultural Property, pp. 131-147, Tokyo, 1988.

M. Rioux, "Computer acquisition and display of 3-D objects using a synchronized laser scanner," Proc. Internat. Conf. on Three Dimensional Media Technology, pp. 253-268, 1989.

Figure 1. Illustration of the system's architecture.

Figure 2. Optical geometry of the sensor head, providing synchronization of both the projection and the detection of the laser spot.

Figure 3. Acquisition of a stone mask with the laser sensor prototype.

Figure 4. Reconstructed views of the stone mask, computed at 45-degree intervals.

Figure 5. Hard copy of a stone mask, obtained with NC machine milling, compared with the original object.