Handy Rangefinder for Active Robot Vision
Kazuyuki Hattori, Yukio Sato
Department of Electrical and Computer Engineering, Nagoya Institute of Technology, Showa, Nagoya 466, Japan

Abstract

A compact and high-speed rangefinding system applicable to active robot vision is described. The principle of depth measurement is based on the space encoding method. Spatially coded patterns of light are generated by a semiconductor laser: a slit-ray of the laser is scanned by a rotating polygonal mirror and switched according to temporal switching patterns. Each pattern of light is generated, and its reflected image is captured by a CCD camera, within one image frame with no lag time. A 512 x 256 range map is obtained continuously every 0.2 seconds.

I. Introduction

An active robot vision system collects many intensity images of a 3-D object, shifting its viewpoint and view direction in the working space. If the system is equipped with a rangefinder instead of an ordinary video camera, it can collect direct 3-D shape information about the object by taking 2 1/2-D range maps from various viewpoints. The information the rangefinder obtains includes the position and pose of the object in 3-D space as well as its shape. Such a rangefinder for active robot vision must be handy and instantaneous: it should be small and light enough to be installed at the tip of a manipulator, and the exposure time needed to get range maps should be short enough for the robot to work rapidly. Many rangefinding methods have been proposed, but active illumination with slit-ray scanning is one of the most practical ways to obtain the depths of 3-D objects. If, instead of ordinary video cameras, we use special sensing devices for rangefinding [1] that have parallel range sensors with independent memories, a range map can be obtained within 1 millisecond. However, so far the resolution is not very satisfactory (32 x 28 rangepic [2]), and such devices are not yet commercially available.
Using a video camera as the image input device is convenient even though it takes longer to capture images, because once a suitable thresholding technique is applied to the intensity images, reflections of the illumination can be extracted stably regardless of the reflection properties of the object surface. Furthermore, we can simultaneously analyze both an intensity image and a range map obtained by a single camera. The space encoding technique is one of the best methods for obtaining range maps when an ordinary CCD camera is used as the image input device. In the space encoding method, the measurement time mostly depends on the light pattern generation technique. There have been many attempts to generate patterns of light [3]-[7], but using liquid crystal shutters must be the most practical and successful means [3][6]. The stripes of liquid crystal shutters installed in a slide projector are switched independently and yield patterns of light rapidly. The problem with this type of light pattern generation lies in the heat the projector emits, as well as its size and weight. A semiconductor laser must be the best illumination source because it provides high optical power with little heat while being extremely small and light. Furthermore, the light can be switched almost instantaneously, as is well known from its use in optical communications. These advantages of the semiconductor laser lead us to realize a handy and instantaneous rangefinder. We have proposed a new type of rangefinding system called the Cubicscope [8]. The Cubicscope comprises a synchronously controlled CCD camera, semiconductor laser, and scanning mirror, and obtains range images based on the space encoding method. In this paper, we propose an improved practical prototype that incorporates a polygon scanning mirror and a specially coordinated hardware system.
These improvements enable the Cubicscope to capture good range images successively and quickly, at less than 0.2 seconds per frame.

II. Rangefinding by Space Encoding Method

In the space encoding method, the illumination space is divided into wedge-like regions, each equivalent to the plane of a slit-ray. Each region is encoded in binary by the illuminating light patterns (for more details, see reference [8]). For each pixel, the slit-ray is decoded from the plural coded images, and the depth of the corresponding point on the object surface is calculated by the triangulation formula. Performing this depth calculation for every pixel on the image plane completes the range image of the scene. As we see, only N coded image frames are necessary to identify 2^N - 1 slit-rays; this reduces the number of image

[1995 IEEE International Conference on Robotics and Automation]
frames to be taken and sharply reduces the exposure time. Clearly, the highest exposure speed is attained if the pattern of light is generated within the period of one frame, the camera captures the reflected image simultaneously, and no lag time is wasted between image frames. In our rangefinder, the beam of the semiconductor laser is expanded vertically to form a slit-ray, which is scanned horizontally by a scanning mirror, e.g., a galvano mirror [8]. By scanning the slit-ray and switching the semiconductor laser at high frequency (generally hundreds of kHz or more) during one video field, we can generate an arbitrary stripe pattern of light through temporal signal switching. The CCD camera grabs this stripe pattern in the first video field and outputs it in the next video field. With fixed spatial pattern shutters as in [6], it is not easy to adapt the stripe patterns to the object size, image range, or stripe pitches; computer-controlled temporal switching, in contrast, easily generates arbitrary spatial stripe patterns. Furthermore, focusing is improved by using the laser instead of a slide projector. Obviously, all the signals have to be controlled synchronously to generate the patterns: the laser switching signal, the angle control signal of the scanning mirror, and the vertical synchronization signal of the CCD camera. This scanning and switching technique potentially allows the most rapid generation of light patterns: a pattern of light is generated, and the reflected image is taken, within one image frame with no lag time. As mentioned above, we used a galvano mirror for scanning the laser slit-ray. A galvano mirror can be controlled simply by an analog signal, but due to the nonlinearity between the mirror angle and the signal, a precise calibration is needed to acquire accurate slit-ray plane equations.

Fig. 1 Light pattern generation with a polygon mirror (12-face polygon mirror and semiconductor laser)
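The decoding step can be sketched as follows, assuming Gray-coded patterns (the encoding the experiments in Section IV use) and already-binarized bit images; the function name and array layout are illustrative, not from the paper:

```python
import numpy as np

def decode_gray(bit_images):
    """Recover the slit-ray index at every pixel from N binarized
    pattern images (most significant bit plane first)."""
    code = np.zeros(bit_images[0].shape, dtype=np.int32)
    for img in bit_images:
        # stack the bit planes into one Gray-coded value per pixel
        code = (code << 1) | img.astype(np.int32)
    # Gray-to-binary conversion, vectorized over all pixels
    index = code.copy()
    shift = code >> 1
    while shift.any():
        index ^= shift
        shift >>= 1
    return index
```

With N = 8 bit planes this identifies up to 255 slit-rays from only eight coded frames, which is what keeps the exposure time short.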
A polygon mirror is preferable for the laser scan because of its stably controlled scanning speed and wide scan angle; the slit-ray projection angle can then be accurately derived over the entire scan range. Fig. 1 shows the construction of the pattern illumination with a polygon mirror. However, polygon mirror control is not stable unless the rotation speed is sufficiently high, e.g., 1500 rpm or more. Such a speed is too fast to yield a light pattern within 1/60 seconds; about 200 rpm would be suitable for an 18-face polygon mirror covering a 40-degree view range. Moreover, the faster the scan, the weaker the reflected intensities in the image. To solve this problem, we project an identical light pattern many times with successive mirror faces during one image field. This yields higher-intensity images than a single projection would. This pattern generation is achieved by strictly synchronizing the control signals. In our system, the polygonal mirror has 12 faces and is driven by a servo motor at 1800 rpm, so six mirror faces pass during one video field. The rotation speed and phase of the mirror must be controlled synchronously with the video signal. Fig. 2 shows the timing of the control signals used to generate the pattern light with this polygon mirror.

Fig. 2 Control signals
Fig. 3 Cubicscope
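The temporal switching that turns one Gray-code bit plane into a laser on/off schedule during a mirror-face scan can be sketched as below; the uniform division of the scan into equal stripe time slots is an assumption for illustration, not a description of the actual drive circuit:

```python
def switching_waveform(bit_plane, n_stripes):
    """Laser on/off state for each stripe time slot of one scan:
    stripe k is lit iff the given bit of the Gray code of k is 1
    (bit_plane 0 = most significant bit plane)."""
    n_bits = max(1, (n_stripes - 1).bit_length())
    states = []
    for k in range(n_stripes):
        gray = k ^ (k >> 1)  # binary index -> Gray code
        states.append((gray >> (n_bits - 1 - bit_plane)) & 1)
    return states

# Timing check with the paper's figures: a 12-face mirror at 1800 rpm
# passes 1800 / 60 * 12 = 360 faces per second, i.e. six faces per
# 1/60 sec video field, so the same waveform is replayed six times
# per field to raise the reflected intensity.
```

For example, with eight stripes the most significant plane lights the second half of the scan, exactly the coarsest stripe of a Gray-code pattern set.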
III. Configuration of the Rangefinder Cubicscope

Our rangefinder is shown in Fig. 3. It comprises a 2/3-inch CCD camera, a 30 mW semiconductor laser, and a polygon mirror driven by a servo motor. It is small (W: 180 mm, D: 85 mm, H: 55 mm) and light (950 g). The rangefinder is compact enough to be installed at the tip of a robot manipulator and controlled in 3-D space to quickly change its viewpoint or view direction. The CCD camera is equipped with an optical filter that cuts off all light except red wavelengths. Since the semiconductor laser used here is red (690 nm), the camera stably detects the reflections of the laser under ordinary lighting conditions.

Fig. 4 Block diagram of controller

Fig. 4 schematically represents the block diagram of the control circuit. The circuit consists of six major units: (a) a mirror driver that yields motor drive pulses, (b) a laser driver that supplies power according to a switch input, (c) a drive controller that receives the control timing and generates drive signals for the mirror and the laser, (d) a camera controller that generates all synchronization signals for the camera, (e) an image decoder that comprises an image A/D converter, a thresholding circuit, and frame buffers, and produces trigger pulses every 1/60 sec for the computer and the camera controller, and (f) a computer (NEC PC-9801) that supervises the signal timing of the whole system. Fig. 2 shows the timing chart of the signals of the respective units. As indicated there, while the camera shutter is open, the slit-ray illuminates the scene in the first video field (1/60 sec), and the camera transmits the video signal in the next video field (1/60 sec). In the image decoder, input images are transformed into slit images by a thresholding technique. The intensity levels in the binary image must reliably indicate whether the slit-ray is reflected onto the image plane.
It is easy to use a fixed threshold level for the whole image to binarize the input images, but the thresholded images are then vulnerable to the reflection properties of the object surface. In our rangefinder, the system is designed to adjust the threshold levels adaptively for every pixel. Three methods can be applied with our system: (a) the fixed threshold method, (b) the averaged threshold method, and (c) the complementary pattern method. The measurement times are (a) 8/60 sec, (b) 10/60 sec, and (c) 16/60 sec; these times depend on the number of pattern images required. For more details about these methods, see reference [8].

Fig. 5 Block diagram of image decoder
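A minimal sketch of the three binarization options, assuming 8-bit grayscale frames; the frame counts in the comments (8 coded patterns, plus lit/dark references for (b) and complement patterns for (c)) are an inference that would account for the 8/60, 10/60, and 16/60 sec measurement times, not figures stated by the paper:

```python
import numpy as np

def binarize_fixed(img, threshold=128):
    # (a) one global threshold for the whole image (8 frames -> 8/60 s)
    return img > threshold

def binarize_averaged(img, lit, dark):
    # (b) per-pixel threshold: mean of a fully lit and a fully dark
    # reference frame (8 + 2 frames -> 10/60 s)
    return img > (lit.astype(np.float32) + dark.astype(np.float32)) / 2

def binarize_complementary(img, complement):
    # (c) bit is 1 where the pattern image is brighter than the image
    # of its complementary pattern (8 + 8 frames -> 16/60 s)
    return img > complement
```

Method (c) needs no threshold constant at all, which is why it is the most stable against surface reflectance at the cost of the longest exposure.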
The complementary pattern method is the best for obtaining stable binary images, although it takes the longest exposure time; the averaged threshold method may be the practical choice for usual industrial use. To support all of these thresholding methods, the image decoder is equipped with an A/D converter, two 8-bit frame buffers, and a comparator, as shown in Fig. 5. The range value at each pixel must be calculated from the space-coded image. The range value is obtained by a triangulation formula. If the major axis of the laser slit-ray and the y axis of the camera's image plane are precisely parallel to each other, the range value z at each pixel is calculated by the following equation:

    z = f * l / (x + f * tan θ)    (1)

where x is the horizontal coordinate of the pixel in the image plane and θ is the light projection angle. The focal length f and the distance l between the camera and the mirror are optical parameters determined before measurement by a calibration technique. As equation (1) shows, the range calculation for each pixel is a floating-point operation, and it would require considerable computational power to produce a range map in real time by software. To resolve this problem, we developed special hardware that achieves real-time range map generation. Equation (1) has two discrete variables, x and θ; parameters f and l are calibrated constants. Depths in the range image are therefore obtained by looking up a table with the two inputs x and θ: angle θ is determined by the space code, and coordinate x is given by the horizontal position of each pixel in the frame memory. The range generator shown in Fig. 5 performs this look-up-table process. The sequence is as follows: the binary pattern images are copied to the space code buffer, and the space code of each pixel is read out one by one.
The x coordinate corresponding to each space code is generated by the X-coord generator. Both the space code and the x coordinate are input to the range table buffer, and the depth values are read out. This sequence proceeds in parallel with acquiring and binarizing the pattern images, so obtaining the range values takes 0.2 seconds or less.

IV. Experimental Results and Conclusions

Fig. 6 shows the pattern-projected images for wooden blocks; the Gray code is used for the pattern encoding. Fig. 7 shows the range map of the measured object, where brighter pixels are closer to the camera. The resolution of the range map is 512 x 256, and the relative accuracy of the depth measurement is better than 1%, which is comparable with other measurement strategies based on triangulation. Fig. 8 presents some further experimental results. Lens distortion must be considered when discussing measurement accuracy. The lens used here is a wide-angle one with at most 3% distortion near the edges of the image, which clearly degrades the measurement accuracy. We therefore corrected this lens distortion using distortion parameters supplied by the lens maker.

Fig. 6 Projected pattern images
Fig. 7 Obtained range map
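The hardware look-up-table process described above can be imitated in software as below; this is a sketch under assumed calibration values (f, l, and the per-code projection angles), not a description of the actual circuit:

```python
import numpy as np

def build_range_lut(f, l, thetas, width):
    """Precompute z = f*l / (x + f*tan(theta)) of equation (1) for
    every (space code, pixel column) pair, one row per slit-ray."""
    x = np.arange(width, dtype=np.float64)
    tan_t = np.tan(np.asarray(thetas, dtype=np.float64))[:, None]
    return (f * l) / (x[None, :] + f * tan_t)

def range_map(code_img, lut):
    # Depth per pixel: index the precomputed table by
    # (space code, column), as the range generator does.
    cols = np.arange(code_img.shape[1])
    return lut[code_img, cols]
```

Because every entry is precomputed, producing the range map reduces to one table access per pixel, which is what lets the hardware keep up with the 0.2-second frame cycle.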
(a) doll (left: intensity image, right: range image)
(b) pile of electric parts (left: intensity image, right: range image)
Fig. 8 Measurement results

If the objects in the scene or the rangefinder itself move, the measurement error becomes very large, because our rangefinder takes at least 8/60 seconds to obtain the complete set of pattern images necessary for one space-encoding measurement. This measurement time is short enough for static scenes or objects with very little movement. On the other hand, in scenes with movement or vibration of the objects or of the rangefinder itself, for example in mobile applications, the measurement error would become large. In many robotics applications, however, a short stop of the rangefinder is permitted. The measurement time of 8/60 sec is very short for such a stop, and our rangefinder can achieve accurate space encoding in the
very short stop period. In such applications, we believe that our rangefinder can serve as a vision sensor. As described in this paper, our rangefinder, the Cubicscope, is compact and fast enough to obtain range maps with the new image processing hardware. High-resolution range maps (512 x 256 rangepic) are captured within 0.2 seconds with 1% accuracy. It is compact enough for practical use and can be applied to active robot vision, in which the 3-D scene is recognized by taking 2 1/2-D range maps and 2-D intensity images from several viewpoints and directions.

Acknowledgements

The authors would like to thank Mr. Shibata and the engineers of CKD Co. Ltd. for their contribution to developing the special image processing hardware.

References

[1] K. Araki, Y. Sato, and S. Parthasarathy, "High Speed Rangefinder," SPIE, vol. 850, Optics, Illumination, and Image Sensing for Machine Vision, 1987
[2] T. Kanade, A. Gruss, and L. R. Carley, "A Very Fast VLSI Rangefinder," Proc. of 1991 IEEE Int. Conf. Robotics and Automation, 1991
[3] B. F. Alexander and K. C. Ng, "3-D Shape Measurement by Active Triangulation Using an Array of Coded Light Stripes," SPIE, vol. 850, Optics, Illumination, and Image Sensing for Machine Vision II, 1987
[4] J. P. Rosenfeld and C. J. Tsikos, "High-speed encoding projector for 3D imaging," SPIE, vol. 728, Optics, Illumination, and Image Sensing for Machine Vision, 1986
[5] M. Matsuki and T. Ueda, "A Real-Time Sectional Image Measuring System Using Time Sequentially Coded Grating Method," IEEE Trans. PAMI, vol. 11, no. 11, 1989
[6] K. Sato and S. Inokuchi, "Range-imaging system utilizing nematic liquid crystal mask," Proc. 1st IEEE Int. Conf. on Computer Vision, 1987
[7] J. L. Posdamer and M. D. Altschuler, "Surface Measurement by Space-encoded Projected Beam Systems," Computer Graphics and Image Processing, vol. 18, 1982
[8] Y. Sato, K. Hattori, and M. Otsuki, "Real-Time Handy Rangefinder Cubicscope," Proc. of 3rd Int. Conf. on Automation, Robotics, and Computer Vision (ICARCV '94)
More informationOutdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera
Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Tomokazu Sato, Masayuki Kanbara and Naokazu Yokoya Graduate School of Information Science, Nara Institute
More information3D Scanning. Qixing Huang Feb. 9 th Slide Credit: Yasutaka Furukawa
3D Scanning Qixing Huang Feb. 9 th 2017 Slide Credit: Yasutaka Furukawa Geometry Reconstruction Pipeline This Lecture Depth Sensing ICP for Pair-wise Alignment Next Lecture Global Alignment Pairwise Multiple
More informationSurround Structured Lighting for Full Object Scanning
Surround Structured Lighting for Full Object Scanning Douglas Lanman, Daniel Crispell, and Gabriel Taubin Brown University, Dept. of Engineering August 21, 2007 1 Outline Introduction and Related Work
More informationAn Auto-Stereoscopic VRML Viewer for 3D Data on the World Wide Web
An Auto-Stereoscopic VM Viewer for 3D Data on the World Wide Web Shinji Uchiyama, Hiroyuki Yamamoto, and Hideyuki Tamura Mixed eality Systems aboratory Inc. 6-145 Hanasakicho, Nishi-ku, Yokohama 220-0022,
More informationV530-R150E-3, EP-3. Intelligent Light Source and a Twocamera. Respond to a Wide Variety of Applications. 2-Dimensional Code Reader (Fixed Type)
2-Dimensional Code Reader (Fixed Type) Intelligent Light Source and a Twocamera Unit Respond to a Wide Variety of Applications Features Intelligent Light Source Versatile lighting control and a dome shape
More information3D-OBJECT DETECTION METHOD BASED ON THE STEREO IMAGE TRANSFORMATION TO THE COMMON OBSERVATION POINT
3D-OBJECT DETECTION METHOD BASED ON THE STEREO IMAGE TRANSFORMATION TO THE COMMON OBSERVATION POINT V. M. Lisitsyn *, S. V. Tikhonova ** State Research Institute of Aviation Systems, Moscow, Russia * lvm@gosniias.msk.ru
More informationDepartment of Photonics, NCTU, Hsinchu 300, Taiwan. Applied Electromagnetic Res. Inst., NICT, Koganei, Tokyo, Japan
A Calibrating Method for Projected-Type Auto-Stereoscopic 3D Display System with DDHOE Ping-Yen Chou 1, Ryutaro Oi 2, Koki Wakunami 2, Kenji Yamamoto 2, Yasuyuki Ichihashi 2, Makoto Okui 2, Jackin Boaz
More informationNew Approach in Non- Contact 3D Free Form Scanning
New Approach in Non- Contact 3D Free Form Scanning Contents Abstract Industry Trends The solution A smart laser scanning system Implementation of the laser scanning probe in parts inspection Conclusion
More informationLaser Eye a new 3D sensor for active vision
Laser Eye a new 3D sensor for active vision Piotr Jasiobedzki1, Michael Jenkin2, Evangelos Milios2' Brian Down1, John Tsotsos1, Todd Campbell3 1 Dept. of Computer Science, University of Toronto Toronto,
More information3D Scanning Method for Fast Motion using Single Grid Pattern with Coarse-to-fine Technique
3D Scanning Method for Fast Motion using Single Grid Pattern with Coarse-to-fine Technique Ryo Furukawa Faculty of information sciences, Hiroshima City University, Japan ryo-f@cs.hiroshima-cu.ac.jp Hiroshi
More informationDevelopment of Vision System on Humanoid Robot HRP-2
Development of Vision System on Humanoid Robot HRP-2 Yutaro Fukase Institute of Technology, Shimizu Corporation, Japan fukase@shimz.co.jp Junichiro Maeda Institute of Technology, Shimizu Corporation, Japan
More informationProc. Int. Symp. Robotics, Mechatronics and Manufacturing Systems 92 pp , Kobe, Japan, September 1992
Proc. Int. Symp. Robotics, Mechatronics and Manufacturing Systems 92 pp.957-962, Kobe, Japan, September 1992 Tracking a Moving Object by an Active Vision System: PANTHER-VZ Jun Miura, Hideharu Kawarabayashi,
More informationAUTOMATED 4 AXIS ADAYfIVE SCANNING WITH THE DIGIBOTICS LASER DIGITIZER
AUTOMATED 4 AXIS ADAYfIVE SCANNING WITH THE DIGIBOTICS LASER DIGITIZER INTRODUCTION The DIGIBOT 3D Laser Digitizer is a high performance 3D input device which combines laser ranging technology, personal
More informationCS5670: Computer Vision
CS5670: Computer Vision Noah Snavely, Zhengqi Li Stereo Single image stereogram, by Niklas Een Mark Twain at Pool Table", no date, UCR Museum of Photography Stereo Given two images from different viewpoints
More informationLumaxis, Sunset Hills Rd., Ste. 106, Reston, VA 20190
White Paper High Performance Projection Engines for 3D Metrology Systems www.lumaxis.net Lumaxis, 11495 Sunset Hills Rd., Ste. 106, Reston, VA 20190 Introduction 3D optical metrology using structured light
More informationRange Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation
Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical
More informationAdvanced Vision Guided Robotics. David Bruce Engineering Manager FANUC America Corporation
Advanced Vision Guided Robotics David Bruce Engineering Manager FANUC America Corporation Traditional Vision vs. Vision based Robot Guidance Traditional Machine Vision Determine if a product passes or
More informationTask analysis based on observing hands and objects by vision
Task analysis based on observing hands and objects by vision Yoshihiro SATO Keni Bernardin Hiroshi KIMURA Katsushi IKEUCHI Univ. of Electro-Communications Univ. of Karlsruhe Univ. of Tokyo Abstract In
More informationMetrology and Sensing
Metrology and Sensing Lecture 4: Fringe projection 2016-11-08 Herbert Gross Winter term 2016 www.iap.uni-jena.de 2 Preliminary Schedule No Date Subject Detailed Content 1 18.10. Introduction Introduction,
More informationMultiple View Geometry
Multiple View Geometry Martin Quinn with a lot of slides stolen from Steve Seitz and Jianbo Shi 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 Our Goal The Plenoptic Function P(θ,φ,λ,t,V
More informationSensing Deforming and Moving Objects with Commercial Off the Shelf Hardware
Sensing Deforming and Moving Objects with Commercial Off the Shelf Hardware This work supported by: Philip Fong Florian Buron Stanford University Motivational Applications Human tissue modeling for surgical
More informationThe goal of this lab is to give you a chance to align and use a Pockel s Cell.
880 Quantum Electronics Lab Pockel s Cell Alignment And Use The goal of this lab is to give you a chance to align and use a Pockel s Cell. You may not take this lab unless you have read the laser safety
More informationMeasurement of 3D Foot Shape Deformation in Motion
Measurement of 3D Foot Shape Deformation in Motion Makoto Kimura Masaaki Mochimaru Takeo Kanade Digital Human Research Center National Institute of Advanced Industrial Science and Technology, Japan The
More information3D graphics, raster and colors CS312 Fall 2010
Computer Graphics 3D graphics, raster and colors CS312 Fall 2010 Shift in CG Application Markets 1989-2000 2000 1989 3D Graphics Object description 3D graphics model Visualization 2D projection that simulates
More informationDense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera
Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera Tomokazu Satoy, Masayuki Kanbaray, Naokazu Yokoyay and Haruo Takemuraz ygraduate School of Information
More informationMulti-view reconstruction for projector camera systems based on bundle adjustment
Multi-view reconstruction for projector camera systems based on bundle adjustment Ryo Furuakwa, Faculty of Information Sciences, Hiroshima City Univ., Japan, ryo-f@hiroshima-cu.ac.jp Kenji Inose, Hiroshi
More informationStereo vision. Many slides adapted from Steve Seitz
Stereo vision Many slides adapted from Steve Seitz What is stereo vision? Generic problem formulation: given several images of the same object or scene, compute a representation of its 3D shape What is
More informationHigh spatial resolution measurement of volume holographic gratings
High spatial resolution measurement of volume holographic gratings Gregory J. Steckman, Frank Havermeyer Ondax, Inc., 8 E. Duarte Rd., Monrovia, CA, USA 9116 ABSTRACT The conventional approach for measuring
More informationDr. Larry J. Paxton Johns Hopkins University Applied Physics Laboratory Laurel, MD (301) (301) fax
Dr. Larry J. Paxton Johns Hopkins University Applied Physics Laboratory Laurel, MD 20723 (301) 953-6871 (301) 953-6670 fax Understand the instrument. Be able to convert measured counts/pixel on-orbit into
More informationRobot vision review. Martin Jagersand
Robot vision review Martin Jagersand What is Computer Vision? Computer Graphics Three Related fields Image Processing: Changes 2D images into other 2D images Computer Graphics: Takes 3D models, renders
More informationReprint. from the Journal. of the SID
A 23-in. full-panel-resolution autostereoscopic LCD with a novel directional backlight system Akinori Hayashi (SID Member) Tomohiro Kometani Akira Sakai (SID Member) Hiroshi Ito Abstract An autostereoscopic
More informationGraphics Systems and Models
Graphics Systems and Models 2 nd Week, 2007 Sun-Jeong Kim Five major elements Input device Processor Memory Frame buffer Output device Graphics System A Graphics System 2 Input Devices Most graphics systems
More informationDevelopment of real-time motion capture system for 3D on-line games linked with virtual character
Development of real-time motion capture system for 3D on-line games linked with virtual character Jong Hyeong Kim *a, Young Kee Ryu b, Hyung Suck Cho c a Seoul National Univ. of Tech., 172 Gongneung-dong,
More informationDynamic three-dimensional sensing for specular surface with monoscopic fringe reflectometry
Dynamic three-dimensional sensing for specular surface with monoscopic fringe reflectometry Lei Huang,* Chi Seng Ng, and Anand Krishna Asundi School of Mechanical and Aerospace Engineering, Nanyang Technological
More informationVirtual Photometric Environment using Projector
Virtual Photometric Environment using Projector Yasuhiro Mukaigawa 1 Masashi Nishiyama 2 Takeshi Shakunaga Okayama University, Tsushima naka 3-1-1, Okayama, 700-8530, Japan mukaigaw@ieee.org Abstract In
More information3D Reconstruction of a Human Body from Multiple Viewpoints
3D Reconstruction of a Human Body from Multiple Viewpoints Koichiro Yamauchi, Hideto Kameshima, Hideo Saito, and Yukio Sato Graduate School of Science and Technology, Keio University Yokohama 223-8522,
More informationPhotoshop PSD Export. Basic Tab. Click here to expand Table of Contents... Basic Tab Additional Shading Tab Material Tab Motion Tab Geometry Tab
Photoshop PSD Export Click here to expand Table of Contents... Basic Tab Additional Shading Tab Material Tab Motion Tab Geometry Tab The Photoshop PSD Export image filter is an image saver masquerading
More informationROBOT SENSORS. 1. Proprioceptors
ROBOT SENSORS Since the action capability is physically interacting with the environment, two types of sensors have to be used in any robotic system: - proprioceptors for the measurement of the robot s
More informationLecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19
Lecture 17: Recursive Ray Tracing Where is the way where light dwelleth? Job 38:19 1. Raster Graphics Typical graphics terminals today are raster displays. A raster display renders a picture scan line
More informationTime-to-Contact from Image Intensity
Time-to-Contact from Image Intensity Yukitoshi Watanabe Fumihiko Sakaue Jun Sato Nagoya Institute of Technology Gokiso, Showa, Nagoya, 466-8555, Japan {yukitoshi@cv.,sakaue@,junsato@}nitech.ac.jp Abstract
More informationAUTOMATIC DRAWING FOR TRAFFIC MARKING WITH MMS LIDAR INTENSITY
AUTOMATIC DRAWING FOR TRAFFIC MARKING WITH MMS LIDAR INTENSITY G. Takahashi a, H. Takeda a, Y. Shimano a a Spatial Information Division, Kokusai Kogyo Co., Ltd., Tokyo, Japan - (genki_takahashi, hiroshi1_takeda,
More informationMEASUREMENT OF WIGNER DISTRIBUTION FUNCTION FOR BEAM CHARACTERIZATION OF FELs*
MEASUREMENT OF WIGNER DISTRIBUTION FUNCTION FOR BEAM CHARACTERIZATION OF FELs* T. Mey #, B. Schäfer and K. Mann, Laser-Laboratorium e.v., Göttingen, Germany B. Keitel, S. Kreis, M. Kuhlmann, E. Plönjes
More information