Geometric Correction of Projection Using Structured Light and Sensors Embedded Board


2014 by IFSA Publishing, S. L.

Geometric Correction of Projection Using Structured Light and Sensors Embedded Board

1 LAN Jianliang, 2 DING Youdong, 3 PENG Taile
1 School of Computer Engineering & Science, Shanghai University, Shanghai, China
2 School of Film and TV Arts & Technology, Shanghai University, Shanghai, China
3 School of Communication & Information Engineering, Shanghai University, Shanghai, China
1 lanjianliang@126.com

Received: 3 June 2014 / Accepted: 27 June 2014 / Published: 30 June 2014

Abstract: Computer-vision-based projector-camera calibration approaches usually involve complex geometric transformations and tedious operations, and are mostly inappropriate for scenarios with poor illumination or an interfering background. An automatic geometric correction method for projectors using a projection board embedded with optic sensors is proposed, aiming at simplifying the calibration operations and improving the robustness of geometric correction. By capturing the Gray-coded structured light frames, the correspondence between the points of interest on the projection board and the pixels on the image plane can be obtained; furthermore, homography matrices are exploited to transform the images to be projected. A special one-to-many hardware design paradigm is adopted so that multiple projection boards are able to connect to one renderer node simultaneously. To further enhance the projection precision for each projection board, a self-adaptive illumination intensity threshold determination method is used. Copyright 2014 IFSA Publishing, S. L.

Keywords: Geometric correction, Projector calibration, Structured light, Optic sensor, Embedded devices.

1. Introduction

Compared with CRT and LCD display technology, modern projectors are capable of projecting images onto various surfaces, playing an irreplaceable role in many display applications.
Due to the rapid development of projector manufacturing technology, cheap and portable projectors are quite common nowadays; what's more, projectors are becoming more and more important in fields like interactive display, intelligent spaces and ubiquitous computing. In contrast with camera calibration, projector calibration has some special aspects. First, projectors can only project 2D images into 3D space, which is an irreversible process; that is to say, a projector cannot be used to take photos like a camera, so the way to obtain the correspondences between 3D points and 2D image pixels differs from that in camera calibration. Second, most projector calibration approaches use an inverse-camera model to simplify the calibration procedure [1, 8, 9], assuming the projector's principal point is close to the image center; however, some projectors use an off-axis projection, so the inverse-camera model may not be suitable for some projectors or some special situations [2]. Using a camera to help calibrate the projector is quite common in vision-based approaches. The simplest projector-camera (P-C) system consists of a projector, a camera and a target projection surface. In a P-C system, the camera is calibrated first, and then used to capture the 3D points on the surface illuminated by the projector. After that, the correspondence between 3D points and 2D image pixels can be obtained to reconstruct the 3D-2D geometric relationship. The required manipulations are tedious even for a skilled person, not to mention an untrained end user. Structured light based approaches have been proposed in recent years to simplify the procedure by simultaneously calibrating the projector and camera in a P-C system [3-5]. Audet and Okutomi [6] propose a method that uses fiducial markers to derive pre-warped patterns, and thereby complete the calibration procedure within several minutes. The problem is that vision-based algorithms can be severely affected by factors such as poor illumination or a cluttered background, so these algorithms have limitations in practical use. In applications like interactive projection, complex-surface projection, advertising exhibition and so on, what interests us is how to make the projected images fit 3D surfaces, and how to make the whole procedure fast, robust and low-cost. In this sense, geometric correction is one of the most important aspects to be considered in projector calibration. Lee and Dietz [7] exploit light sensors to capture the structured light, acquire the sensors' locations relative to the projection plane, and then pre-warp the image to accurately fit the target projection surfaces. This method does not require full geometric calibration and makes the whole procedure more robust. In this paper, we focus on finding a fast, robust, automatic and low-cost geometric correction method for projectors. We leverage structured light and projection boards embedded with micro-control units and sensors. Like Lee and Dietz's method, no additional camera is needed in our implementation; what's more, multiple target projection boards can be calibrated simultaneously within a few seconds, and little manual intervention is involved in the whole process.
The structured light patterns' illumination intensity threshold for each sensor is dynamically determined; therefore the position calculation precision is enhanced.

2. System Architecture

Our implementation is composed of a projector, a renderer and projection boards embedded with sensors (see Fig. 1). The projector is an ordinary un-calibrated commercial projector. The projection board serves two roles: the auxiliary device for projector calibration, and the target surface onto which images are projected. This means the projection board first helps with the geometric correction of the projector, and after that becomes the target projection surface. The renderer usually runs on a computer or SoC (system on chip) which provides enough computing ability for decoding the position information from the sensors and rendering the images or video.

2.1. Gray-Code Structured Light

As an effective method for surface reconstruction, the structured light method has been massively studied and enhanced over the last decades. In our implementation, structured light is exploited not to reconstruct the target surface but to help with the geometric correction of the projector.

Fig. 1. System architecture.

We use one-dimensional black-and-white strip-shaped Gray-code structured light to encode the projection plane's 2D coordinates; therefore two series of structured light frames are needed to cover the whole target surface, one for the horizontal coordinate X, another for the vertical coordinate Y. It is known that the Hamming distance between the Gray codes of two successive integers is always 1, which means Gray code has the characteristic of preserving local similarity. Consequently, when adjacent areas of the target surface are illuminated by the Gray-code structured light frames, they receive similar light patterns, and these patterns are decoded to similar position coordinates instead of dramatically varying ones.
The structured light frames are projected in a certain order: frames indicating large-scale positions are followed by frames indicating small-scale positions (Fig. 2), and for adjacent surface regions, only the least significant bits of the projected patterns may differ. In a word, Gray-code structured light has a certain degree of fault tolerance when used to identify different regions of the target projection surfaces.

Fig. 2. Gray-code structured light frames.
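To make the encoding concrete, here is a minimal sketch in Python (the function names are ours, not from the authors' implementation) that generates the per-column bit for each of the 8 strip frames and verifies the Hamming-distance-1 property of adjacent codes:

```python
def gray_encode(n: int) -> int:
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def strip_patterns(width: int, n_frames: int):
    """For each frame t (frame 0 carries the most significant bit),
    return a list of 0/1 values: whether each column is lit or dark."""
    frames = []
    for t in range(n_frames):
        bit = n_frames - 1 - t
        frames.append([(gray_encode(x) >> bit) & 1 for x in range(width)])
    return frames

# Adjacent columns differ in exactly one frame (Hamming distance 1),
# which is why decoding errors stay local instead of jumping wildly:
assert all(bin(gray_encode(x) ^ gray_encode(x + 1)).count("1") == 1
           for x in range(255))
```

With 8 frames and a 1024-pixel-wide projector, each of the 256 codes covers a 4-pixel-wide column, which matches the location resolution reported in the experiments.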

2.2. MCU and Wireless Transport Module

The devices used in our implementation include a micro-control unit (MCU), wireless transport modules (WTM), optic sensors and wired optic fibers. The logical relationships are depicted in Fig. 3. The MCU is an AVR ATmega328P chip running at 20 MHz, with multiple (at least 6) built-in channels of analog-to-digital converter (ADC); therefore multiple independent channels of light signals can be processed simultaneously. We adopt the nRF24L01 as the WTM, in favor of its ability to read/write multiple pipes of data concurrently. What's more, nRF24L01 modules can be extended by cascading them in logical layers, so a massive number of sensors are able to join the network, which is a very important characteristic in our implementation. The communication between the projection board and the renderer is done through two kinds of WTMs: one and only one WTM at the renderer side acts as the server node, while the WTMs on the boards, of which there may be many, act as client nodes.

Fig. 3. Logic relationship among MCU, sensors, WTM and renderer.

2.3. Projection Board

How to precisely segment the Gray-code structured light strips and extract the edge positions is the key to vision-based algorithms, and this is usually rather complicated because of the difficulty of locating strips and identifying vague boundaries. We leverage optic sensors to overcome those difficulties. Optic sensors are sensitive to lightness changes, which are treated as analog signals to be digitized. When the sensors are wired with optic fiber whose diameter is only 1 millimeter, light signals are transmitted from the optic fiber tip embedded in the projection board to the sensors themselves, and then to the MCU attached to the board. The tiny optic fiber tips are buried in the board and therefore imperceptible to human eyes, and they won't affect the quality of the images projected onto the board.
The projection boards used in our implementation can be of any shape or material, such as a rectangular acrylic board or a square wooden board. The optic fiber tips are buried in the four corners of the board, which are considered the four points of interest (POI). As mentioned before, we use black-and-white structured light to illuminate the projection board; thus the optic fiber tips at the corners receive signals corresponding to their locations in the 2D projection plane. The MCU attached to the board translates the location signals received by the optic sensors into projection plane coordinates, and these coordinates are sent to the renderer via the wireless transport module. The program running on the MCU works in a so-called passive mode; that is to say, the MCU responds only when it receives valid instructions from the renderer. There are two types of instructions for now: illumination detection (ID) and coordinates detection (CD). The former intends to determine the proper threshold for digitizing the analog signal of illumination intensity. For each light sensor, the illumination intensity results from the superimposition of the varying structured light patterns and the ambient light; therefore it is necessary to calibrate the illumination threshold for decoding the right bits of the structured light patterns. For instance, when the MCU receives an ID instruction, it samples the illumination intensities of the i-th sensor under the all-white frame and the all-black frame, namely I_W and I_B, and averages them to get the threshold T_i. When the MCU receives a CD instruction, it begins the procedure of detecting coordinates. For the i-th sensor, the illumination intensities sampled from the structured light frame patterns, denoted I_t, are compared with T_i to determine the bits that compose the Gray code, which is then decoded into the sensor's corresponding coordinates on the source projection image plane.
In our implementation, we have 4 sensors embedded in the projection board's corners and 8 Gray-code structured light frames for each dimension, so the above procedure can be informally described as Eq. (1) ~ Eq. (4) below:

T_i = (I_{W,i} + I_{B,i}) / 2,  (0 ≤ i ≤ 3),  (1)

b_{t,i}^X = 1 if I_{t,i}^X > T_i, otherwise 0,  (0 ≤ t ≤ 7, 0 ≤ i ≤ 3),  (2)

b_{t,i}^Y = 1 if I_{t,i}^Y > T_i, otherwise 0,  (0 ≤ t ≤ 7, 0 ≤ i ≤ 3),  (3)

B_i = OR_{t=0}^{7} ( b_{t,i} << (7 − t) ),  (0 ≤ i ≤ 3),  (4)

The symbol OR in Eq. (4) is the bitwise OR operator, and << is the bitwise left-shift operator; B_i is computed separately for the X and the Y series of frames. For a given i-th sensor, once B_i is determined, the sensor's corresponding coordinates on the projection image can be decoded through the following Gray-code decoding algorithm, given in the form of pseudo code.
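Eqs. (1) ~ (4) can be sketched in a few lines of Python (function names and the ADC readings below are hypothetical, for illustration only; shift-and-accumulate is equivalent to OR-ing b_t << (7 − t)):

```python
def threshold(i_white: float, i_black: float) -> float:
    """Eq. (1): midpoint of the all-white and all-black readings."""
    return (i_white + i_black) / 2.0

def pack_gray(samples, t_i: float) -> int:
    """Eqs. (2)-(4): binarize the 8 per-frame intensities (frame 0 first,
    i.e. most significant bit first) and combine them into one 8-bit Gray code."""
    code = 0
    for s in samples:
        code = (code << 1) | (1 if s > t_i else 0)
    return code

# Hypothetical 10-bit ADC readings for one sensor:
t = threshold(900, 100)                               # midpoint threshold 500.0
g = pack_gray([850, 120, 870, 90, 880, 860, 100, 95], t)
```

The same packing runs once for the horizontal frame series and once for the vertical series, yielding the two Gray codes per sensor.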

0) C_i = B_i
1) M = B_i >> 1
2) If M ≠ 0, do Steps 3 ~ 5, otherwise go to Step 6
3) C_i = C_i ^ M
4) M = M >> 1
5) Go to Step 2
6) Return C_i as the i-th sensor's corresponding 2D coordinate on the projection image plane.

In the above algorithm, >> is the bitwise right-shift operator, and ^ is the exclusive-OR operator. When all the C_i (0 ≤ i ≤ 3) are calculated, these coordinates are sent to the renderer by the WTM. There may be more than one WTM acting as a client node, especially in the case that two or more projection boards exist under the projection area. In this case, each client WTM receives instructions on the same local device address while sending feedback to a different remote pipe number when communicating with the server WTM; the server WTM issues ID or CD instructions in broadcast fashion to one unique device address and receives the feedback results from the client WTMs through different receiving pipes.

Fig. 4. Projection boards embedded with MCU and sensors. (a) Two boards with different shapes and materials. (b) MCU: AVR ATmega328P; WTM: nRF24L01. (c) Optic fiber tips are buried in the corners of the board. (d) The installation angles of the fiber tips should be as perpendicular to the board as possible.

2.4. Renderer and Projector

In practical terms, we would not like to impose overly strict spatial constraints on the relationship between the projector and the projection surfaces, because adjusting the projector's position and orientation is a time-consuming and laborious process. Instead, the renderer is responsible for the geometric correction job. The renderer runs on a platform with enough computing ability, such as a modern PC or SoC. The development libraries mainly include OpenCV, OpenGL and the Qt framework. OpenCV handles the structured light frame generation and applies the transform matrices to the projection images; OpenGL and the Qt framework are in charge of displaying the rendered results on the projector.
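The step-by-step Gray-code decoding algorithm above translates directly into code; a minimal Python version (the function name is ours):

```python
def gray_decode(b: int) -> int:
    """XOR-fold a binary-reflected Gray code back to a plain binary index,
    mirroring Steps 0-6 of the pseudo code."""
    c = b          # Step 0
    m = b >> 1     # Step 1
    while m != 0:  # Step 2
        c ^= m     # Step 3
        m >>= 1    # Step 4 (then loop back, Step 5)
    return c       # Step 6

# Round-trip check against the standard Gray encoding n ^ (n >> 1):
assert all(gray_decode(n ^ (n >> 1)) == n for n in range(256))
```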
Just as the name implies, the renderer renders the source images to make the projected images fit the target projection surface and be enjoyable to human eyes. To achieve this goal, the correspondence between 3D points and 2D image points should be estimated first. In our implementation, the projection boards can be considered planes, and for each board there are 4 points of interest (POI). For the i-th POI, the 3D coordinate is denoted as P_i = [X, Y, Z, 1]^T, and the mapped 2D image point is denoted as C_i = [x, y, 1]^T; then the homography can be expressed as:

s C_i = H Q_i,  (5)

The parameter s is an arbitrary scale factor, and H is a homography matrix. In the previous section, all C_i from the sensors have been obtained. Without loss of generality, the world coordinate system can be centered on the known-size board, so the Z coordinate of P_i can be set to 0 and P_i reduces to the form Q_i = [X, Y, 1]^T, where X and Y can simply be assigned values related to the board's dimensions. For instance, if the projection board is a square board embedded with 4 sensors, then Q_0, Q_1, Q_2 and Q_3 can be assumed to be [0, 0, 1]^T, [0, 1, 1]^T, [1, 1, 1]^T and [1, 0, 1]^T respectively. Mathematically, given four (Q_i, C_i) pairs, the homography matrix H can be solved up to the scale factor s, which controls the scale of the projected image on the plane. In our implementation, the scale factor s need not be estimated, because what we want to do is pre-warp a 2D image, whose proportions approximate those of the projection board, to fit the polygon defined by C_0 ~ C_3; in other words, we can use the 2D image's width (w) and height (h) to set Q_0 ~ Q_3 to [0, 0, 1]^T, [0, h, 1]^T, [w, h, 1]^T and [w, 0, 1]^T. The homography matrix with the scale factor s can then be easily solved using Eq.
(6) below:

s_i C_i = (sH) Q_i,  (i = 0, 1, 2, 3),  (6)

Stacking the four correspondences yields eight linear equations, enough to determine the eight unknown ratios of the entries of sH. The renderer then applies the sH transform matrix to the images to be projected towards the specified board and sends the rendered frame to the projector, thus obtaining an undistorted projection on that board. This method remains valid when there are two or more projection boards in the projection area: each projection board has its corresponding (sH)_k, so multi-board projection is also possible for the renderer.
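As a sketch of how sH can be solved from four (Q_i, C_i) pairs, the following Python/NumPy direct linear transform fixes h33 = 1 and solves the resulting 8x8 linear system (the paper's implementation uses OpenCV, where cv2.getPerspectiveTransform does the same job; the corner coordinates below are made up for the demo):

```python
import numpy as np

def homography_from_4(quad_src, quad_dst):
    """Solve the 8 unknowns of H (with h33 fixed to 1) from four point
    correspondences, as in Eq. (5)/(6): s C_i = H Q_i."""
    A, b = [], []
    for (X, Y), (x, y) in zip(quad_src, quad_dst):
        # x = (h11 X + h12 Y + h13) / (h31 X + h32 Y + 1), similarly for y,
        # rearranged into two linear equations per correspondence:
        A.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y]); b.append(x)
        A.append([0, 0, 0, X, Y, 1, -y * X, -y * Y]); b.append(y)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Map the source image rectangle (w x h) onto the quadrilateral C0..C3
# reported by the four corner sensors (hypothetical coordinates):
w, h = 640, 480
src = [(0, 0), (0, h), (w, h), (w, 0)]      # Q_0 .. Q_3
dst = [(12, 8), (20, 450), (610, 470), (600, 15)]  # C_0 .. C_3
H = homography_from_4(src, dst)
```

The resulting matrix can then be applied to the whole image, e.g. with cv2.warpPerspective, to obtain the pre-warped frame sent to the projector.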

3. Experimental Results

The experiment is designed to verify the feasibility and effectiveness of our design and algorithms. We use one projector (BenQ MP525P, DLP, XGA: 1024x768), two projection boards embedded with MCU, WTM and optic sensors, and one PC running the renderer. The two projection boards are randomly posed towards the projector, ensuring that the optic fiber tips can be illuminated by the projector. Experiments were carried out under both good lighting conditions (daytime) and bad lighting conditions (nighttime) several times, and the placements of the two boards were changed every time. We use 8 frames of Gray-coded structured light patterns, so the maximum location inaccuracy is ±4 pixels, and the projection quality is acceptable to human eyes. The whole time from issuing the ID and CD instructions to projecting images onto the target boards is merely about 3~4 seconds, which is much faster than vision-based approaches, and no additional human operations are required. What's more, the board's tilt angle relative to the projector's optic axis can be rather large (up to ±60°), which means the boards can be posed with very few spatial constraints. The experimental results can be seen in Fig. 5. There are two different-sized boards placed at different positions and orientations. The whole geometric correction procedure is carried out automatically, and the final projection quality is satisfying. When the board's tilt angle relative to the projector's optic axis gets too large, e.g., larger than 70°, the projection begins to be unstable: the images can no longer be projected to fit the board's surface. The reason is that the light signal received by the sensors decays when the tilt angle gets too large, causing wrong coordinates to be obtained by the renderer. This problem cannot be solved completely, but it can be alleviated to some extent.
Lee and Dietz [7] suggest one possible method: using a light diffuser on the optic fiber tips to help bounce light into the fiber even at large tilt angles.

4. Conclusion

A novel approach to geometric correction of projection with the help of structured light and optic sensors has been implemented and studied in this paper. This approach does not require additional cameras to get the 3D-2D correspondence, but relies on the optic sensors embedded in the board and the Gray-code structured light frames. Since it is not vision based, the ambient illumination and the background conditions do not interfere with POI locating. We also adopt a one-to-many paradigm for the wireless communication between the sensors and the renderer, making it possible for multiple projection boards to be located simultaneously, without costing additional time. The locating precision of the projection boards, or of the optic sensors, is important to the whole process of geometric correction, and it depends on many factors such as the number of structured light frames, the fabrication technology of the sensors and optic fibers, the location decoding algorithms, etc. All of these can be improved to achieve better locating precision, which makes our approach feasible and extendable. In our implementation, the optic fibers are buried in the projection boards, but that does not mean we have to drill holes in the projection surfaces in actual use. The whole MCU, WTM and sensor assembly could be integrated into a small mobile module, which could then be attached at the designated locations of the target surface like a sticker. When the locating procedure is done, this module can be removed, so the surface remains intact.

Fig. 5. (a) Two boards placed at different positions and orientations. (b) Detecting illumination thresholds when the MCUs receive ID instructions. (c) Projecting horizontal Gray-code structured light frames towards the boards.
(d) Projecting vertical Gray-code structured light frames towards the boards. (e) The pre-warped projection images after applying homography matrices. (f) Projecting images onto the boards without distortion.

References

[1]. Martynov I., Kamarainen J. K., Lensu L., Projector Calibration by Inverse Camera Calibration, in Proceedings of the 17th Scandinavian Conference on Image Analysis, Vol. 5, 2011.
[2]. Raskar R., Beardsley P., A Self-Correcting Projector, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 2, 2001.

[3]. Yamazaki S., Mochimaru M., Kanade T., Simultaneous Self-Calibration of a Projector and a Camera Using Structured Light, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2011.
[4]. Gao W., Wang L., Hu Z. Y., Flexible Calibration of a Portable Structured Light System through Surface Plane, Acta Automatica Sinica, Vol. 34, Issue 11, 2008.
[5]. Lu J., Song C. Y., Structured Light System Calibration Based on Gray Code Combined with Line-Shift, Journal of Optoelectronics Laser, Vol. 23, Issue 6, 2012.
[6]. Audet S., Okutomi M., A User-Friendly Method to Geometrically Calibrate Projector-Camera Systems, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2009.
[7]. Lee J. C., Dietz P. H., Maynes-Aminzade D., et al., Automatic Projector Calibration with Embedded Light Sensors, in Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology (UIST), 2004.
[8]. Fernandez S., Salvi J., Planar-Based Camera-Projector Calibration, in Proceedings of the International Symposium on Image and Signal Processing and Analysis (ISPA), 2011.
[9]. Hurtos T., Falcao G., Massich J., Plane-Based Calibration of a Projector-Camera System, VIBOT master, Vol. 1, 2008.

Copyright, International Frequency Sensor Association (IFSA) Publishing, S. L. All rights reserved.


More information

Ch 22 Inspection Technologies

Ch 22 Inspection Technologies Ch 22 Inspection Technologies Sections: 1. Inspection Metrology 2. Contact vs. Noncontact Inspection Techniques 3. Conventional Measuring and Gaging Techniques 4. Coordinate Measuring Machines 5. Surface

More information

Subpixel Corner Detection Using Spatial Moment 1)

Subpixel Corner Detection Using Spatial Moment 1) Vol.31, No.5 ACTA AUTOMATICA SINICA September, 25 Subpixel Corner Detection Using Spatial Moment 1) WANG She-Yang SONG Shen-Min QIANG Wen-Yi CHEN Xing-Lin (Department of Control Engineering, Harbin Institute

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

IRIS SEGMENTATION OF NON-IDEAL IMAGES

IRIS SEGMENTATION OF NON-IDEAL IMAGES IRIS SEGMENTATION OF NON-IDEAL IMAGES William S. Weld St. Lawrence University Computer Science Department Canton, NY 13617 Xiaojun Qi, Ph.D Utah State University Computer Science Department Logan, UT 84322

More information

A three-step system calibration procedure with error compensation for 3D shape measurement

A three-step system calibration procedure with error compensation for 3D shape measurement January 10, 2010 / Vol. 8, No. 1 / CHINESE OPTICS LETTERS 33 A three-step system calibration procedure with error compensation for 3D shape measurement Haihua Cui ( ), Wenhe Liao ( ), Xiaosheng Cheng (

More information

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University. 3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction

More information

Advanced Vision Guided Robotics. David Bruce Engineering Manager FANUC America Corporation

Advanced Vision Guided Robotics. David Bruce Engineering Manager FANUC America Corporation Advanced Vision Guided Robotics David Bruce Engineering Manager FANUC America Corporation Traditional Vision vs. Vision based Robot Guidance Traditional Machine Vision Determine if a product passes or

More information

Projection Center Calibration for a Co-located Projector Camera System

Projection Center Calibration for a Co-located Projector Camera System Projection Center Calibration for a Co-located Camera System Toshiyuki Amano Department of Computer and Communication Science Faculty of Systems Engineering, Wakayama University Sakaedani 930, Wakayama,

More information

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press,   ISSN ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information

More information

An Image Based 3D Reconstruction System for Large Indoor Scenes

An Image Based 3D Reconstruction System for Large Indoor Scenes 36 5 Vol. 36, No. 5 2010 5 ACTA AUTOMATICA SINICA May, 2010 1 1 2 1,,,..,,,,. : 1), ; 2), ; 3),.,,. DOI,,, 10.3724/SP.J.1004.2010.00625 An Image Based 3D Reconstruction System for Large Indoor Scenes ZHANG

More information

Real Time Exact 3d Modeling of Objects from 2d Images using Voxelisation

Real Time Exact 3d Modeling of Objects from 2d Images using Voxelisation Real Time Exact 3d Modeling of Objects from 2d Images using Voxelisation A.Sidhaarthan* B.Bhuvaneshwari N.Jayanth ABSTRACT Reconstruction of 3D object models using voxelisation can be a better way for

More information

CLUTCHING AND LAYER-SWITCHING: INTERACTION TECHNIQUES FOR PROJECTION-PHONE

CLUTCHING AND LAYER-SWITCHING: INTERACTION TECHNIQUES FOR PROJECTION-PHONE CLUCHING AND LAYER-SWICHING: INERACION ECHNIQUES FOR PROJECION-PHONE S. SEO, B. SHIZUKI AND J. ANAKA Department of Computer Science, Graduate School of Systems and Information Engineering, University of

More information

Research on QR Code Image Pre-processing Algorithm under Complex Background

Research on QR Code Image Pre-processing Algorithm under Complex Background Scientific Journal of Information Engineering May 207, Volume 7, Issue, PP.-7 Research on QR Code Image Pre-processing Algorithm under Complex Background Lei Liu, Lin-li Zhou, Huifang Bao. Institute of

More information

Multimedia Technology CHAPTER 4. Video and Animation

Multimedia Technology CHAPTER 4. Video and Animation CHAPTER 4 Video and Animation - Both video and animation give us a sense of motion. They exploit some properties of human eye s ability of viewing pictures. - Motion video is the element of multimedia

More information

Adaptive Skin Color Classifier for Face Outline Models

Adaptive Skin Color Classifier for Face Outline Models Adaptive Skin Color Classifier for Face Outline Models M. Wimmer, B. Radig, M. Beetz Informatik IX, Technische Universität München, Germany Boltzmannstr. 3, 87548 Garching, Germany [wimmerm, radig, beetz]@informatik.tu-muenchen.de

More information

Augmenting Reality with Projected Interactive Displays

Augmenting Reality with Projected Interactive Displays Augmenting Reality with Projected Interactive Displays Claudio Pinhanez IBM T.J. Watson Research Center, P.O. Box 218 Yorktown Heights, N.Y. 10598, USA Abstract. This paper examines a steerable projection

More information

Measurement of Pedestrian Groups Using Subtraction Stereo

Measurement of Pedestrian Groups Using Subtraction Stereo Measurement of Pedestrian Groups Using Subtraction Stereo Kenji Terabayashi, Yuki Hashimoto, and Kazunori Umeda Chuo University / CREST, JST, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan terabayashi@mech.chuo-u.ac.jp

More information

Structured Light II. Guido Gerig CS 6320, Spring (thanks: slides Prof. S. Narasimhan, CMU, Marc Pollefeys, UNC)

Structured Light II. Guido Gerig CS 6320, Spring (thanks: slides Prof. S. Narasimhan, CMU, Marc Pollefeys, UNC) Structured Light II Guido Gerig CS 6320, Spring 2013 (thanks: slides Prof. S. Narasimhan, CMU, Marc Pollefeys, UNC) http://www.cs.cmu.edu/afs/cs/academic/class/15385- s06/lectures/ppts/lec-17.ppt Variant

More information

Color Characterization and Calibration of an External Display

Color Characterization and Calibration of an External Display Color Characterization and Calibration of an External Display Andrew Crocker, Austin Martin, Jon Sandness Department of Math, Statistics, and Computer Science St. Olaf College 1500 St. Olaf Avenue, Northfield,

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

Defect Inspection of Liquid-Crystal-Display (LCD) Panels in Repetitive Pattern Images Using 2D Fourier Image Reconstruction

Defect Inspection of Liquid-Crystal-Display (LCD) Panels in Repetitive Pattern Images Using 2D Fourier Image Reconstruction Defect Inspection of Liquid-Crystal-Display (LCD) Panels in Repetitive Pattern Images Using D Fourier Image Reconstruction Du-Ming Tsai, and Yan-Hsin Tseng Department of Industrial Engineering and Management

More information

Postprint.

Postprint. http://www.diva-portal.org Postprint This is the accepted version of a paper presented at 14th International Conference of the Biometrics Special Interest Group, BIOSIG, Darmstadt, Germany, 9-11 September,

More information

Rectification of distorted elemental image array using four markers in three-dimensional integral imaging

Rectification of distorted elemental image array using four markers in three-dimensional integral imaging Rectification of distorted elemental image array using four markers in three-dimensional integral imaging Hyeonah Jeong 1 and Hoon Yoo 2 * 1 Department of Computer Science, SangMyung University, Korea.

More information

: Easy 3D Calibration of laser triangulation systems. Fredrik Nilsson Product Manager, SICK, BU Vision

: Easy 3D Calibration of laser triangulation systems. Fredrik Nilsson Product Manager, SICK, BU Vision : Easy 3D Calibration of laser triangulation systems Fredrik Nilsson Product Manager, SICK, BU Vision Using 3D for Machine Vision solutions : 3D imaging is becoming more important and well accepted for

More information

Miniaturized Camera Systems for Microfactories

Miniaturized Camera Systems for Microfactories Miniaturized Camera Systems for Microfactories Timo Prusi, Petri Rokka, and Reijo Tuokko Tampere University of Technology, Department of Production Engineering, Korkeakoulunkatu 6, 33720 Tampere, Finland

More information

Shift estimation method based fringe pattern profilometry and performance comparison

Shift estimation method based fringe pattern profilometry and performance comparison University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2005 Shift estimation method based fringe pattern profilometry and performance

More information

HIGH SPEED 3-D MEASUREMENT SYSTEM USING INCOHERENT LIGHT SOURCE FOR HUMAN PERFORMANCE ANALYSIS

HIGH SPEED 3-D MEASUREMENT SYSTEM USING INCOHERENT LIGHT SOURCE FOR HUMAN PERFORMANCE ANALYSIS HIGH SPEED 3-D MEASUREMENT SYSTEM USING INCOHERENT LIGHT SOURCE FOR HUMAN PERFORMANCE ANALYSIS Takeo MIYASAKA, Kazuhiro KURODA, Makoto HIROSE and Kazuo ARAKI School of Computer and Cognitive Sciences,

More information

3D Modeling of Objects Using Laser Scanning

3D Modeling of Objects Using Laser Scanning 1 3D Modeling of Objects Using Laser Scanning D. Jaya Deepu, LPU University, Punjab, India Email: Jaideepudadi@gmail.com Abstract: In the last few decades, constructing accurate three-dimensional models

More information

A New Algorithm for Shape Detection

A New Algorithm for Shape Detection IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661,p-ISSN: 2278-8727, Volume 19, Issue 3, Ver. I (May.-June. 2017), PP 71-76 www.iosrjournals.org A New Algorithm for Shape Detection Hewa

More information

Model-based Enhancement of Lighting Conditions in Image Sequences

Model-based Enhancement of Lighting Conditions in Image Sequences Model-based Enhancement of Lighting Conditions in Image Sequences Peter Eisert and Bernd Girod Information Systems Laboratory Stanford University {eisert,bgirod}@stanford.edu http://www.stanford.edu/ eisert

More information

Dynamic three-dimensional sensing for specular surface with monoscopic fringe reflectometry

Dynamic three-dimensional sensing for specular surface with monoscopic fringe reflectometry Dynamic three-dimensional sensing for specular surface with monoscopic fringe reflectometry Lei Huang,* Chi Seng Ng, and Anand Krishna Asundi School of Mechanical and Aerospace Engineering, Nanyang Technological

More information

A User-Friendly Method to Geometrically Calibrate Projector-Camera Systems

A User-Friendly Method to Geometrically Calibrate Projector-Camera Systems A User-Friendly Method to Geometrically Calibrate Projector-Camera Systems Samuel Audet and Masatoshi Okutomi Tokyo Institute of Technology 2-12-1 Ookayama, Meguro-ku, Tokyo, Japan saudet@ok.ctrl.titech.ac.jp

More information

An Algorithm for Seamless Image Stitching and Its Application

An Algorithm for Seamless Image Stitching and Its Application An Algorithm for Seamless Image Stitching and Its Application Jing Xing, Zhenjiang Miao, and Jing Chen Institute of Information Science, Beijing JiaoTong University, Beijing 100044, P.R. China Abstract.

More information

AUTOMATED CALIBRATION TECHNIQUE FOR PHOTOGRAMMETRIC SYSTEM BASED ON A MULTI-MEDIA PROJECTOR AND A CCD CAMERA

AUTOMATED CALIBRATION TECHNIQUE FOR PHOTOGRAMMETRIC SYSTEM BASED ON A MULTI-MEDIA PROJECTOR AND A CCD CAMERA AUTOMATED CALIBRATION TECHNIQUE FOR PHOTOGRAMMETRIC SYSTEM BASED ON A MULTI-MEDIA PROJECTOR AND A CCD CAMERA V. A. Knyaz * GosNIIAS, State Research Institute of Aviation System, 539 Moscow, Russia knyaz@gosniias.ru

More information

Introduction to 3D Machine Vision

Introduction to 3D Machine Vision Introduction to 3D Machine Vision 1 Many methods for 3D machine vision Use Triangulation (Geometry) to Determine the Depth of an Object By Different Methods: Single Line Laser Scan Stereo Triangulation

More information

5LSH0 Advanced Topics Video & Analysis

5LSH0 Advanced Topics Video & Analysis 1 Multiview 3D video / Outline 2 Advanced Topics Multimedia Video (5LSH0), Module 02 3D Geometry, 3D Multiview Video Coding & Rendering Peter H.N. de With, Sveta Zinger & Y. Morvan ( p.h.n.de.with@tue.nl

More information

Fabric Defect Detection Based on Computer Vision

Fabric Defect Detection Based on Computer Vision Fabric Defect Detection Based on Computer Vision Jing Sun and Zhiyu Zhou College of Information and Electronics, Zhejiang Sci-Tech University, Hangzhou, China {jings531,zhouzhiyu1993}@163.com Abstract.

More information

Robust color segmentation algorithms in illumination variation conditions

Robust color segmentation algorithms in illumination variation conditions 286 CHINESE OPTICS LETTERS / Vol. 8, No. / March 10, 2010 Robust color segmentation algorithms in illumination variation conditions Jinhui Lan ( ) and Kai Shen ( Department of Measurement and Control Technologies,

More information

METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS

METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS M. Lefler, H. Hel-Or Dept. of CS, University of Haifa, Israel Y. Hel-Or School of CS, IDC, Herzliya, Israel ABSTRACT Video analysis often requires

More information

Camera Models and Image Formation. Srikumar Ramalingam School of Computing University of Utah

Camera Models and Image Formation. Srikumar Ramalingam School of Computing University of Utah Camera Models and Image Formation Srikumar Ramalingam School of Computing University of Utah srikumar@cs.utah.edu VisualFunHouse.com 3D Street Art Image courtesy: Julian Beaver (VisualFunHouse.com) 3D

More information

ENGN D Photography / Spring 2018 / SYLLABUS

ENGN D Photography / Spring 2018 / SYLLABUS ENGN 2502 3D Photography / Spring 2018 / SYLLABUS Description of the proposed course Over the last decade digital photography has entered the mainstream with inexpensive, miniaturized cameras routinely

More information

A Robust Two Feature Points Based Depth Estimation Method 1)

A Robust Two Feature Points Based Depth Estimation Method 1) Vol.31, No.5 ACTA AUTOMATICA SINICA September, 2005 A Robust Two Feature Points Based Depth Estimation Method 1) ZHONG Zhi-Guang YI Jian-Qiang ZHAO Dong-Bin (Laboratory of Complex Systems and Intelligence

More information

CT Reconstruction with Good-Orientation and Layer Separation for Multilayer Objects

CT Reconstruction with Good-Orientation and Layer Separation for Multilayer Objects 17th World Conference on Nondestructive Testing, 25-28 Oct 2008, Shanghai, China CT Reconstruction with Good-Orientation and Layer Separation for Multilayer Objects Tong LIU 1, Brian Stephan WONG 2, Tai

More information

Numerical Recognition in the Verification Process of Mechanical and Electronic Coal Mine Anemometer

Numerical Recognition in the Verification Process of Mechanical and Electronic Coal Mine Anemometer , pp.436-440 http://dx.doi.org/10.14257/astl.2013.29.89 Numerical Recognition in the Verification Process of Mechanical and Electronic Coal Mine Anemometer Fanjian Ying 1, An Wang*, 1,2, Yang Wang 1, 1

More information

Research on Evaluation Method of Video Stabilization

Research on Evaluation Method of Video Stabilization International Conference on Advanced Material Science and Environmental Engineering (AMSEE 216) Research on Evaluation Method of Video Stabilization Bin Chen, Jianjun Zhao and i Wang Weapon Science and

More information

Robot Localization based on Geo-referenced Images and G raphic Methods

Robot Localization based on Geo-referenced Images and G raphic Methods Robot Localization based on Geo-referenced Images and G raphic Methods Sid Ahmed Berrabah Mechanical Department, Royal Military School, Belgium, sidahmed.berrabah@rma.ac.be Janusz Bedkowski, Łukasz Lubasiński,

More information

Performance Study of Quaternion and Matrix Based Orientation for Camera Calibration

Performance Study of Quaternion and Matrix Based Orientation for Camera Calibration Performance Study of Quaternion and Matrix Based Orientation for Camera Calibration Rigoberto Juarez-Salazar 1, Carlos Robledo-Sánchez 2, Fermín Guerrero-Sánchez 2, J. Jacobo Oliveros-Oliveros 2, C. Meneses-Fabian

More information

3D Reconstruction from Scene Knowledge

3D Reconstruction from Scene Knowledge Multiple-View Reconstruction from Scene Knowledge 3D Reconstruction from Scene Knowledge SYMMETRY & MULTIPLE-VIEW GEOMETRY Fundamental types of symmetry Equivalent views Symmetry based reconstruction MUTIPLE-VIEW

More information

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong)

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) References: [1] http://homepages.inf.ed.ac.uk/rbf/hipr2/index.htm [2] http://www.cs.wisc.edu/~dyer/cs540/notes/vision.html

More information

URBAN STRUCTURE ESTIMATION USING PARALLEL AND ORTHOGONAL LINES

URBAN STRUCTURE ESTIMATION USING PARALLEL AND ORTHOGONAL LINES URBAN STRUCTURE ESTIMATION USING PARALLEL AND ORTHOGONAL LINES An Undergraduate Research Scholars Thesis by RUI LIU Submitted to Honors and Undergraduate Research Texas A&M University in partial fulfillment

More information

3D Polygon Rendering. Many applications use rendering of 3D polygons with direct illumination

3D Polygon Rendering. Many applications use rendering of 3D polygons with direct illumination Rendering Pipeline 3D Polygon Rendering Many applications use rendering of 3D polygons with direct illumination 3D Polygon Rendering What steps are necessary to utilize spatial coherence while drawing

More information

Analysis Range-Free Node Location Algorithm in WSN

Analysis Range-Free Node Location Algorithm in WSN International Conference on Education, Management and Computer Science (ICEMC 2016) Analysis Range-Free Node Location Algorithm in WSN Xiaojun Liu1, a and Jianyu Wang1 1 School of Transportation Huanggang

More information

A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction

A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction Jaemin Lee and Ergun Akleman Visualization Sciences Program Texas A&M University Abstract In this paper we present a practical

More information

FLY THROUGH VIEW VIDEO GENERATION OF SOCCER SCENE

FLY THROUGH VIEW VIDEO GENERATION OF SOCCER SCENE FLY THROUGH VIEW VIDEO GENERATION OF SOCCER SCENE Naho INAMOTO and Hideo SAITO Keio University, Yokohama, Japan {nahotty,saito}@ozawa.ics.keio.ac.jp Abstract Recently there has been great deal of interest

More information

Car License Plate Detection Based on Line Segments

Car License Plate Detection Based on Line Segments , pp.99-103 http://dx.doi.org/10.14257/astl.2014.58.21 Car License Plate Detection Based on Line Segments Dongwook Kim 1, Liu Zheng Dept. of Information & Communication Eng., Jeonju Univ. Abstract. In

More information

MODERN DIMENSIONAL MEASURING TECHNIQUES BASED ON OPTICAL PRINCIPLES

MODERN DIMENSIONAL MEASURING TECHNIQUES BASED ON OPTICAL PRINCIPLES MODERN DIMENSIONAL MEASURING TECHNIQUES BASED ON OPTICAL PRINCIPLES J. Reichweger 1, J. Enzendorfer 1 and E. Müller 2 1 Steyr Daimler Puch Engineering Center Steyr GmbH Schönauerstrasse 5, A-4400 Steyr,

More information

Handy Rangefinder for Active Robot Vision

Handy Rangefinder for Active Robot Vision Handy Rangefinder for Active Robot Vision Kazuyuki Hattori Yukio Sato Department of Electrical and Computer Engineering Nagoya Institute of Technology Showa, Nagoya 466, Japan Abstract A compact and high-speed

More information

Removing Shadows from Images

Removing Shadows from Images Removing Shadows from Images Zeinab Sadeghipour Kermani School of Computing Science Simon Fraser University Burnaby, BC, V5A 1S6 Mark S. Drew School of Computing Science Simon Fraser University Burnaby,

More information

A 3-D Scanner Capturing Range and Color for the Robotics Applications

A 3-D Scanner Capturing Range and Color for the Robotics Applications J.Haverinen & J.Röning, A 3-D Scanner Capturing Range and Color for the Robotics Applications, 24th Workshop of the AAPR - Applications of 3D-Imaging and Graph-based Modeling, May 25-26, Villach, Carinthia,

More information

CHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION

CHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION CHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION In this chapter we will discuss the process of disparity computation. It plays an important role in our caricature system because all 3D coordinates of nodes

More information

Vision. OCR and OCV Application Guide OCR and OCV Application Guide 1/14

Vision. OCR and OCV Application Guide OCR and OCV Application Guide 1/14 Vision OCR and OCV Application Guide 1.00 OCR and OCV Application Guide 1/14 General considerations on OCR Encoded information into text and codes can be automatically extracted through a 2D imager device.

More information

Exterior Orientation Parameters

Exterior Orientation Parameters Exterior Orientation Parameters PERS 12/2001 pp 1321-1332 Karsten Jacobsen, Institute for Photogrammetry and GeoInformation, University of Hannover, Germany The georeference of any photogrammetric product

More information

A novel point matching method for stereovision measurement using RANSAC affine transformation

A novel point matching method for stereovision measurement using RANSAC affine transformation A novel point matching method for stereovision measurement using RANSAC affine transformation Naiguang Lu, Peng Sun, Wenyi Deng, Lianqing Zhu, Xiaoping Lou School of Optoelectronic Information & Telecommunication

More information

ECE-161C Cameras. Nuno Vasconcelos ECE Department, UCSD

ECE-161C Cameras. Nuno Vasconcelos ECE Department, UCSD ECE-161C Cameras Nuno Vasconcelos ECE Department, UCSD Image formation all image understanding starts with understanding of image formation: projection of a scene from 3D world into image on 2D plane 2

More information

LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION

LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION F2008-08-099 LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION 1 Jung, Ho Gi*, 1 Kim, Dong Suk, 1 Kang, Hyoung Jin, 2 Kim, Jaihie 1 MANDO Corporation, Republic of Korea,

More information

Dynamic 3-D surface profilometry using a novel color pattern encoded with a multiple triangular model

Dynamic 3-D surface profilometry using a novel color pattern encoded with a multiple triangular model Dynamic 3-D surface profilometry using a novel color pattern encoded with a multiple triangular model Liang-Chia Chen and Xuan-Loc Nguyen Graduate Institute of Automation Technology National Taipei University

More information