LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION
1 Jung, Ho Gi*, 1 Kim, Dong Suk, 1 Kang, Hyoung Jin, 2 Kim, Jaihie
1 MANDO Corporation, Republic of Korea, 2 Yonsei University, Republic of Korea

KEYWORDS
Driver assistance system, automatic parking assistance system, collision warning, pedestrian detection, light stripe projection

ABSTRACT
The automatic parking system is one of the most interesting driver assistance systems because it is recognized as a convenience system used in daily life and is expected to become the first widely deployed driver assistance system using state-of-the-art technologies. Generally, an automatic parking system consists of three components: target position designation, path planning, and path tracking by active steering. There are various methodologies for establishing the target parking position. We proposed light stripe projection (LSP)-based target position designation to deal with parking situations under dark illumination, such as underground parking spaces (1). Although the responsibility during the parking operation rests with the driver, detection of pedestrians during the backing manoeuvre is beneficial to the driver. The authors of (2) developed vision-based pedestrian detection using moving obstacle detection; however, it assumed that the visual conditions were good enough for pedestrians to have a distinctive appearance in the captured image. This paper proposes that the LSP-based 3D reconstruction technology used in target position designation can also be used for pedestrian recognition under dark illumination conditions.

TECHNICAL PAPER

INTRODUCTION
As shown in Terry Costlow's prediction of automotive camera needs (3), vision-based parking assistance systems are attracting rapidly growing interest. As shown in Fig. 1, the number of parking assistance systems is expected to grow and reach 23,353k units/year. In particular, the automatic parking system is one of the most interesting driver assistance systems because it is recognized as a convenience system used in daily life and is expected to become the first widely deployed driver assistance system using state-of-the-art technologies.

Fig. 1. Needs prediction of automotive camera (3).
Generally, an automatic parking system consists of three components: target position designation, path planning, and path tracking by active steering. There are various methodologies for establishing the target parking position (4). We proposed light stripe projection (LSP)-based target position designation to deal with parking situations under dark illumination, such as underground parking spaces (1). The system adds one light plane projector, that is, a line laser module, to a rear-view camera-based automatic parking system. The projected light plane creates a light stripe feature (LSF) on objects, whose image, captured by the camera, can be transformed into three-dimensional (3D) information. With the reconstructed 3D information, the system recognizes clusters and finds free parking space. The system is expected to be an economical solution for dark illumination conditions, which are among the most difficult problems for vision-based approaches. Two kinds of accidents must be considered during a parking operation: collision with adjacent vehicles or pillars, and collision with a passing pedestrian. The first type is typically caused by inexact target position designation or poor tracking performance; both can be improved and guaranteed before deployment. In contrast, the second type cannot be predicted, because a passing pedestrian is not part of the automatic parking system. Therefore, although the responsibility during the parking operation rests with the driver, detection of pedestrians during the backing manoeuvre should be researched. The authors of (2) developed vision-based pedestrian detection using moving obstacle detection; however, it assumed that the visual conditions were good enough for pedestrians to have a distinctive appearance in the captured image. This paper proposes that the LSP-based 3D reconstruction technology used in target position designation can also be used for pedestrian recognition under dark illumination conditions.
The target parking position is established by the LSP-based free space detection method (1) and is then updated during the backing manoeuvre by odometry. During the parking operation, the system continuously detects LSFs and checks whether any LSF cluster in the bird's-eye view is located inside the target parking position.

LSP-BASED 3D RECONSTRUCTION

System Configuration
The proposed system can be implemented simply by installing a light plane projector at the rear end of the vehicle, as shown in Fig. 2. An NIR (near-infrared) line laser is used as the light plane projector so as not to bother nearby people. Because a parking assist system must provide a backward image to the driver, a band-pass filter, commonly used with NIR light sources, cannot be used. Instead, in order to acquire the image in the NIR range, the infrared cut filter generally attached to the camera lens is removed. To capture the backward image with a large FOV (field of view), a fisheye lens is used. With radial distortion parameters measured through a calibration procedure, the input image is rectified into an undistorted image. In order to robustly detect the light stripe without a band-pass filter, the difference image between the image with the light projector on and the image with the light projector off is used. Turning the light plane projector on for a short period, the system acquires a backward image including the light stripe. Turning the light plane projector off, the system acquires a visible-band image. The difference image between them extracts the light stripe irrespective of the surrounding environment. Furthermore, because the light plane projector is turned on for only a short time, eye-safe operation can be guaranteed even with a comparatively high-power light source (5).
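The on/off difference scheme above can be sketched in a few lines. The function name, the threshold value, and the per-column peak search are illustrative assumptions; the paper only specifies that the stripe is extracted from the difference image.

```python
import numpy as np

def extract_light_stripe(img_on, img_off, threshold=30):
    """Extract light stripe pixels from an on/off image pair.

    img_on / img_off: grayscale frames (H x W, uint8) captured with the
    line-laser projector turned on and off, respectively. Returns a dict
    mapping image column -> stripe row (the stripe is a single curve, so
    at most one pixel per column). threshold is a hypothetical tuning
    value, not taken from the paper.
    """
    # Subtract in a signed type so negative differences do not wrap around.
    diff = img_on.astype(np.int16) - img_off.astype(np.int16)
    diff = np.clip(diff, 0, 255).astype(np.uint8)

    stripe = {}
    for col in range(diff.shape[1]):
        row = int(np.argmax(diff[:, col]))   # brightest response in column
        if diff[row, col] >= threshold:      # ignore columns with no stripe
            stripe[col] = row
    return stripe
```

Because the subtraction cancels everything lit equally in both frames, the result is largely independent of the ambient illumination, which is what makes the method usable in a dark parking garage.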
Fig. 2. System configuration.

Light Stripe Projection Method
Finding the intersection between the plane Π and the line Op gives the coordinates of the stripe point P, as shown in Fig. 3. Π denotes the plane generated by the light plane projector. The line laser projected onto an object surface forms a light stripe. p(x, y) denotes the point on the image plane corresponding to a point P(X, Y, Z) on the light stripe.

Fig. 3. Relation between light plane and light stripe image.

The coordinates of P can be calculated as follows (6):

X = (x · b · tanα · cosβ) / (f · tanα − (x · sinβ + y · cosβ))
Y = (y · b · tanα · cosβ) / (f · tanα − (x · sinβ + y · cosβ))
Z = (f · b · tanα · cosβ) / (f · tanα − (x · sinβ + y · cosβ))
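The triangulation can be sketched directly from the formulas, assuming f is the focal length in pixels, b the baseline between the camera centre and the point where the light plane meets the Y-axis, α the angle between the light plane and the Y-axis, and β the angle between the light plane and the X-axis. The printed formulas are garbled in this transcription, so the exact form below is a reconstruction and should be checked against reference (6).

```python
import math

def reconstruct_point(x, y, f, b, alpha, beta):
    """Triangulate a stripe pixel p(x, y) into a 3D point P(X, Y, Z).

    All three coordinates share the same scale factor, so the returned
    point lies on the camera ray through (x, y) at the depth where that
    ray meets the light plane. Angles are in radians.
    """
    denom = f * math.tan(alpha) - (x * math.sin(beta) + y * math.cos(beta))
    k = b * math.tan(alpha) * math.cos(beta) / denom
    return x * k, y * k, f * k   # (X, Y, Z)
```

A quick consistency check: projecting the returned point back with the pinhole model gives y = f · Y / Z, i.e., the original pixel row, because X, Y, and Z differ only by the common factor k.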
The light plane Π meets the Y-axis at the point P0(0, −b, 0). The angle between the plane and the Y-axis is α, and the angle between the plane and the X-axis is β. The distance between the camera and P0, i.e., the baseline b, and the angle α are calculated by configuration normalization. Fig. 4 shows an example of reconstructed 3D information.

Fig. 4. 3D reconstruction result.

PARKING SPACE DETECTION AND TRACKING
A viable free parking space is detected in four steps: occlusion detection, pivot detection, opposite-side reference point detection, and target parking position establishment (1). Once the target parking position is established, the system generates a path plan and controls the steering angle to track it (7).

Occlusion Detection
An occlusion is defined as a pair of 3D points that are located in adjacent columns of the stripe image but are farther than a certain threshold, e.g., the ego-vehicle's length, from each other in the XZ plane. Occlusions are detected by checking these conditions on the detected light stripe. Each of the two 3D points belonging to an occlusion is either the left end or the right end of a run of consecutive stripe pixels; a left-end occlusion point and a right-end occlusion point together make one occlusion. If the left-end occlusion point of an occlusion is nearer to the camera than the right-end occlusion point, the pair is assumed to be at the left end of an interesting object. In this application, an interesting object means an object existing on the left
or right side of the free space. If the left-end occlusion point is farther from the camera than the right-end occlusion point, the occlusion is assumed to be at the right end of an interesting object. This characteristic is attached to each occlusion as directional information. In Fig. 5(a), the occlusion at the left end of an interesting object is drawn as a red line and the occlusion at the right end as a green line.

Pivot Detection
A pivot occlusion is defined as an occlusion that satisfies the following conditions: 1) There is free space in the direction of the occlusion; the free-space check region is a semicircle whose straight side is the line between the occlusion points, whose radius is the ego-vehicle's width, and whose center is the nearer occlusion point. 2) It is sufficiently far from the FOV border, e.g., by the ego-vehicle's width. 3) It is nearer to the optical axis than any other candidate. The pivot, the center of rotation, is the nearer occlusion point of the pivot occlusion. Fig. 5(b) shows the recognized pivot with a '+' marking.

Recognition of Opposite-Side Reference Point
Recognized 3D points whose distance from the pivot, in the direction of the pivot occlusion, is smaller than a certain threshold, e.g., the ego-vehicle's length, are assumed to belong to the opposite-side object. Fig. 5(c) shows the detected 3D points belonging to the opposite-side object. Among these 3D points, the one nearest to the pivot becomes the initial opposite-side reference point. Using the points whose distance from the opposite-side reference point is smaller than a certain threshold, e.g., 2/3 of the ego-vehicle's length, in the direction going away from the camera and perpendicular to the pivot, the direction of the opposite-side object's side can be estimated. Fig. 5(d) shows the estimated direction of the opposite-side object's side based on the 3D points marked in yellow. The estimated side of the opposite-side object is marked by a blue line.
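The adjacency-plus-distance rule for occlusion detection can be sketched as follows. The data layout (one reconstructed point per stripe column), the function names, and in particular the mapping from "nearer point" to "left/right end of the object" are simplified interpretations of the paper's description, not its exact implementation.

```python
import math

def detect_occlusions(stripe_points, threshold):
    """Find occlusion point pairs along a detected light stripe.

    stripe_points: list of (col, X, Z) tuples ordered by image column
    (Y is irrelevant for the XZ-plane distance test). threshold is,
    e.g., the ego-vehicle's length in metres (a hypothetical value).
    Returns (left_point, right_point, direction) triples, where
    direction says which end of an interesting object the pair marks.
    """
    occlusions = []
    for (c0, x0, z0), (c1, x1, z1) in zip(stripe_points, stripe_points[1:]):
        if c1 - c0 != 1:                      # only adjacent columns qualify
            continue
        gap = math.hypot(x1 - x0, z1 - z0)    # jump in the XZ plane
        if gap <= threshold:
            continue
        # The right member of the pair starts the next stripe run, i.e.,
        # it is the "left-end occlusion point" of that run. If it is the
        # nearer one, the object begins here: left end of the object.
        d0 = math.hypot(x0, z0)
        d1 = math.hypot(x1, z1)
        direction = 'left_end' if d1 < d0 else 'right_end'
        occlusions.append(((x0, z0), (x1, z1), direction))
    return occlusions
```

Pivot detection would then filter these candidates with the semicircular free-space test and the FOV-margin test described above.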
Establishment of Target Parking Position
A rectangular target position is established centered between the pivot and the opposite-side reference point. The target position's short side is aligned with the line between the pivot and the opposite-side reference point. The width of the target position is the width of the ego-vehicle, and its length is the length of the ego-vehicle. Fig. 5(e) shows the finally established target position, and Fig. 5(f) shows the established target position projected onto the input image.

Path Plan and Tracking
The path plan is generated by connecting three path sections: two straight line segments and one circular arc. The first part is a straight line segment passing through the vehicle center in the longitudinal direction, and the third part is the center line of the target parking position. A circular arc connects the two line segments with a mechanically allowable radius (7). To build the vehicle model, vehicle data such as wheelbase, track, length, overhang, and maximum steering angle are used. The vehicle model is the popular Ackerman (or bicycle) model, which assumes no wheel slip because of the low speed. With the vehicle model, the system can update the position of the subject vehicle. By applying the same motion inversely to the target position, the system can update the target position continuously (7).
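The odometry update from the bicycle model can be sketched as one integration step. The state layout (x, z in metres, heading in radians, rear-axle reference point) and all names are illustrative assumptions; the paper only states that the Ackerman/bicycle model with a no-slip assumption is used.

```python
import math

def update_pose(x, z, heading, speed, steering_angle, wheelbase, dt):
    """One odometry step of the kinematic bicycle model.

    Advances the vehicle position along its heading and turns the
    heading according to the front-wheel steering angle. Valid at
    parking speeds, where the no-slip assumption holds.
    """
    x += speed * dt * math.sin(heading)
    z += speed * dt * math.cos(heading)
    heading += speed * dt * math.tan(steering_angle) / wheelbase
    return x, z, heading
```

Applying the inverse of each such motion step to the target rectangle keeps the rectangle fixed in the world while the vehicle frame moves, which is how the target position stays valid during the backing manoeuvre.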
(a) Detected occlusion points and occlusions. (b) Pivot detection result. (c) Opposite-side reference points. (d) Estimated direction of the opposite-side object's side. (e) Established target position. (f) Target position projected onto the input image.
Fig. 5. Free parking space detection procedure.
LSP-BASED PEDESTRIAN DETECTION
The target parking position is established by the LSP-based free space detection method and is then updated during the backing manoeuvre by odometry (7). During the parking operation, the system continuously detects LSFs and checks whether any LSF cluster in the bird's-eye view is located inside the target parking position. Fig. 6(a) shows the detected LSFs when a pedestrian is moving inside the target parking position. Fig. 6(b) shows the LSFs in the bird's-eye view and the recognized pedestrian appearing as an LSF cluster inside the target parking position.

(a) Rear-view scene with detected light stripe features. The circle denotes a pedestrian inside the target parking position. (b) Detected light stripe features in top view, with the detected pedestrian, target parking position, and planned path marked. The pedestrian denoted by the circle triggers a warning to the driver because it is inside the target parking position.
Fig. 6. Light stripe projection-based target parking position establishment and pedestrian detection.
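The inside-the-rectangle test above reduces to transforming each bird's-eye-view stripe point into the rectangle's local frame and comparing against half the width and length. The cluster-size cutoff and all parameter names are hypothetical; the paper only states that an LSF cluster inside the target position raises a warning.

```python
import math

def pedestrian_warning(points_xz, rect_center, rect_heading, width, length,
                       min_cluster=5):
    """Warn if enough stripe points lie inside the target parking rectangle.

    points_xz: reconstructed stripe points in the bird's-eye (XZ) view.
    rect_center, rect_heading, width, length describe the target
    rectangle in the same frame (heading in radians). min_cluster is a
    hypothetical noise-rejection cutoff.
    """
    cos_h, sin_h = math.cos(rect_heading), math.sin(rect_heading)
    inside = 0
    for x, z in points_xz:
        # Transform the point into the rectangle's local frame.
        dx, dz = x - rect_center[0], z - rect_center[1]
        u = dx * cos_h + dz * sin_h        # across the slot
        v = -dx * sin_h + dz * cos_h       # along the slot
        if abs(u) <= width / 2 and abs(v) <= length / 2:
            inside += 1
    return inside >= min_cluster
```

Because both the stripe points and the tracked rectangle live in the same vehicle-centred frame, no extra calibration is needed at this stage: the check runs on every frame during the backing manoeuvre.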
EXPERIMENTAL RESULTS
To verify the efficiency of the proposed method, situations with a pedestrian at different distances were investigated. Fig. 7 shows the situations and the recognized pedestrians. Fig. 7(a) shows the situation when a pedestrian entering the adjacent vehicle is inside the target parking position. Figs. 7(b) and (c) show situations when a pedestrian is on the backing path. It is noteworthy that the same method can be applied to pedestrians at various distances.

(a) When a pedestrian is entering the adjacent vehicle. (b) When a pedestrian is on the backing path. (c) When a pedestrian is just behind the back of the subject vehicle.
Fig. 7. Detected pedestrians at various distances.
CONCLUSION
Our previous paper (1) proposed a novel light stripe projection-based free parking space recognition method to overcome the common drawbacks of existing vision-based target position designation methods in dark indoor parking sites. The proposed method is expected to be a practical solution because it can be implemented simply by installing a low-cost light plane projector on an existing parking monitoring system, and it uses simple mathematics and computation to recognize the 3D information of the parking site. Various experiments show that the proposed method can successfully recognize the target parking position in spite of various illumination conditions. In this paper, we proposed that the LSP-based 3D reconstruction method can not only establish the target parking position but also detect a passing pedestrian to avoid accidents under dark illumination conditions. As the system updates the target parking position using odometry and acquires 3D information continuously, it can detect a pedestrian simply by checking whether there is a range cluster inside the target parking position.

REFERENCES
(1) Ho Gi Jung, Dong Suk Kim, Pal Joo Yoon, and Jaihie Kim, "Light Stripe Projection based Parking Space Detection for Intelligent Parking Assist System," Proceedings of the 2007 IEEE Intelligent Vehicles Symposium, Istanbul, Turkey, June 13-15, 2007.
(2) S. Wybo, R. Bendahan, S. Bougnoux, C. Vestri, F. Abad, and T. Kakinami, "Improving backing-up manoeuvre safety with vision-based movement detection," IET Intelligent Transport Systems, vol. 1, no. 2, Jun. 2007.
(3) Terry Costlow, "Shifting into active mode," Automotive Engineering International, Jun. 2007.
(4) Ho Gi Jung, Young Ha Cho, Pal Joo Yoon, and Jaihie Kim, "Scanning Laser Radar-Based Target Parking Position Designation Method for Parking Aid System," IEEE Transactions on Intelligent Transportation Systems, accepted for future publication.
(5) C. Mertz, J. Kozar, J. R. Miller, and C. Thorpe, "Eye-safe laser striper for outside use," in Proc. IEEE Intelligent Vehicle Symposium, Jun. 2002, vol. 2.
(6) Reinhard Klette, Karsten Schlüns, and Andreas Koschan, Computer Vision: Three-Dimensional Data from Images, Springer-Verlag, 1998.
(7) Chi Gun Choi, Dong Suk Kim, Ho Gi Jung, and Pal Joo Yoon, "Stereo Vision Based Parking Assist System," SAE Paper.
More informationProject 3 code & artifact due Tuesday Final project proposals due noon Wed (by ) Readings Szeliski, Chapter 10 (through 10.5)
Announcements Project 3 code & artifact due Tuesday Final project proposals due noon Wed (by email) One-page writeup (from project web page), specifying:» Your team members» Project goals. Be specific.
More informationVisual Sensor-Based Measurement for Deformable Peg-in-Hole Tasks
Proceedings of the 1999 IEEVRSJ International Conference on Intelligent Robots and Srjtems Visual Sensor-Based Measurement for Deformable Peg-in-Hole Tasks J. Y. Kim* and H. S. Cho** * Department of Robot
More informationCHAPTER 5 MOTION DETECTION AND ANALYSIS
CHAPTER 5 MOTION DETECTION AND ANALYSIS 5.1. Introduction: Motion processing is gaining an intense attention from the researchers with the progress in motion studies and processing competence. A series
More informationReal-time Stereo Vision for Urban Traffic Scene Understanding
Proceedings of the IEEE Intelligent Vehicles Symposium 2000 Dearborn (MI), USA October 3-5, 2000 Real-time Stereo Vision for Urban Traffic Scene Understanding U. Franke, A. Joos DaimlerChrylser AG D-70546
More informationCONCEPTUAL CONTROL DESIGN FOR HARVESTER ROBOT
CONCEPTUAL CONTROL DESIGN FOR HARVESTER ROBOT Wan Ishak Wan Ismail, a, b, Mohd. Hudzari Razali, a a Department of Biological and Agriculture Engineering, Faculty of Engineering b Intelligent System and
More informationOmni Stereo Vision of Cooperative Mobile Robots
Omni Stereo Vision of Cooperative Mobile Robots Zhigang Zhu*, Jizhong Xiao** *Department of Computer Science **Department of Electrical Engineering The City College of the City University of New York (CUNY)
More information3D object recognition used by team robotto
3D object recognition used by team robotto Workshop Juliane Hoebel February 1, 2016 Faculty of Computer Science, Otto-von-Guericke University Magdeburg Content 1. Introduction 2. Depth sensor 3. 3D object
More informationMonocular Vision Based Autonomous Navigation for Arbitrarily Shaped Urban Roads
Proceedings of the International Conference on Machine Vision and Machine Learning Prague, Czech Republic, August 14-15, 2014 Paper No. 127 Monocular Vision Based Autonomous Navigation for Arbitrarily
More informationROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW
ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW Thorsten Thormählen, Hellward Broszio, Ingolf Wassermann thormae@tnt.uni-hannover.de University of Hannover, Information Technology Laboratory,
More informationRectification and Distortion Correction
Rectification and Distortion Correction Hagen Spies March 12, 2003 Computer Vision Laboratory Department of Electrical Engineering Linköping University, Sweden Contents Distortion Correction Rectification
More informationMirror Based Framework for Human Body Measurement
362 Mirror Based Framework for Human Body Measurement 1 Takeshi Hashimoto, 2 Takayuki Suzuki, 3 András Rövid 1 Dept. of Electrical and Electronics Engineering, Shizuoka University 5-1, 3-chome Johoku,
More informationDepartment of Game Mobile Contents, Keimyung University, Daemyung3-Dong Nam-Gu, Daegu , Korea
Image quality enhancement of computational integral imaging reconstruction for partially occluded objects using binary weighting mask on occlusion areas Joon-Jae Lee, 1 Byung-Gook Lee, 2 and Hoon Yoo 3,
More informationFundamental Technologies Driving the Evolution of Autonomous Driving
426 Hitachi Review Vol. 65 (2016), No. 9 Featured Articles Fundamental Technologies Driving the Evolution of Autonomous Driving Takeshi Shima Takeshi Nagasaki Akira Kuriyama Kentaro Yoshimura, Ph.D. Tsuneo
More informationRealtime Omnidirectional Stereo for Obstacle Detection and Tracking in Dynamic Environments
Proc. 2001 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems pp. 31-36, Maui, Hawaii, Oct./Nov. 2001. Realtime Omnidirectional Stereo for Obstacle Detection and Tracking in Dynamic Environments Hiroshi
More information(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22)
Digital Image Processing Prof. P. K. Biswas Department of Electronics and Electrical Communications Engineering Indian Institute of Technology, Kharagpur Module Number 01 Lecture Number 02 Application
More informationEXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,
School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45
More informationIRIS SEGMENTATION OF NON-IDEAL IMAGES
IRIS SEGMENTATION OF NON-IDEAL IMAGES William S. Weld St. Lawrence University Computer Science Department Canton, NY 13617 Xiaojun Qi, Ph.D Utah State University Computer Science Department Logan, UT 84322
More informationCatadioptric camera model with conic mirror
LÓPEZ-NICOLÁS, SAGÜÉS: CATADIOPTRIC CAMERA MODEL WITH CONIC MIRROR Catadioptric camera model with conic mirror G. López-Nicolás gonlopez@unizar.es C. Sagüés csagues@unizar.es Instituto de Investigación
More informationA Robust Two Feature Points Based Depth Estimation Method 1)
Vol.31, No.5 ACTA AUTOMATICA SINICA September, 2005 A Robust Two Feature Points Based Depth Estimation Method 1) ZHONG Zhi-Guang YI Jian-Qiang ZHAO Dong-Bin (Laboratory of Complex Systems and Intelligence
More informationChapter 8: Physical Optics
Chapter 8: Physical Optics Whether light is a particle or a wave had puzzled physicists for centuries. In this chapter, we only analyze light as a wave using basic optical concepts such as interference
More informationAutomatic Pipeline Generation by the Sequential Segmentation and Skelton Construction of Point Cloud
, pp.43-47 http://dx.doi.org/10.14257/astl.2014.67.11 Automatic Pipeline Generation by the Sequential Segmentation and Skelton Construction of Point Cloud Ashok Kumar Patil, Seong Sill Park, Pavitra Holi,
More informationusing an omnidirectional camera, sufficient information for controlled play can be collected. Another example for the use of omnidirectional cameras i
An Omnidirectional Vision System that finds and tracks color edges and blobs Felix v. Hundelshausen, Sven Behnke, and Raul Rojas Freie Universität Berlin, Institut für Informatik Takustr. 9, 14195 Berlin,
More informationAutomatic Parking Space Detection and Tracking for Underground and Indoor Environments
Automatic Parking Space Detection and Tracking for Underground and Indoor Environments Jae Kyu Suhr, Member, IEEE, and Ho Gi Jung, Senior Member, IEEE Abstract Even though many public parking lots are
More informationAbsolute Scale Structure from Motion Using a Refractive Plate
Absolute Scale Structure from Motion Using a Refractive Plate Akira Shibata, Hiromitsu Fujii, Atsushi Yamashita and Hajime Asama Abstract Three-dimensional (3D) measurement methods are becoming more and
More informationcse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry
cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry Steven Scher December 2, 2004 Steven Scher SteveScher@alumni.princeton.edu Abstract Three-dimensional
More informationA 3-D Scanner Capturing Range and Color for the Robotics Applications
J.Haverinen & J.Röning, A 3-D Scanner Capturing Range and Color for the Robotics Applications, 24th Workshop of the AAPR - Applications of 3D-Imaging and Graph-based Modeling, May 25-26, Villach, Carinthia,
More informationDepth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth
Common Classification Tasks Recognition of individual objects/faces Analyze object-specific features (e.g., key points) Train with images from different viewing angles Recognition of object classes Analyze
More informationRecognition of Road Contours Based on Extraction of 3D Positions of Delineators
Proceedings of the 2007 IEEE Intelligent Transportation Systems Conference Seattle, WA, USA, Sept. 30 - Oct. 3, 2007 TuD5.2 Recognition of Road Contours Based on Extraction of 3D Positions of Delineators
More informationRange Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation
Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical
More information3B SCIENTIFIC PHYSICS
3B SCIENTIFIC PHYSICS Instruction sheet 06/18 ALF Laser Optics Demonstration Set Laser Optics Supplement Set Page 1 2 3 3 3 4 4 4 5 5 5 6 6 6 7 7 7 8 8 8 9 9 9 10 10 10 11 11 11 12 12 12 13 13 13 14 14
More informationPreceding vehicle detection and distance estimation. lane change, warning system.
Preceding vehicle detection and distance estimation for lane change warning system U. Iqbal, M.S. Sarfraz Computer Vision Research Group (COMVis) Department of Electrical Engineering, COMSATS Institute
More informationSUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS
SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract
More informationExam in DD2426 Robotics and Autonomous Systems
Exam in DD2426 Robotics and Autonomous Systems Lecturer: Patric Jensfelt KTH, March 16, 2010, 9-12 No aids are allowed on the exam, i.e. no notes, no books, no calculators, etc. You need a minimum of 20
More informationVision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy
Vision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy Gideon P. Stein Ofer Mano Amnon Shashua MobileEye Vision Technologies Ltd. MobileEye Vision Technologies Ltd. Hebrew University
More informationFace Cyclographs for Recognition
Face Cyclographs for Recognition Guodong Guo Department of Computer Science North Carolina Central University E-mail: gdguo@nccu.edu Charles R. Dyer Computer Sciences Department University of Wisconsin-Madison
More informationOBJECT detection in general has many applications
1 Implementing Rectangle Detection using Windowed Hough Transform Akhil Singh, Music Engineering, University of Miami Abstract This paper implements Jung and Schramm s method to use Hough Transform for
More information