NOVEL USER INTERFACE FOR SEMI-AUTOMATIC PARKING ASSISTANCE SYSTEM

F2006D130T

1,2 Jung, Ho Gi; 1 Kim, Dong Suk*; 1 Yoon, Pal Joo; 2 Kim, Jaihie
1 MANDO Corporation, Republic of Korea; 2 Yonsei University, Republic of Korea

KEYWORDS
Automatic parking assistance system, target position designation, drag&drop user interface, computer vision, driver convenience system

ABSTRACT
This paper proposes a novel user interface for a semi-automatic parking assistance system, which automates steering during the parking manoeuvre. In spite of recent progress in automatic target position designation, manual designation is expected to retain two important roles. First, manual designation can be used to refine a target position established by an automatic designation method. Second, manual designation is necessary as a backup for the automatic designation method. The proposed user interface provides an easy-to-use manual designation method based on the drag&drop concept. The target position is depicted as a rectangle on a touch-screen-based HMI (Human Machine Interface). The driver can move the rectangle by dragging its inside and rotate it by dragging its outside. We compare the proposed method with a multiple-arrow based method, which provides several arrow buttons to move and rotate the target position, by measuring the total operation time and the number of clicks. We conclude that the proposed method shortens the operation time and reduces the number of clicks.

TECHNICAL PAPER

INTRODUCTION
A semi-automatic parking system is a driver convenience system that automates the steering control required during a parking manoeuvre. Because drivers' interest in parking assist systems has recently increased drastically, car manufacturers and component suppliers are developing various kinds of parking assist systems (1)(2). Fig. 1 shows the configuration of the semi-automatic parking system currently being developed. The system consists of six components: Electric Power Steering (EPS) for active steering, a vision sensor acquiring the rear-view image, ultrasonic sensors measuring distances to nearby side/rear obstacles, a touch-screen-based Human Machine Interface (HMI) providing information to the driver and receiving commands from the driver, an Electric Parking Brake (EPB) automatically activating the parking brake, and a processing computer. The algorithms running on the processing computer consist of three components: target parking position designation, path planning, and a path tracker that continuously estimates the current position and controls the steering system to follow the planned path.

There are many kinds of methods for target parking position designation: manual designation, range-sensor based methods, GPS based methods and vision based methods. The Prius Intelligent Parking Assist System (IPAS), mass-produced by Toyota and AISIN SEIKI in 2003, is an example of the manual designation method (3). Range-sensor based methods are mainly used for parallel parking; the most common range sensor is the ultrasonic sensor (4)(5), and there is also research using a laser scanner (6)(7) or mm-wave radar (8)(9). The GPS based method makes a path plan and then tracks it with GPS and a local digital map (10). Recently, vision based methods have attracted more and more interest because a vision sensor is already installed and is inexpensive compared to mm-wave radar.

Marking based methods establish the target position by recognizing parking slot markings (11)(12), and object based methods establish it by recognizing adjacent vehicles (13)(14).

Fig. 1. System configuration of the semi-automatic parking system

In spite of the rapid progress of automatic target position designation, manual designation is expected to retain two important roles. First, manual designation can be used to refine a target position established by an automatic designation method. In general, the parking system provides a rear-view image to help the driver understand the ongoing parking operation; Fig. 2 shows a typically installed rear-view camera and user interface. Furthermore, the system needs to receive the driver's confirmation of the automatically established target position, and at that moment the driver can naturally refine the target position with the manual designation method. Second, manual designation is necessary as a backup for the automatic designation method. Because the sensors used for automatic designation have their own weaknesses, the recognition result cannot always be perfect. If the system gives the driver a chance to modify the target position manually, faults of the automatic designation method can be corrected without serious inconvenience.

(a) rear-view camera  (b) touch-screen-based HMI
Fig. 2. Typical installation of camera and HMI

This paper proposes a novel manual designation method that improves driver comfort by shortening the operation time and eliminating repetitive operations. The basic idea is based on the drag&drop operation, which is familiar to PC users.

The target position is depicted as a rectangle on the touch-screen-based HMI. The driver can move the rectangle by dragging its inside and rotate it by dragging its outside. To verify the feasibility of this method, experiments with multiple participants are conducted. In the experiments, we consider two kinds of views, i.e. the distorted view and the bird's eye view, and two kinds of situations, i.e. garage parking and parallel parking. We compare the proposed method with the multiple-arrow based method, which provides several arrow buttons to move and rotate the target position, by measuring the total operation time and the number of clicks. We conclude that the proposed method shortens the operation time and reduces the number of clicks.

DRAG&DROP BASED USER INTERFACE

Three Coordinate Systems
The proposed system compensates the fisheye lens distortion of the input image and constructs a bird's eye view image using a homography. The installed rear-view camera uses a fisheye, or wide-angle, lens to cover a wide Field Of View (FOV) during the parking procedure. As shown in Fig. 3, the input image captured through the fisheye lens covers a wide range of the rear scene but inevitably contains severe distortion. It is well known that the major component of fisheye lens distortion is radial distortion, which is defined in terms of the distance from the image centre (15). Modelling the radial distortion with a 5th-order polynomial using the Caltech calibration toolbox and approximating its inverse mapping by another 5th-order polynomial, the proposed system acquires an undistorted image as shown in Fig. 3 (16). The homography, which defines a one-to-one correspondence between coordinates in the undistorted image and coordinates in the bird's eye view image, can be calculated from the height and angle of the camera with respect to the ground surface (12). A bird's eye view is the virtual image taken from the sky under the assumption that all objects lie on the ground surface. The general pinhole camera model causes perspective distortion, by which the size of an object's image changes with its distance from the camera. Because the bird's eye view image eliminates the perspective distortion of objects lying on the ground surface, it is well suited to recognizing objects painted on the ground. The final image in Fig. 3 is the bird's eye view image of the undistorted image.

Fig. 3. Construction procedure of the bird's eye view image
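As an illustration of the chain of mappings just described, the following Python sketch carries a single point from the distorted input image to the bird's eye view. The image centre, the inverse-distortion polynomial coefficients and the homography H below are placeholder values, not the calibrated parameters of the actual system.

# Minimal sketch of the distorted-pixel -> undistorted-pixel -> bird's-eye-view
# mapping. Coefficients and H are hypothetical placeholders; in the real system
# they come from the Caltech calibration toolbox and from the camera height/angle.
import numpy as np

# Hypothetical 5th-order polynomial approximating the INVERSE radial mapping:
# undistorted radius r_u as a function of distorted radius r_d (in pixels).
INV_DIST_COEFFS = [0.0, 1.0, 0.0, 2.5e-7, 0.0, 1.0e-13]   # placeholder values
CENTER = np.array([320.0, 240.0])                          # image centre (placeholder)

# Hypothetical homography from the undistorted image to the bird's eye view,
# derived in practice from the camera height and tilt angle.
H = np.array([[1.0, 0.0,   0.0],
              [0.0, 2.0, -80.0],
              [0.0, 0.005, 1.0]])

def undistort_point(p_dist):
    """Map a distorted pixel to its undistorted location (radial model only)."""
    v = np.asarray(p_dist, dtype=float) - CENTER
    r_d = np.hypot(v[0], v[1])
    if r_d < 1e-9:
        return CENTER.copy()
    r_u = sum(c * r_d**k for k, c in enumerate(INV_DIST_COEFFS))
    return CENTER + v * (r_u / r_d)

def to_birds_eye(p_undist):
    """Apply the homography and normalize the homogeneous coordinates."""
    x, y, w = H @ np.array([p_undist[0], p_undist[1], 1.0])
    return np.array([x / w, y / w])

def input_to_birds_eye(p_dist):
    """Full chain: input (distorted) image -> undistorted image -> bird's eye view."""
    return to_birds_eye(undistort_point(p_dist))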

Drag&drop concept
The target position is a rectangle in the world coordinate system, or equivalently the bird's eye view image coordinate system. The target position is managed by its 2D location (Xw, Zw) and its angle φ with respect to the Xw-axis. The width and length of the target position rectangle are determined from the ego-vehicle's width and length. Through the radial distortion model and the homography, a point in the bird's eye view image coordinate system corresponds to a point in the distorted (input) image coordinate system. Therefore, by converting all coordinates into the bird's eye view image coordinate system, every operation can be implemented uniformly in a single coordinate system. The target position rectangle and the user input are handled in the bird's eye view image coordinate system and then converted to the appropriate coordinate system according to the display mode.

The target position rectangle displayed on the touch-screen-based HMI acts as a cursor while the driver is establishing the target position. The inside region of the rectangle is used as a moving cursor: the driver can move the target position by dragging the inside, as shown in Fig. 4(a). The outside region of the rectangle is used as a rotating cursor: the driver can rotate the target position, i.e. change its angle, by dragging the outside, as shown in Fig. 4(b). Three kinds of operations are needed: 1) a method determining whether the driver's input, i.e. the pointing point, is inside the target position rectangle or not; 2) calculation of the translation transformation from two consecutive driver inputs; and 3) calculation of the rotation transformation from two consecutive driver inputs. A sketch of how these operations fit together as a drag handler is given after the figure caption below.

(a) Moving by dragging the inside of the rectangle  (b) Rotating by dragging the outside of the rectangle
Fig. 4. Target position rectangle as moving and rotating cursor
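As an illustration only (the paper gives no code), the sketch below shows how the three operations could be wired into a simple touch-drag handler. The geometric operations themselves are supplied as callables and are detailed in the following sections; the class and parameter names are hypothetical.

# Illustrative drag handler combining the three operations listed above.
# inside_test, translate and rotate are injected callables whose geometry is
# described in the Mode Selection, Translation and Rotation sections below.
class TargetDragHandler:
    def __init__(self, target_rect, inside_test, translate, rotate):
        self.target = target_rect      # rectangle in bird's eye view coordinates
        self.inside_test = inside_test # (point, rect) -> bool
        self.translate = translate     # (rect, p1, p2) -> rect
        self.rotate = rotate           # (rect, p1, p2) -> rect
        self.mode = None               # "move" or "rotate"
        self.last_point = None

    def on_touch_down(self, point):
        # Mode selection: a drag starting inside the rectangle moves it,
        # a drag starting outside rotates it.
        self.mode = "move" if self.inside_test(point, self.target) else "rotate"
        self.last_point = point

    def on_touch_move(self, point):
        # Each new point, together with the previous one, yields one incremental
        # translation or rotation of the target rectangle.
        if self.mode == "move":
            self.target = self.translate(self.target, self.last_point, point)
        else:
            self.target = self.rotate(self.target, self.last_point, point)
        self.last_point = point

    def on_touch_up(self, _point):
        self.mode, self.last_point = None, None

In the HMI, on_touch_down, on_touch_move and on_touch_up would be driven directly by the touch-screen events.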

Mode Selection
Whether a point is inside a rectangle can be determined by checking whether the point lies on the same side of all four rectangle sides, taken in the winding direction. In this application the absolute arrangement of the four corner points cannot be fixed in advance, because the rectangle may be rotated; only their order along the winding direction is known. Let C1, C2, C3 and C4 be the four corner points of the rectangle in the winding direction and T the user's pointing point. For each side we form the cross product of two vectors, e.g. C1C2 and C1T, as depicted in Fig. 5. If the z-components of all four cross products have the same sign, the point T lies inside the rectangle, as shown in Fig. 6(a). Conversely, if any z-component has a different sign, the point T lies outside the rectangle, as shown in Fig. 6(b).

Fig. 5. Cross product between a rectangle side and a corner-to-pointing-point vector
(a) All four cross products have the same direction  (b) One cross product has a different direction
Fig. 6. Determining whether a point is inside a rectangle

Translation of Target Position
A translation transformation is applied equally to every point of the target rectangle. Therefore, a new target position can be determined by adding the difference vector between two consecutive user input points, P1 and P2, to the current target position, as shown in Fig. 7(a).

Rotation of Target Position
A rotation transformation about the centre point C is applied equally to every point of the target rectangle. Therefore, a new target position can be determined by rotating the current target position about C by the between-angle θ of the two consecutive user input points, as shown in Fig. 7(b).
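A minimal Python sketch of these three operations, assuming the target rectangle is represented by its four corner points in winding order in the bird's eye view coordinate system:

# Illustrative implementation of the inside test, translation and rotation
# described above; written as a sketch of the method, not the paper's code.
import math

def cross_z(o, a, b):
    """z-component of (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_rectangle(t, corners):
    """True if t lies on the same side of all four sides C1C2, C2C3, C3C4, C4C1."""
    signs = [cross_z(corners[i], corners[(i + 1) % 4], t) for i in range(4)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

def translate_target(corners, p1, p2):
    """Shift every corner by the difference vector between consecutive inputs."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return [(x + dx, y + dy) for x, y in corners]

def rotate_target(corners, p1, p2):
    """Rotate every corner about the rectangle centre C by the between-angle of
    the two consecutive user inputs, measured from C."""
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    theta = math.atan2(p2[1] - cy, p2[0] - cx) - math.atan2(p1[1] - cy, p1[0] - cx)
    c, s = math.cos(theta), math.sin(theta)
    return [(cx + c * (x - cx) - s * (y - cy),
             cy + s * (x - cx) + c * (y - cy)) for x, y in corners]

These three functions play the roles of the inside_test, translate and rotate callables assumed in the drag-handler sketch above.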

(a) Translation vector from the difference vector  (b) Rotation angle from the between-angle
Fig. 7. Transformation calculation

EXPERIMENTAL RESULTS
To verify the efficiency of the proposed method, we measure the operation time and the number of clicks, and compare the drag&drop based method with the multiple-arrow based method. For garage parking, the operation time is reduced by 17.6% and the number of clicks by 64.2%. For parallel parking, the operation time is reduced by 29.4% and the number of clicks by 75.1%.

Experiment Method
Before the test, we briefly explain the operating instructions for the two methods: the drag&drop based method and the multiple-arrow based method. The multiple-arrow based method is similar to the user interface of the first-generation Prius: there are 10 arrow buttons, 8 for translation and 2 for rotation. Every participant establishes target positions for 8 situations with both methods. Of these, 4 situations are garage parking and the other 4 are parallel parking; within each group of 4, 2 situations are tested in the bird's eye view image and the other 2 in the distorted image. Figs. 8-11 show situations 1-8. A total of 50 volunteers participate in the test. The average age is 30.1 years, within the range 22-42; 41 participants are male and 9 are female. Every participant conducts the test only once, and the test order between the drag&drop based method and the multiple-arrow based method is randomized.

Fig. 8. Garage parking cases in the bird's eye view image (situations 1 and 2, each shown with the drag&drop and multiple-arrow methods)
Fig. 9. Garage parking cases in the distorted image (situations 3 and 4, each shown with the drag&drop and multiple-arrow methods)
Fig. 10. Parallel parking cases in the bird's eye view image (situations 5 and 6, each shown with the drag&drop and multiple-arrow methods)

Fig. 11. Parallel parking cases in the distorted image (situations 7 and 8, each shown with the drag&drop and multiple-arrow methods)

Test Result
Table 1 shows the average operation time over the 4 garage parking situations: the drag&drop based method reduces the operation time by 17.6%. Table 2 shows the average operation time over the 4 parallel parking situations: the drag&drop based method reduces the operation time by 29.4%.

Table 1. Operation time average of garage parking situations
(columns: Situation No.; Drag&Drop (A); Multiple arrow (B); Enhancement (B-A)/B in %)
Average enhancement: 17.6

Table 2. Operation time average of parallel parking situations
(columns: Situation No.; Drag&Drop (A); Multiple arrow (B); Enhancement (B-A)/B in %)
Average enhancement: 29.4
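Reading the Enhancement column of Tables 1-4 as the relative reduction achieved by the drag&drop method (A) with respect to the multiple-arrow method (B), the figure can be computed as in the short sketch below; the example values are hypothetical and are not the paper's measurements.

# Relative enhancement as presumably used in Tables 1-4: the reduction achieved
# by the drag&drop method (A) relative to the multiple-arrow method (B).
def enhancement_percent(a, b):
    """(B - A) / B * 100, i.e. how much smaller A is than B, in percent."""
    return (b - a) / b * 100.0

example_time_dragdrop = 12.0   # seconds, hypothetical value
example_time_arrows = 16.0     # seconds, hypothetical value
print(f"Enhancement: {enhancement_percent(example_time_dragdrop, example_time_arrows):.1f}%")
# -> Enhancement: 25.0%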

Table 3 shows the average number of clicks over the 4 garage parking situations: the drag&drop based method reduces the number of clicks by 64.2%. Table 4 shows the average number of clicks over the 4 parallel parking situations: the drag&drop based method reduces the number of clicks by 75.1%. A reduction in the number of clicks means a reduction of repetitive operation. Many participants rated this as the most important advantage of the proposed drag&drop method, because repetitive clicking is a truly tedious job.

Table 3. Clicking number average of garage parking situations
(columns: Situation No.; Drag&Drop (A); Multiple arrow (B); Enhancement (B-A)/B in %)
Average enhancement: 64.2

Table 4. Clicking number average of parallel parking situations
(columns: Situation No.; Drag&Drop (A); Multiple arrow (B); Enhancement (B-A)/B in %)
Average enhancement: 75.1

It is noticeable that there is no clear tendency with respect to the view: there is no definite difference between the distorted image cases and the bird's eye view image cases. However, for the parallel parking situations in the bird's eye view image, many participants complained about the low quality of the bird's eye view image; to make the method more practical, the bird's eye view image needs to be enhanced. Finally, we find that to implement the drag&drop method successfully, the sensitivity of the touch screen should be improved, because the pressing force generally drops during the dragging operation.

CONCLUSION
In this paper, we propose a novel manual target designation method based on the drag&drop concept. The target position is displayed as a rectangle, and the driver can move the target by dragging its inside and rotate it by dragging its outside. Through experiments, we confirm that the proposed method reduces the operation time and the number of clicks. The major contribution is that with the proposed method the driver can quickly establish the target position and avoid tedious repetitive clicking. Future work includes enhancing the image quality of the bird's eye view for parallel parking and improving the sensitivity of the touch screen.

REFERENCES
(1) Richard Bishop, Intelligent Vehicle Technology and Trends, Artech House Pub., 2005

(2) Randy Frank, Sensing in the Ultimately Safe Vehicle, Society of Automotive Engineers, SAE Paper, 2004
(3) Masayuki Furutani, Obstacle Detection Systems for Vehicle Safety, Society of Automotive Engineers, SAE Paper, 2004
(4) Wei Chia Lee and Torsten Bertram, Driver Centered Design of an Advanced Parking Assistance, 5th European Congress and Exhibition on ITS and Services, 2005
(5) J. Pohl, M. Sethsson, P. Degerman, and J. Larsson, A semi-automated parallel parking system for passenger cars, Proc. IMechE, Vol. 220, Part D: J. Automobile Engineering, 2006
(6) Alexander Schanz, Andreas Spieker, and Klaus-Dieter Kuhnert, Autonomous Parking in Subterranean Garages - A Look at the Position Estimation, IEEE Intelligent Vehicles Symposium 2003, 2003
(7) Christopher Tay Meng Keat, Cédric Pradalier, and Christian Laugier, Vehicle Detection and Car Park Mapping Using Laser Scanner, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), 2005
(8) Stefan Görner and Hermann Rohling, Parking Lot Detection with 24 GHz Radar Sensor, 3rd International Workshop on Intelligent Transportation (WIT 2006), 2006
(9) M. Klotz and H. Rohling, A high range resolution radar system network for parking aid applications, 5th International Conference on Radar Systems, 1999
(10) Masaaki Wada, Kang Sup Yoon, and Hideki Hashimoto, Development of Advanced Parking Assistance System, IEEE Transactions on Industrial Electronics, Vol. 50, No. 1, pages 4-17, 2003
(11) Jin Xu, Guang Chen, and Ming Xie, Vision-Guided Automatic Parking for Smart Car, IEEE Intelligent Vehicles Symposium 2000, 2000
(12) H. G. Jung, D. S. Kim, P. J. Yoon, and J. H. Kim, 3D Vision System for the Recognition of Free Parking Site Location, International Journal of Automotive Technology, Vol. 7, No. 3, 2006
(13) Nico Kaempchen, Uwe Franke, and Rainer Ott, Stereo vision based pose estimation of parking lots using 3D vehicle models, IEEE Intelligent Vehicles Symposium 2002, Vol. 2, 2002
(14) C. Vestri, S. Bougnoux, R. Bendahan, K. Fintzel, S. Wybo, F. Abad, and T. Kakinami, Evaluation of a Point Tracking Vision System for Parking Assistance, 12th World Congress on ITS, 2005
(15) J. Salvi, X. Armangué, and J. Batlle, A comparative review of camera calibration methods with accuracy evaluation, Pattern Recognition, Vol. 35, 2002
(16) J. Y. Bouguet, Camera Calibration Toolbox for Matlab

Q: There is a problem in controlling the position of a car, known as the non-holonomic constraint. Your research concerns the design of an HMI for car parking, but the problem is related to position control. If the user input is unacceptable with respect to this constraint, how does your HMI respond to it?
A: We developed an algorithm that decides whether the current target position is proper, i.e. reachable, or not, and we express this state by changing the colour of the target position rectangle. The driver therefore keeps dragging the target position until it becomes feasible.
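The answer does not specify the feasibility criterion; purely as an illustration, the colour feedback could take the following form, with the actual reachability test (for example a minimum-turning-radius or path-planning check) supplied as a callable.

# Illustrative sketch (not the paper's algorithm) of the colour feedback
# described in the answer: an injected feasibility test decides whether the
# current target rectangle is reachable, and the rectangle colour shows it.
FEASIBLE_COLOUR = (0, 255, 0)     # e.g. green when the target is reachable
INFEASIBLE_COLOUR = (255, 0, 0)   # e.g. red when it is not

def target_colour(target_rect, is_reachable):
    """Return the display colour for the target rectangle; is_reachable is a
    callable implementing the (unspecified) non-holonomic feasibility check."""
    return FEASIBLE_COLOUR if is_reachable(target_rect) else INFEASIBLE_COLOUR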
