Forward Sensing System for LKS+ACC
SAE TECHNICAL PAPER SERIES

Forward Sensing System for LKS+ACC

Ho Gi Jung, Yun Hee Lee and Pal Joo Yoon, MANDO Corporation
Jaihie Kim, Yonsei University

Reprinted From: Intelligent Vehicle Initiative (IVI) Technology Controls and Navigation Systems (SP-2193), 2008 World Congress, Detroit, Michigan, April 14-17, 2008
SAE International, 400 Commonwealth Drive, Warrendale, PA, U.S.A.
By mandate of the Engineering Meetings Board, this paper has been approved for SAE publication upon completion of a peer review process by a minimum of three (3) industry experts under the supervision of the session organizer. Copyright 2008 SAE International. All rights reserved. Positions and opinions advanced in this paper are those of the author(s) and not necessarily those of SAE. The author is solely responsible for the content of the paper. Printed in USA.
ABSTRACT

This paper discusses the market trends and advantages of a safety system integrating LKS (Lane Keeping System) and ACC (Adaptive Cruise Control), referred to as the LKS+ACC system, and proposes a method that utilizes the range data from ACC for lane detection. The overall structure of lane detection is the same as the conventional method using monocular vision: EDF (Edge Distribution Function)-based initialization, sub-ROIs (Regions Of Interest) for left/right and distance-based layers, steerable filter-based feature extraction, and model fitting in each sub-ROI. The proposed method adds only a procedure for confining the lane detection ROI to the free space established by range data. Experimental results indicate that such a simple adaptive ROI can overcome occlusion of lane markings and disturbance from neighboring vehicles.

INTRODUCTION

MARKET TRENDS OF THE LKS+ACC SYSTEM

ACC is a driver convenience system that adds headway time control, which maintains the distance to the preceding vehicle within a preset headway time, to conventional cruise control, which maintains a preset speed when there is no preceding vehicle. LKS is a driver convenience system that keeps the vehicle in its driving lane. These two systems have been developed as separate systems [1]. However, as the adoption rate of ACC rises and various marketable embedded vision systems emerge, the LKS+ACC system integrating the two functions is attracting more interest. Major Japanese automakers have already produced LKS+ACC systems. The LKS of Toyota (or Lexus) keeps the vehicle in its driving lane only while ACC is operating; if ACC is not operating, it warns the driver of lane departure by a torque pulse [2, 3]. Application vehicles include the Lexus LS460 [4] and Crown Majesta [5]. Nissan has also developed a system integrating LKS and ACC [6], which it has applied to the Cima [7].
Honda developed the HiDS (Honda Intelligent Driver Support System), integrating IHCC (Intelligent Highway Cruise Control), corresponding to ACC, and LKAS (Lane Keeping Assist System), corresponding to LKS [8]. Application vehicles are the Accord [9], Legend [10], and Inspire [11]. CHAUFFEUR II was a European project, completed in 2003, aimed at developing truck platooning and the integration of LKS and ACC. In particular, the project proposed a system integrating LKS and SDK (Smart Distance Keeping), corresponding to ACC, and named it CHAUFFEUR Assistance [12].

ADVANTAGES OF THE LKS+ACC SYSTEM

Driver's workload
A considerable portion of traffic accidents is caused by driver carelessness and improper driving maneuvers. In particular, the burden of long hours of driving fatigues drivers, resulting in traffic accidents. Although conventional ACC and LKS can each relieve the driver's workload, the LKS+ACC system is expected to provide greater relief. An analysis of the effect of CHAUFFEUR Assistance on the driver, using a driving simulator, confirmed that driving stability was enhanced and the driver's weariness was reduced compared with the separate systems [13]. Vehicle testing of the Honda HiDS showed that 88% of test subjects felt their workload was reduced, and eye gaze pattern analysis indicated that drivers with HiDS observed a wider FOV (Field Of View) [14].

Traffic system capacity
The analysis of CHAUFFEUR Assistance also showed that drivers tended to maintain a smaller headway time and to change lanes less [13]. The LKS+ACC system was found to increase traffic capacity more than separate LKS and ACC. Experts predicted that the LKS+ACC system would increase traffic capacity remarkably when the lane width is narrow [15].
Control performance
From the lane keeping perspective, if ACC is not operating, it is hard to predict TTC (Time To Cross). Conversely, if ACC controls the vehicle speed, LKS can easily design and follow a driving trajectory, so control performance is enhanced [16]. From the ACC perspective, if preceding roadway information acquired by LKS is provided, ACC can implement proper speed control considering the road shape. For example, speed control on curves realizes cruise control that suits the driver's feeling by adjusting the speed according to the curve shape, and speed control at exits reduces the driver's operating load by controlling deceleration when the car enters an exit lane [17].

Recognition performance
Using lane information acquired by LKS, ACC can recognize the preceding vehicle on a curved road. Preceding vehicle detection using only LRR (Long Range Radar) is complicated because it must eliminate noise caused by vehicle movement and vibration; radar-based obstacle recognition can be enhanced by using the image portion corresponding to the obstacle's position. Conversely, one of the major disturbances to lane detection is occlusion by the preceding vehicle, so position information about the preceding vehicle makes the lane detection algorithm simpler and more robust. Otherwise, the lane detection algorithm becomes complicated because it must handle various cases, including the preceding vehicle occluding lane markings.

ECU integration
In order to enhance the recognition performance of LKS and ACC, low-level fusion between image information and range information is essential. Low-level fusion between separate LKS and ACC units imposes an excessive traffic load on the communication channel. In order to enhance control performance, an extended vehicle model incorporating lateral and longitudinal motion is needed, and the vehicle trajectory should be designed comprehensively.
Therefore, it is expected that a single high-performance integrated ECU (Electronic Control Unit) will be implemented. Denso supplied an LKS+ACC ECU to Toyota, which in turn developed a fusion ECU that processes all sensor information, including the vision sensor, radar sensor, and Lidar sensor, and sends control commands to the active steering system and active braking system [17, 18].

ADAPTIVE ROI-BASED LANE DETECTION

The lane detection method proposed by this paper is fundamentally based on the monocular vision-based lane detection published in [19] and [20]. The forward scene is divided into three layers according to distance, and each layer is divided again into LHS (Left Hand Side) and RHS (Right Hand Side). In these six regions, lane features are searched locally. Lane feature pixels are detected by a steerable filter and approximated into a line or a parabola. The orientation of the steerable filter is initialized by peak detection of the EDF (Edge Distribution Function), and then set according to the lane feature state predicted by temporal tracking. Regions of the lowest layer are fixed, but regions of the second and third layers are set dynamically.

The conventional lane detection system works well when there is no obstacle in the vicinity. Recently, as the HDRC (High Dynamic Range CMOS) camera has been adopted, traditional problems such as driving against the sun and through tunnels have been overcome [21]. However, if the preceding vehicle occludes the lane markings or a vehicle in the adjacent lane approaches, lane features are lost or become too small, and edges of the obstacle start to disturb lane detection. To overcome such problems, ROI establishment based on precise trajectory prediction using vehicle motion sensors, and lane feature verification-based outlier rejection, have been incorporated [19]. Assuming that the disturbance of neighboring vehicles occurs because the system has no knowledge about free space, this paper proposes that simply confining the ROI to free space can efficiently prevent this disturbance. Furthermore, in the case of the LKS+ACC system, because a range sensor is already installed for the ACC function, lane detection performance can be improved without adding a sensor. Experimental results confirm that the proposed method can detect lanes successfully even in cases where conventional methods fail because of neighboring vehicles.

CONVENTIONAL SYSTEM: MONOCULAR VISION-BASED LANE DETECTION

THREE LAYERED ROI STRUCTURE

Lane markings have different shapes according to the road shape, as shown in Fig. 1. If the road is straight as in Fig. 1(a), all lane markings, both near and far, can be approximated as straight lines. If the road is curved as in Fig. 1(b), lane markings at near and far distances should be approximated as a straight line and a curve respectively.

(a) Straight road (b) Curved road
Fig. 1. Lane shape depends on road shape.
The ROI should be established such that the searching area is minimized but still contains the lane features: a desirable ROI includes the lane features and excludes image portions belonging to other objects. Considering the fact that the lane becomes smaller as distance increases, the searching area is divided into three layers whose sizes decrease gradually, and each layer is then divided into LHS and RHS. Consequently, six sub-ROIs are established. The height of the available searching area changes according to the camera configuration, and the height of each layer is defined as a ratio of the height of the available searching area. Sub-ROIs I and IV, nearest the subject vehicle, are established fixedly, and the sub-ROIs of the second and third layers are established using the lane detection result of their lower layer [20]. In other words, the detected lane of sub-ROI I determines the location of sub-ROI II, and the detected lane of sub-ROI II in turn determines the location of sub-ROI III.

Fig. 2. Three layered ROI structure.

STEERABLE FILTERING

Lanes appear as slanted edge lines in the lane searching region. If the slope of the lane feature is known a priori, the steerable filter can detect lane features more efficiently than general edge detection methods [19, 20]. The steerable filter is defined using the 2D (two dimensional) Gaussian function of (1). If the lane marking is regarded as a line having width, the second derivative is used [19]; if the inner edge of the lane marking is regarded as the lane feature, the first derivative is used [20, 22]. In our research, the first derivatives defined in (2) and (3) are used. Equation (2) is the derivative of (1) in the x-axis direction (θ = 0°) and (3) is the derivative of (1) in the y-axis direction (θ = 90°). It is noteworthy that equations (1) to (3) define 2D masks.

G(x, y) = exp(−(x² + y²)/σ²)   (1)

G₁^0° = ∂G/∂x = −(2x/σ²)·exp(−(x² + y²)/σ²)   (2)

G₁^90° = ∂G/∂y = −(2y/σ²)·exp(−(x² + y²)/σ²)   (3)

The first derivative of (1) in a specific direction θ is defined using (2) and (3) as in (4) [20, 22]. The filter defined in (4) outputs a strong response to edges perpendicular to that direction, and a weaker response as the angular difference increases. Because the possibility that edges of shadows and stains have the same orientation as the lane feature is very low, a steerable filter tuned using the a priori known lane feature direction can selectively detect lane feature pixels. Fig. 3(a) is an input image, and (b) and (c) are the outputs of the steerable filter tuned to −45° and 45° respectively.

G₁^θ = cos(θ)·G₁^0° + sin(θ)·G₁^90°   (4)

(a) Input image (b) Output (θ = −45°) (c) Output (θ = 45°)
Fig. 3. Lane feature pixels detected by tuned steerable filter.

EDGE DISTRIBUTION FUNCTION

The EDF is used to initialize the orientation parameter of the steerable filter. The EDF is the histogram of edge pixel direction with respect to angle [20, 23]. Equation (5) defines the gradient of pixel (x, y): Dx denotes the intensity variation with respect to the x-axis and Dy the intensity variation with respect to the y-axis. The gradient is approximated by the Sobel operator. With Dx and Dy, the edge direction at pixel (x, y) is defined as in (6). After the edge direction of all pixels in the ROI is calculated using (6), the EDF is constructed by accumulating pixel occurrences with respect to edge direction. Fig. 4(a) shows the Sobel operator result for Fig. 3(a) and Fig. 4(b) is the constructed EDF.
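As a concrete illustration, the basis kernels of (1)-(3) and the steered filter of (4) can be sketched in a few lines of NumPy. The kernel half-width and σ below are illustrative choices, not values given in the paper:

```python
import numpy as np

def steerable_kernels(sigma=1.5, half=4):
    """Build the first-derivative-of-Gaussian basis kernels of (2) and (3)."""
    xs = np.arange(-half, half + 1)
    X, Y = np.meshgrid(xs, xs)                 # X: horizontal axis, Y: vertical axis
    g = np.exp(-(X**2 + Y**2) / sigma**2)      # eq. (1), the 2D Gaussian
    g0 = -(2.0 * X / sigma**2) * g             # eq. (2), derivative along x
    g90 = -(2.0 * Y / sigma**2) * g            # eq. (3), derivative along y
    return g0, g90

def steer(g0, g90, theta_deg):
    """Synthesize the filter tuned to an arbitrary direction, eq. (4)."""
    t = np.deg2rad(theta_deg)
    return np.cos(t) * g0 + np.sin(t) * g90
```

Convolving an image with `steer(g0, g90, theta)` for the a priori lane direction, then thresholding the response, yields the lane feature pixels of Fig. 3.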
∇I(x, y) = (∂I/∂x, ∂I/∂y) = (Dx, Dy)   (5)

θ(x, y) = tan⁻¹(Dy / Dx)   (6)

(a) Gradient image by Sobel operator (b) EDF and detected peaks
Fig. 4. EDF construction and peak detection.

After dividing the EDF into two regions with respect to 90°, the maximum peak of each region is detected as shown in Fig. 4(b). The left portion corresponds to sub-ROI I of Fig. 2 and the right portion to sub-ROI IV. As mentioned before, the lane feature in the lowest layer can be approximated by a line, and the angle of the detected peak represents the direction of the lane feature in each sub-ROI. Therefore, the angle corresponding to the detected peak is used to initialize the orientation parameter of the steerable filter.

LANE FEATURE DETECTION

Lane feature detection consists of steerable filtering, Hough transform, inner edge point detection, and model fitting. Fig. 5 presents the procedure of initial lane feature detection. A steerable filter tuned to the a priori known lane direction, followed by binarization, detects lane feature pixels. Using these pixels, the Hough transform finds the lane feature. In the case of the lowest layer, the orientation of the steerable filter is set using the EDF; in the case of the other layers, it is set by lane feature state tracking, which will be explained subsequently.

Fig. 5. Initial lane feature detection.

The initial lane feature found by the Hough transform is a linear approximation, obtained by voting, of the pixels showing a strong response to the steerable filter tuned to a specific direction. Therefore, although it shows the overall structure of the lane features, it contains noise to some extent. By searching for edge points from the lane feature toward the image center, inner edge points are detected as shown in Fig. 6 [20].

Fig. 6. Detected inner edge points.

Inner edge points detected in the first layer are fitted into a line represented by (7). The horizontal image direction is the x-axis and the vertical image direction is the y-axis.
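The EDF construction of (5)-(6) can be sketched as follows. The Sobel convolution is hand-rolled to stay self-contained, and the 90-bin angular resolution is an assumed choice, not the paper's:

```python
import numpy as np

def edf(gray, n_bins=90):
    """Edge Distribution Function: direction histogram of eqs. (5)-(6)."""
    # Sobel gradients Dx, Dy via explicit 3x3 correlation on the valid region.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    H, W = gray.shape
    Dx = np.zeros((H - 2, W - 2))
    Dy = np.zeros((H - 2, W - 2))
    for i in range(3):
        for j in range(3):
            win = gray[i:i + H - 2, j:j + W - 2]
            Dx += kx[i, j] * win
            Dy += ky[i, j] * win
    mag = np.hypot(Dx, Dy)
    theta = np.degrees(np.arctan2(Dy, Dx)) % 180.0     # eq. (6), folded to [0, 180)
    # Accumulate occurrences of pixels with non-zero gradient per direction bin.
    hist, edges = np.histogram(theta, bins=n_bins, range=(0.0, 180.0),
                               weights=(mag > 0).astype(float))
    return hist, edges

def edf_peaks(hist, edges):
    """Split the EDF at 90 deg and return the peak angle of each half."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    left = centers < 90.0
    return (centers[left][np.argmax(hist[left])],
            centers[~left][np.argmax(hist[~left])])
```

The two returned peak angles initialize the steerable filter orientation for sub-ROIs I and IV respectively.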
The cross-point of the line and the border between the first and second layers is used as the center x coordinate of the second-layer sub-ROI [20].

y = a·x + b   (7)

In the second-layer sub-ROI, lane feature pixels are detected by the steerable filter and then inner edge points are detected. The detected inner edge points are fitted into a curve represented by (8). The cross-point of the curve defined by quadratic fitting and the border between the second and third layers is used as the center x coordinate of the third-layer sub-ROI [20].

y = a·x² + b·x + c   (8)
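The model fitting of (7)-(8) and the propagation of the sub-ROI center to the next layer reduce to ordinary least-squares polynomial fits; a minimal sketch (the helper names are ours, not the paper's):

```python
import numpy as np

def fit_line(xs, ys):
    """Fit inner edge points to y = a*x + b, eq. (7); returns (a, b)."""
    a, b = np.polyfit(xs, ys, 1)
    return a, b

def fit_parabola(xs, ys):
    """Fit inner edge points to y = a*x^2 + b*x + c, eq. (8); returns (a, b, c)."""
    return np.polyfit(xs, ys, 2)

def next_roi_center_x(a, b, y_border):
    """Cross-point of the fitted line (7) with the layer border y = y_border,
    used as the center x coordinate of the next layer's sub-ROI."""
    return (y_border - b) / a
```

For the third layer, the cross-point would be found from the quadratic of (8) instead, solving a·x² + b·x + c = y_border for x within the image.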
Fig. 7. Dynamically established second and third sub-ROIs.

LANE FEATURE TRACKING

The left and right lines are determined by fitting the inner edge points detected in the three layers, for the LHS and RHS of the image respectively. The orientation and offset of the left lane and the orientation and offset of the right lane are then used as the lane feature state. The lane feature state is tracked by a Kalman filter so that it is robust to external disturbance. The tracked state is used to set the orientation parameter of the steerable filter in the next frame, and it also serves as the lane information for lane keeping control and preceding vehicle recognition. Using the lane feature state instead of the instantaneous lane feature detected in each frame prevents performance degradation when a lane marking is disconnected or occluded by neighboring vehicles. Fig. 8 presents an example of the tracked lane feature state.

Fig. 8. Tracked lane feature state is the output of lane detection.

RANGE DATA-BASED ROI ESTABLISHMENT

According to a recently published survey of vision-based lane detection, such systems generally consist of five components: road marking extraction, post-processing, road modeling, vehicle modeling, and position tracking [19]. Reviewing the development direction of each component, one common objective can be identified. The main challenge of road marking extraction is overcoming external disturbances such as shadows and stains and focusing only on the lane feature; the steerable filter used in this paper was developed to improve lane detection performance by focusing on edges having the expected orientation. Post-processing aims to eliminate falsely detected lane features caused by external disturbance, using a priori knowledge about the road and lane.
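The lane feature state tracking described under LANE FEATURE TRACKING can be sketched as a Kalman filter with a constant-state (identity) process model. The state layout, noise values, and class name below are our assumptions for illustration; the paper does not specify them:

```python
import numpy as np

class LaneStateKF:
    """Kalman filter over an assumed state [orient_L, offset_L, orient_R, offset_R]."""
    def __init__(self, q=1e-3, r=1e-1):
        n = 4
        self.x = np.zeros(n)        # tracked lane feature state
        self.P = np.eye(n)          # state covariance
        self.Q = q * np.eye(n)      # process noise (state assumed constant per frame)
        self.R = r * np.eye(n)      # measurement noise

    def step(self, z):
        """One predict/update cycle with the instantaneous lane measurement z."""
        P = self.P + self.Q                      # predict: x unchanged, P grows
        K = P @ np.linalg.inv(P + self.R)        # Kalman gain (measurement H = I)
        self.x = self.x + K @ (np.asarray(z) - self.x)
        self.P = (np.eye(4) - K) @ P
        return self.x
```

When a frame yields no usable measurement (e.g. heavy occlusion), the update step would simply be skipped, which is what lets the tracked state bridge disconnected or occluded markings.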
Road modeling, vehicle modeling, and position tracking aim to efficiently narrow the searching area by formularizing the lane marking shape, the vehicle motion, and the lane marking motion. In other words, they are developed to establish the ROI only in the region where the lane feature is expected to appear in the next frame, considering the current position of the lane marking, the vehicle motion, and the lane marking structure. Consequently, external disturbance can be ignored and lane detection performance improved. The common objective of component development is thus minimizing the effect of external disturbance. We pay attention to the fact that external disturbance is inevitable because it is caused by the dimension reduction from the 3D world to the 2D image. This means that once the external disturbance can be identified in advance, complicated post-processing and modeling can be simplified. Assuming the most important external disturbances to lane detection are neighboring objects, including the preceding vehicle, adjacent vehicles, and the guide rail, it can be expected that simply confining the lane feature searching area to the free space ensured by range data will improve lane detection performance. When the preceding vehicle approaches close to the subject vehicle, it occludes lane markings, and the edges of its appearance can be falsely detected as lane features. Because the side surface edges of an adjacent vehicle are almost parallel to the lane marking, they can be falsely recognized as lane features when the adjacent vehicle approaches close to the subject vehicle or a wrongly established ROI is used. The shadow of an adjacent vehicle causes many problems even when the adjacent vehicle does not approach closely. In particular, a cutting-in vehicle is an external disturbance that is hard to identify, as it is related to the update speed of lane feature tracking (i.e. the response time).
However, it is found that once the road surface covered by vehicles is rejected using range data, lane detection can simply ignore all edges generated by the appearance of neighboring objects. Furthermore, it is noteworthy that this procedure can be implemented by a simple operation: finding the image positions corresponding to the range data and masking the corresponding area off from the ROI. When the input image pixel coordinates are denoted by (xi, yi) and the world coordinates of the range sensor are denoted by (Xw, Yw, Zw), the two coordinate systems are related by the homography H as in (9) and (10).

[Xb, Yb, Zb]ᵀ = H·[xi, yi, 1]ᵀ   (9)

Xw = Xb / Zb,  Zw = Yb / Zb   (10)
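The mapping of (9)-(10), and its inverse used to mask occupied road surface out of the ROI, can be sketched as follows. The numeric homography in the test is an arbitrary invertible matrix chosen only to exercise the round trip; a real H comes from the camera geometry of (11):

```python
import numpy as np

def image_to_ground(H, xi, yi):
    """Map an image pixel to road-surface coordinates via eqs. (9)-(10).
    H is the 3x3 ground-plane homography (camera-specific, see eq. (11))."""
    Xb, Yb, Zb = H @ np.array([xi, yi, 1.0])
    return Xb / Zb, Yb / Zb            # (Xw, Zw): lateral and longitudinal position

def ground_to_image(H, Xw, Zw):
    """Inverse mapping: project a range-sensor point lying on Yw = 0 into the
    image, so the occupied road surface can be masked off from the ROI."""
    p = np.linalg.inv(H) @ np.array([Xw, Zw, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

In practice `np.linalg.inv(H)` would be computed once per calibration, and every clustered range point would be projected with it to draw the free-space border in image coordinates.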
In order to acquire coordinates on the road surface, Yw is set to 0. The homography H of (9) is then defined as in (11), where hc denotes the camera height, f the focal length, and φ and θ the yaw angle and tilt angle of the camera respectively [24].

H = [ −hc·cosφ    −hc·sinφ·sinθ    −f·hc·sinφ·cosθ ]
    [  hc·sinφ    −hc·cosφ·sinθ    −f·hc·cosφ·cosθ ]   (11)
    [  0            cosθ            −f·sinθ         ]

Fig. 9(a) shows range data acquired by the scanning laser radar. The range data is acquired in the polar coordinate system and then transformed into the Cartesian coordinate system. Fig. 9(b) shows the range data projected onto the input image. It is observable that the positions where vehicles and the guide rail meet the road surface are successfully detected. However, the range data is disconnected in several positions and contains noise.

(a) Range data acquired by scanning laser radar (b) Range data projected onto input image
Fig. 9. Acquired range data in world coordinate system and image coordinate system.

Clustering the range data eliminates disconnections and noise. Scanning consecutive range data points, if two points are farther apart than a threshold, e.g. 50 cm, they are recognized as a border between two clusters. Among the recognized clusters, clusters with too few points or too short a length are eliminated, and the deleted region is interpolated from adjacent clusters. The area below the border line consisting of the recognized range data clusters and the sky line is recognized as free space, to which the lane feature searching region is confined. Fig. 10 is an example of recognized free space. Each of the six sub-ROIs is defined by a rectangle whose four corners are established to be located in the free space.

Fig. 10. Recognized free space.

EXPERIMENTAL RESULTS

In order to verify the feasibility of the proposed range data-based adaptive ROI establishment, we installed a scanning laser radar and a camera on a test vehicle and compared the lane detection performance of the proposed and conventional methods.

(a) Input image (b) Lane feature pixels in LHS by conventional method (c) Lane detection result by conventional method (d) Lane feature pixels in LHS by proposed method (e) Lane detection result by proposed method
Fig. 11. Comparison when adjacent vehicle approaches.
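The clustering and free-space confinement described above can be sketched as follows; the minimum cluster size and the column-wise border representation are our assumptions, kept deliberately simple:

```python
import numpy as np

def cluster_ranges(points, gap=0.5, min_pts=3):
    """Split consecutive Cartesian scan points into clusters wherever the gap
    between neighbors exceeds the threshold (e.g. 0.5 m), then drop clusters
    with too few points, as in the paper's noise rejection step."""
    clusters, current = [], [points[0]]
    for prev, cur in zip(points, points[1:]):
        if np.hypot(cur[0] - prev[0], cur[1] - prev[1]) > gap:
            clusters.append(current)
            current = []
        current.append(cur)
    clusters.append(current)
    return [c for c in clusters if len(c) >= min_pts]

def free_space_mask(border_row_per_col, height):
    """Boolean image mask: True for pixels below the projected obstacle border
    (image rows grow downward), i.e. the free road surface the ROI is kept in."""
    rows = np.arange(height)[:, None]
    return rows > np.asarray(border_row_per_col)[None, :]
```

Intersecting each rectangular sub-ROI with this mask implements the confinement: any sub-ROI corner falling outside the free space is pulled back inside it.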
Fig. 11 shows that the proposed adaptive ROI can overcome the disturbance of an adjacent vehicle. Fig. 11(a) displays the input image, and (b) and (d) show the lane feature pixels detected in the LHS of the input image by the conventional and proposed methods respectively. It is observable that the bottom edge of the adjacent vehicle looks similar to a lane feature. Fig. 11(c) and (e) show the lane feature state detected by the conventional and proposed methods respectively. The left lane feature, wrongly detected by the conventional method in Fig. 11(c), is correctly detected by the proposed method in Fig. 11(e). It is noteworthy that neighboring vehicles are excluded from the free space depicted in Fig. 11(e).

Fig. 12 and Fig. 13 show examples where the proposed method overcomes problems caused by the preceding vehicle. Fig. 12 is a situation where the lane marking is disconnected at the current location and the preceding vehicle occludes the remaining lane marking, so that there is no useful information about the left lane feature. The proposed method recognizes that there is no useful information and maintains the tracked lane feature state to output proper lane information.

(a) Input image (b) Lane feature pixels in LHS by conventional method (c) Lane detection result by conventional method (d) Lane feature pixels in LHS by proposed method (e) Lane detection result by proposed method
Fig. 12. Comparison when the preceding vehicle occludes left lane markings wholly.

Fig. 13 is a situation with few observable lane markings. As the proposed method eliminates the image portion occupied by the preceding vehicle, it can focus on the observable lane markings; conversely, the conventional method fails because of the vehicle's edges.

(a) Input image (b) Lane feature pixels in LHS by conventional method (c) Lane detection result by conventional method (d) Lane feature pixels in LHS by proposed method (e) Lane detection result by proposed method
Fig. 13. Comparison when few lane markings are observable.

Fig. 14 demonstrates that the proposed method can successfully detect lanes in various situations. Fig. 14(a) is a situation with wide free space in front of the subject vehicle. Fig. 14(b) and (c) show situations with many shadows on the road surface. Fig. 14(d) shows a situation where a cutting-in vehicle occludes the right lane markings; in this case, although the lane markings in the near area are occluded, the tracked lane feature state helps find the lane markings in the far area.
(a) General case (b) With tree shadow (c) With wall shadow (d) With cutting-in vehicle
Fig. 14. Successful cases.

Although the proposed method overcomes various problems experienced by the conventional method, it cannot yet cope with a road surface bearing many traffic signs. Fig. 15 shows a situation in which the proposed method fails to detect lane markings because of traffic signs on the road surface.

Fig. 15. Failure because of traffic signs on road surface.

CONCLUSION

This paper proposes a method that prevents the external disturbance caused by neighboring vehicles by confining the lane detection ROI to the free space confirmed by range data. The main contribution of this paper is showing that a range sensor can enhance lane detection performance and simplify the lane detection algorithm. In particular, the proposed approach of confining the ROI based on range data can be implemented over CAN (Controller Area Network) communication even if ACC and LKS are implemented in separate ECUs, as in conventional implementations. Therefore, this approach requires only small changes and is easy to adopt. Future works are 1) a countermeasure to overcome the disturbance of traffic signs on the road surface, and 2) replacement of the high angular resolution scanning laser radar used in this paper with a Lidar or radar of lower angular resolution.

REFERENCES

1. Richard Bishop, Intelligent Vehicle Technology and Trends, Artech House Inc.
2. Toyota, Environmental & Social Report 2004.
3. The Tundra Solutions, Fourth-Generation Lexus Flagship Luxury Sedan Features.
4. Lexus Europe, LS460 - Advanced Safety - Lane Keeping Assist.
5. The Auto Channel, Toyota Crown Majesta Undergoes Complete Redesign.
6. University of Twente, Summary seminar: combination and/or integration of longitudinal and lateral support.
7. Nissan, Nissan Releases All-New Cima.
8. Honda, Safety for Everyone in Our Mobile Society.
9. PistonHeads, Honda ADAS.
10. Honda, Honda Introduces the All-New Legend.
11. Honda, Honda Announces a Full Model Change for the Inspire.
12. Hans Fritz (DaimlerChrysler), et al., "CHAUFFEUR Assistant: A Driver Assistance System for Commercial Vehicles based on Fusion of Advanced ACC and Lane Keeping," 2004 IEEE Intelligent Vehicles Symposium.
13. Jeroen Hogema (TNO), Driving Behavior Effects of the Chauffeur Assistant.
14. Richard Bishop (Bishop Consulting), Societal Benefits of In-Car Technology.
15. Bart van Arem and Govert Schermers, Exploration of the Traffic Flow Impacts of Combined Lateral and Longitudinal Support.
16. J. H. Cho, H. K. Nam, and W. S. Lee, "Driver Behavior with Adaptive Cruise Control," International Journal of Automotive Technology, Vol. 7, No. 5, 2006.
17. Denso, Sensing System.
18. Denso, 11th ITS World Congress Exhibited Product Lineup.
19. Joel C. McCall and Mohan M. Trivedi, "Video-Based Lane Estimation and Tracking for Driver Assistance: Survey, System, and Evaluation," IEEE Transactions on Intelligent Transportation Systems, Vol. 7, No. 1, March 2006.
20. Guo Lei, Li Keqiang, Wang Jianqiang, and Lian Xiaomin, "A Robust Lane Detection Method Using Steerable Filters," Proceedings of AVEC '06 (The 8th International Symposium on Advanced Vehicle Control), August 20-24, 2006.
21. B. Hoefflinger, High-Dynamic-Range (HDR) Vision, Springer Berlin Heidelberg, 2007.
22. K. Mineta, "Development of a Lane Mark Recognition System for a Lane Keeping Assist System," SAE Technical Paper.
23. M. Nishida, S. Kawakami, and A. Watanabe, "Development of Lane Recognition Algorithm for Steering Assistance System," SAE Technical Paper.
24. C. R. Jung and C. R. Kelber, "A Robust Linear-Parabolic Model for Lane Following," Proceedings of the XVII Brazilian Symposium on Computer Graphics and Image Processing, Curitiba, Brazil, 2004.

CONTACT

Corresponding author: Ho Gi Jung, MANDO Corporation Global R&D H.Q., Gomae-Dong, Giheung-Gu, Yongin-Si, Kyonggi-Do, Republic of Korea. hgjung@mando.com, hgjung@yonsei.ac.kr
More informationAdvanced Driver Assistance Systems: A Cost-Effective Implementation of the Forward Collision Warning Module
Advanced Driver Assistance Systems: A Cost-Effective Implementation of the Forward Collision Warning Module www.lnttechservices.com Table of Contents Abstract 03 Introduction 03 Solution Overview 03 Output
More informationDetecting and recognizing centerlines as parabolic sections of the steerable filter response
Detecting and recognizing centerlines as parabolic sections of the steerable filter response Petar Palašek, Petra Bosilj, Siniša Šegvić Faculty of Electrical Engineering and Computing Unska 3, 10000 Zagreb
More informationA threshold decision of the object image by using the smart tag
A threshold decision of the object image by using the smart tag Chang-Jun Im, Jin-Young Kim, Kwan Young Joung, Ho-Gil Lee Sensing & Perception Research Group Korea Institute of Industrial Technology (
More informationPreceding vehicle detection and distance estimation. lane change, warning system.
Preceding vehicle detection and distance estimation for lane change warning system U. Iqbal, M.S. Sarfraz Computer Vision Research Group (COMVis) Department of Electrical Engineering, COMSATS Institute
More informationIntegrated Vehicle and Lane Detection with Distance Estimation
Integrated Vehicle and Lane Detection with Distance Estimation Yu-Chun Chen, Te-Feng Su, Shang-Hong Lai Department of Computer Science, National Tsing Hua University,Taiwan 30013, R.O.C Abstract. In this
More informationA Road Marking Extraction Method Using GPGPU
, pp.46-54 http://dx.doi.org/10.14257/astl.2014.50.08 A Road Marking Extraction Method Using GPGPU Dajun Ding 1, Jongsu Yoo 1, Jekyo Jung 1, Kwon Soon 1 1 Daegu Gyeongbuk Institute of Science and Technology,
More informationLane Markers Detection based on Consecutive Threshold Segmentation
ISSN 1746-7659, England, UK Journal of Information and Computing Science Vol. 6, No. 3, 2011, pp. 207-212 Lane Markers Detection based on Consecutive Threshold Segmentation Huan Wang +, Mingwu Ren,Sulin
More informationDesigning Automotive Subsystems Using Virtual Manufacturing and Distributed Computing
SAE TECHNICAL PAPER SERIES 2008-01-0288 Designing Automotive Subsystems Using Virtual Manufacturing and Distributed Computing William Goodwin and Amar Bhatti General Motors Corporation Michael Jensen Synopsys,
More informationDETECTION OF STREET-PARKING VEHICLES USING LINE SCAN CAMERA. Kiyotaka HIRAHARA, Mari MATSUDA, Shunsuke KAMIJO Katsushi IKEUCHI
DETECTION OF STREET-PARKING VEHICLES USING LINE SCAN CAMERA Kiyotaka HIRAHARA, Mari MATSUDA, Shunsuke KAMIJO Katsushi IKEUCHI Institute of Industrial Science, University of Tokyo 4-6-1 Komaba, Meguro-ku,
More informationEE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm
EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant
More informationAn Improvement of the Occlusion Detection Performance in Sequential Images Using Optical Flow
, pp.247-251 http://dx.doi.org/10.14257/astl.2015.99.58 An Improvement of the Occlusion Detection Performance in Sequential Images Using Optical Flow Jin Woo Choi 1, Jae Seoung Kim 2, Taeg Kuen Whangbo
More informationReal-Time Detection of Road Markings for Driving Assistance Applications
Real-Time Detection of Road Markings for Driving Assistance Applications Ioana Maria Chira, Ancuta Chibulcutean Students, Faculty of Automation and Computer Science Technical University of Cluj-Napoca
More informationDevelopment of MADYMO Models of Passenger Vehicles for Simulating Side Impact Crashes
SAE TECHNICAL PAPER SERIES 1999-01-2885 Development of MADYMO Models of Passenger Vehicles for Simulating Side Impact Crashes B. R. Deshpande, T. J. Gunasekar, V. Gupta and S. Jayaraman EASi S. M. Summers
More informationCover Page. Abstract ID Paper Title. Automated extraction of linear features from vehicle-borne laser data
Cover Page Abstract ID 8181 Paper Title Automated extraction of linear features from vehicle-borne laser data Contact Author Email Dinesh Manandhar (author1) dinesh@skl.iis.u-tokyo.ac.jp Phone +81-3-5452-6417
More informationChange detection using joint intensity histogram
Change detection using joint intensity histogram Yasuyo Kita National Institute of Advanced Industrial Science and Technology (AIST) Information Technology Research Institute AIST Tsukuba Central 2, 1-1-1
More informationA Street Scene Surveillance System for Moving Object Detection, Tracking and Classification
A Street Scene Surveillance System for Moving Object Detection, Tracking and Classification Huei-Yung Lin * and Juang-Yu Wei Department of Electrical Engineering National Chung Cheng University Chia-Yi
More informationOption Driver Assistance. Product Information
Product Information Table of Contents 1 Overview... 3 1.1 Introduction... 3 1.2 Features and Advantages... 3 1.3 Application Areas... 4 1.4 Further Information... 5 2 Functions... 5 3 Creating the Configuration
More informationSHRP 2 Safety Research Symposium July 27, Site-Based Video System Design and Development: Research Plans and Issues
SHRP 2 Safety Research Symposium July 27, 2007 Site-Based Video System Design and Development: Research Plans and Issues S09 Objectives Support SHRP2 program research questions: Establish crash surrogates
More informationTxDOT Video Analytics System User Manual
TxDOT Video Analytics System User Manual Product 0-6432-P1 Published: August 2012 1 TxDOT VA System User Manual List of Figures... 3 1 System Overview... 4 1.1 System Structure Overview... 4 1.2 System
More informationSnap-DAS: A Vision-based Driver Assistance System on a Snapdragon TM Embedded Platform
Snap-DAS: A Vision-based Driver Assistance System on a Snapdragon TM Embedded Platform Ravi Kumar Satzoda, Sean Lee, Frankie Lu, and Mohan M. Trivedi Abstract In the recent years, mobile computing platforms
More information2 OVERVIEW OF RELATED WORK
Utsushi SAKAI Jun OGATA This paper presents a pedestrian detection system based on the fusion of sensors for LIDAR and convolutional neural network based image classification. By using LIDAR our method
More informationAUTOMATED GENERATION OF VIRTUAL DRIVING SCENARIOS FROM TEST DRIVE DATA
F2014-ACD-014 AUTOMATED GENERATION OF VIRTUAL DRIVING SCENARIOS FROM TEST DRIVE DATA 1 Roy Bours (*), 1 Martijn Tideman, 2 Ulrich Lages, 2 Roman Katz, 2 Martin Spencer 1 TASS International, Rijswijk, The
More informationVision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy
Vision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy Gideon P. Stein Ofer Mano Amnon Shashua MobileEye Vision Technologies Ltd. MobileEye Vision Technologies Ltd. Hebrew University
More informationLearning the Three Factors of a Non-overlapping Multi-camera Network Topology
Learning the Three Factors of a Non-overlapping Multi-camera Network Topology Xiaotang Chen, Kaiqi Huang, and Tieniu Tan National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy
More informationVideo Based Lane Estimation and Tracking for Driver Assistance: Survey, System, and Evaluation
SUBMITTED FOR REVIEW: IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, DECEMBER 2004, REVISED JULY 2005 1 Video Based Lane Estimation and Tracking for Driver Assistance: Survey, System, and Evaluation
More informationTransactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN
ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information
More informationMachine learning based automatic extrinsic calibration of an onboard monocular camera for driving assistance applications on smart mobile devices
Technical University of Cluj-Napoca Image Processing and Pattern Recognition Research Center www.cv.utcluj.ro Machine learning based automatic extrinsic calibration of an onboard monocular camera for driving
More informationEvaluating optical flow vectors through collision points of object trajectories in varying computergenerated snow intensities for autonomous vehicles
Eingebettete Systeme Evaluating optical flow vectors through collision points of object trajectories in varying computergenerated snow intensities for autonomous vehicles 25/6/2018, Vikas Agrawal, Marcel
More informationMATHEMATICAL IMAGE PROCESSING FOR AUTOMATIC NUMBER PLATE RECOGNITION SYSTEM
J. KSIAM Vol.14, No.1, 57 66, 2010 MATHEMATICAL IMAGE PROCESSING FOR AUTOMATIC NUMBER PLATE RECOGNITION SYSTEM SUNHEE KIM, SEUNGMI OH, AND MYUNGJOO KANG DEPARTMENT OF MATHEMATICAL SCIENCES, SEOUL NATIONAL
More informationHuman Detection. A state-of-the-art survey. Mohammad Dorgham. University of Hamburg
Human Detection A state-of-the-art survey Mohammad Dorgham University of Hamburg Presentation outline Motivation Applications Overview of approaches (categorized) Approaches details References Motivation
More informationMoving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial Region Segmentation
IJCSNS International Journal of Computer Science and Network Security, VOL.13 No.11, November 2013 1 Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial
More informationAN ADAPTIVE MESH METHOD FOR OBJECT TRACKING
AN ADAPTIVE MESH METHOD FOR OBJECT TRACKING Mahdi Koohi 1 and Abbas Shakery 2 1 Department of Computer Engineering, Islamic Azad University, Shahr-e-Qods Branch,Tehran,Iran m.kohy@yahoo.com 2 Department
More informationDetection and Classification of Vehicles
Detection and Classification of Vehicles Gupte et al. 2002 Zeeshan Mohammad ECG 782 Dr. Brendan Morris. Introduction Previously, magnetic loop detectors were used to count vehicles passing over them. Advantages
More informationMULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES
MULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES Mehran Yazdi and André Zaccarin CVSL, Dept. of Electrical and Computer Engineering, Laval University Ste-Foy, Québec GK 7P4, Canada
More informationReal-Time Lane Departure and Front Collision Warning System on an FPGA
Real-Time Lane Departure and Front Collision Warning System on an FPGA Jin Zhao, Bingqian ie and inming Huang Department of Electrical and Computer Engineering Worcester Polytechnic Institute, Worcester,
More informationMobile Human Detection Systems based on Sliding Windows Approach-A Review
Mobile Human Detection Systems based on Sliding Windows Approach-A Review Seminar: Mobile Human detection systems Njieutcheu Tassi cedrique Rovile Department of Computer Engineering University of Heidelberg
More informationSensory Augmentation for Increased Awareness of Driving Environment
Sensory Augmentation for Increased Awareness of Driving Environment Pranay Agrawal John M. Dolan Dec. 12, 2014 Technologies for Safe and Efficient Transportation (T-SET) UTC The Robotics Institute Carnegie
More information2 BAYESIAN NETWORK ESTIMATION
Naoki KAWASAKI In this paper, a new architecture for sensor fusion for Advanced Driver Assistant Systems (ADAS) is proposed. This architecture is based on Bayesian Network and plays the role of platform
More informationChapter 2 Trajectory and Floating-Car Data
Chapter 2 Trajectory and Floating-Car Data Measure what is measurable, and make measurable what is not so. Galileo Galilei Abstract Different aspects of traffic dynamics are captured by different measurement
More informationAuto-Digitizer for Fast Graph-to-Data Conversion
Auto-Digitizer for Fast Graph-to-Data Conversion EE 368 Final Project Report, Winter 2018 Deepti Sanjay Mahajan dmahaj@stanford.edu Sarah Pao Radzihovsky sradzi13@stanford.edu Ching-Hua (Fiona) Wang chwang9@stanford.edu
More informationLaserscanner Based Cooperative Pre-Data-Fusion
Laserscanner Based Cooperative Pre-Data-Fusion 63 Laserscanner Based Cooperative Pre-Data-Fusion F. Ahlers, Ch. Stimming, Ibeo Automobile Sensor GmbH Abstract The Cooperative Pre-Data-Fusion is a novel
More informationAUTOMATIC PARKING OF SELF-DRIVING CAR BASED ON LIDAR
AUTOMATIC PARKING OF SELF-DRIVING CAR BASED ON LIDAR Bijun Lee a, Yang Wei a, I. Yuan Guo a a State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University,
More informationHOUGH TRANSFORM CS 6350 C V
HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM The problem: Given a set of points in 2-D, find if a sub-set of these points, fall on a LINE. Hough Transform One powerful global method for detecting edges
More informationLane Departure and Front Collision Warning Using a Single Camera
Lane Departure and Front Collision Warning Using a Single Camera Huei-Yung Lin, Li-Qi Chen, Yu-Hsiang Lin Department of Electrical Engineering, National Chung Cheng University Chiayi 621, Taiwan hylin@ccu.edu.tw,
More informationPreceding Vehicle Detection and Tracking Adaptive to Illumination Variation in Night Traffic Scenes Based on Relevance Analysis
Sensors 2014, 14, 15325-15347; doi:10.3390/s140815325 Article OPEN ACCESS sensors ISSN 1424-8220 www.mdpi.com/journal/sensors Preceding Vehicle Detection and Tracking Adaptive to Illumination Variation
More informationRoad Sign Detection and Tracking from Complex Background
Road Sign Detection and Tracking from Complex Background By Chiung-Yao Fang( 方瓊瑤 ), Sei-Wang Chen( 陳世旺 ), and Chiou-Shann Fuh( 傅楸善 ) Department of Information and Computer Education National Taiwan Normal
More informationLane Detection using Fuzzy C-Means Clustering
Lane Detection using Fuzzy C-Means Clustering Kwang-Baek Kim, Doo Heon Song 2, Jae-Hyun Cho 3 Dept. of Computer Engineering, Silla University, Busan, Korea 2 Dept. of Computer Games, Yong-in SongDam University,
More informationRobust lane lines detection and quantitative assessment
Robust lane lines detection and quantitative assessment Antonio López, Joan Serrat, Cristina Cañero, Felipe Lumbreras Computer Vision Center & Computer Science Dept. Edifici O, Universitat Autònoma de
More informationA Simple Automated Void Defect Detection for Poor Contrast X-ray Images of BGA
Proceedings of the 3rd International Conference on Industrial Application Engineering 2015 A Simple Automated Void Defect Detection for Poor Contrast X-ray Images of BGA Somchai Nuanprasert a,*, Sueki
More informationJournal of Applied Research and Technology ISSN: Centro de Ciencias Aplicadas y Desarrollo Tecnológico.
Journal of Applied Research and Technology ISSN: 665-643 jart@aleph.cinstrum.unam.mx Centro de Ciencias Aplicadas y Desarrollo Tecnológico México Wu, C. F.; Lin, C. J.; Lin, H. Y.; Chung, H. Adjacent Lane
More informationMoving Object Detection for Video Surveillance
International OPEN ACCESS Journal Of Modern Engineering Research (IJMER) Moving Object Detection for Video Surveillance Abhilash K.Sonara 1, Pinky J. Brahmbhatt 2 1 Student (ME-CSE), Electronics and Communication,
More informationSensor Data Fusion for Active Safety Systems
Sensor Data Fusion for Active Safety Systems 2010-01-2332 Published 10/19/2010 Jorge Sans Sangorrin, Jan Sparbert, Ulrike Ahlrichs and Wolfgang Branz Robert Bosch GmbH Oliver Schwindt Robert Bosch LLC
More informationComputers and Mathematics with Applications. Vision-based vehicle detection for a driver assistance system
Computers and Mathematics with Applications 61 (2011) 2096 2100 Contents lists available at ScienceDirect Computers and Mathematics with Applications journal homepage: www.elsevier.com/locate/camwa Vision-based
More informationDeveloping an intelligent sign inventory using image processing
icccbe 2010 Nottingham University Press Proceedings of the International Conference on Computing in Civil and Building Engineering W Tizani (Editor) Developing an intelligent sign inventory using image
More informationCONTRIBUTION TO THE INVESTIGATION OF STOPPING SIGHT DISTANCE IN THREE-DIMENSIONAL SPACE
National Technical University of Athens School of Civil Engineering Department of Transportation Planning and Engineering Doctoral Dissertation CONTRIBUTION TO THE INVESTIGATION OF STOPPING SIGHT DISTANCE
More informationFingerprint Mosaicking by Rolling with Sliding
Fingerprint Mosaicking by Rolling with Sliding Kyoungtaek Choi, Hunjae Park, Hee-seung Choi and Jaihie Kim Department of Electrical and Electronic Engineering,Yonsei University Biometrics Engineering Research
More informationSUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS
SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract
More informationA NOVEL LANE FEATURE EXTRACTION ALGORITHM BASED ON DIGITAL INTERPOLATION
17th European Signal Processing Conference (EUSIPCO 2009) Glasgow, Scotland, August 24-28, 2009 A NOVEL LANE FEATURE EXTRACTION ALGORITHM BASED ON DIGITAL INTERPOLATION Yifei Wang, Naim Dahnoun, and Alin
More informationAUTOMATIC DRAWING FOR TRAFFIC MARKING WITH MMS LIDAR INTENSITY
AUTOMATIC DRAWING FOR TRAFFIC MARKING WITH MMS LIDAR INTENSITY G. Takahashi a, H. Takeda a, Y. Shimano a a Spatial Information Division, Kokusai Kogyo Co., Ltd., Tokyo, Japan - (genki_takahashi, hiroshi1_takeda,
More informationResearch on Evaluation Method of Video Stabilization
International Conference on Advanced Material Science and Environmental Engineering (AMSEE 216) Research on Evaluation Method of Video Stabilization Bin Chen, Jianjun Zhao and i Wang Weapon Science and
More informationparco area delle Scienze, 181A via Ferrata, , Parma 27100, Pavia
Proceedings of the IEEE Intelligent Vehicles Symposium 2000 Dearbon (MI), USA October 3-5, 2000 Stereo Vision-based Vehicle Detection M. Bertozzi 1 A. Broggi 2 A. Fascioli 1 S. Nichele 2 1 Dipartimento
More informationMap Guided Lane Detection Alexander Döbert 1,2, Andre Linarth 1,2, Eva Kollorz 2
Map Guided Lane Detection Alexander Döbert 1,2, Andre Linarth 1,2, Eva Kollorz 2 1 Elektrobit Automotive GmbH, Am Wolfsmantel 46, 91058 Erlangen, Germany {AndreGuilherme.Linarth, Alexander.Doebert}@elektrobit.com
More informationComputer Aided Drafting, Design and Manufacturing Volume 26, Number 2, June 2016, Page 18
Computer Aided Drafting, Design and Manufacturing Volume 26, Number 2, June 2016, Page 18 CADDM The recognition algorithm for lane line of urban road based on feature analysis Xiao Xiao, Che Xiangjiu College
More informationIMPROVING ADAS VALIDATION WITH MBT
Sophia Antipolis, French Riviera 20-22 October 2015 IMPROVING ADAS VALIDATION WITH MBT Presented by Laurent RAFFAELLI ALL4TEC laurent.raffaelli@all4tec.net AGENDA What is an ADAS? ADAS Validation Implementation
More informationAvailable online at ScienceDirect. Procedia Computer Science 22 (2013 )
Available online at www.sciencedirect.com ScienceDirect Procedia Computer Science 22 (2013 ) 945 953 17 th International Conference in Knowledge Based and Intelligent Information and Engineering Systems
More informationPedestrian Detection with Radar and Computer Vision
Pedestrian Detection with Radar and Computer Vision camera radar sensor Stefan Milch, Marc Behrens, Darmstadt, September 25 25 / 26, 2001 Pedestrian accidents and protection systems Impact zone: 10% opposite
More informationDesign Considerations And The Impact of CMOS Image Sensors On The Car
Design Considerations And The Impact of CMOS Image Sensors On The Car Intuitive Automotive Image Sensors To Promote Safer And Smarter Driving Micron Technology, Inc., has just introduced a new image sensor
More informationIdle Object Detection in Video for Banking ATM Applications
Research Journal of Applied Sciences, Engineering and Technology 4(24): 5350-5356, 2012 ISSN: 2040-7467 Maxwell Scientific Organization, 2012 Submitted: March 18, 2012 Accepted: April 06, 2012 Published:
More informationOn Road Vehicle Detection using Shadows
On Road Vehicle Detection using Shadows Gilad Buchman Grasp Lab, Department of Computer and Information Science School of Engineering University of Pennsylvania, Philadelphia, PA buchmag@seas.upenn.edu
More informationLand & Lee (1994) Where do we look when we steer
Automobile Steering Land & Lee (1994) Where do we look when we steer Eye movements of three subjects while driving a narrow dirt road with tortuous curves around Edinburgh Scotland. Geometry demanded almost
More informationDetection and Motion Planning for Roadside Parked Vehicles at Long Distance
2015 IEEE Intelligent Vehicles Symposium (IV) June 28 - July 1, 2015. COEX, Seoul, Korea Detection and Motion Planning for Roadside Parked Vehicles at Long Distance Xue Mei, Naoki Nagasaka, Bunyo Okumura,
More informationRobotics Project. Final Report. Computer Science University of Minnesota. December 17, 2007
Robotics Project Final Report Computer Science 5551 University of Minnesota December 17, 2007 Peter Bailey, Matt Beckler, Thomas Bishop, and John Saxton Abstract: A solution of the parallel-parking problem
More informationA Fast Circular Edge Detector for the Iris Region Segmentation
A Fast Circular Edge Detector for the Iris Region Segmentation Yeunggyu Park, Hoonju Yun, Myongseop Song, and Jaihie Kim I.V. Lab. Dept. of Electrical and Computer Engineering, Yonsei University, 134Shinchon-dong,
More informationResearch Article. ISSN (Print) *Corresponding author Chen Hao
Scholars Journal of Engineering and Technology (SJET) Sch. J. Eng. Tech., 215; 3(6):645-65 Scholars Academic and Scientific Publisher (An International Publisher for Academic and Scientific Resources)
More informationVehicle Localization. Hannah Rae Kerner 21 April 2015
Vehicle Localization Hannah Rae Kerner 21 April 2015 Spotted in Mtn View: Google Car Why precision localization? in order for a robot to follow a road, it needs to know where the road is to stay in a particular
More informationTHE POSITION AND ORIENTATION MEASUREMENT OF GONDOLA USING A VISUAL CAMERA
THE POSITION AND ORIENTATION MEASUREMENT OF GONDOLA USING A VISUAL CAMERA Hwadong Sun 1, Dong Yeop Kim 1 *, Joon Ho Kwon 2, Bong-Seok Kim 1, and Chang-Woo Park 1 1 Intelligent Robotics Research Center,
More informationFree Space Detection on Highways using Time Correlation between Stabilized Sub-pixel precision IPM Images
Free Space Detection on Highways using Time Correlation between Stabilized Sub-pixel precision IPM Images Pietro Cerri and Paolo Grisleri Artificial Vision and Intelligent System Laboratory Dipartimento
More informationThe Safe State: Design Patterns and Degradation Mechanisms for Fail- Operational Systems
The Safe State: Design Patterns and Degradation Mechanisms for Fail- Operational Systems Alexander Much 2015-11-11 Agenda About EB Automotive Motivation Comparison of different architectures Concept for
More informationTypes of Edges. Why Edge Detection? Types of Edges. Edge Detection. Gradient. Edge Detection
Why Edge Detection? How can an algorithm extract relevant information from an image that is enables the algorithm to recognize objects? The most important information for the interpretation of an image
More informationOn Performance Evaluation Metrics for Lane Estimation
On Performance Evaluation Metrics for Lane Estimation Ravi Kumar Satzoda and Mohan M. Trivedi Laboratory for Intelligent and Safe Automobiles, University of California San Diego, La Jolla, CA-92093 Email:
More informationHOUGH TRANSFORM FOR INTERIOR ORIENTATION IN DIGITAL PHOTOGRAMMETRY
HOUGH TRANSFORM FOR INTERIOR ORIENTATION IN DIGITAL PHOTOGRAMMETRY Sohn, Hong-Gyoo, Yun, Kong-Hyun Yonsei University, Korea Department of Civil Engineering sohn1@yonsei.ac.kr ykh1207@yonsei.ac.kr Yu, Kiyun
More informationSmall-scale objects extraction in digital images
102 Int'l Conf. IP, Comp. Vision, and Pattern Recognition IPCV'15 Small-scale objects extraction in digital images V. Volkov 1,2 S. Bobylev 1 1 Radioengineering Dept., The Bonch-Bruevich State Telecommunications
More information