Snap-DAS: A Vision-based Driver Assistance System on a Snapdragon TM Embedded Platform


Snap-DAS: A Vision-based Driver Assistance System on a Snapdragon TM Embedded Platform

Ravi Kumar Satzoda, Sean Lee, Frankie Lu, and Mohan M. Trivedi

Abstract — In recent years, mobile computing platforms have become increasingly cheaper and yet more powerful in terms of computational resources. Automobiles provide a suitable environment in which to deploy such mobile platforms in order to provide low-cost driver assistance systems. In this paper, we propose Snap-DAS, a vision-based driver assistance system implemented on a Snapdragon TM embedded platform. A forward-facing camera combined with the Snapdragon TM platform constitutes Snap-DAS. The compute-efficient implementation of the LASeR lane estimation algorithm in [1] is exploited to implement a set of lane-related functions on Snap-DAS, which include lane drift warning and lane change event detection. A detailed evaluation is performed on live data, and Snap-DAS is also field tested on freeways. Furthermore, we explore the possibility of using Snap-DAS for analyzing drives for online naturalistic driving studies.

I. INTRODUCTION

Mobile computing processors and chipsets have made great advances in the past decade in both computing speed and power consumption [2]. These advances have resulted in the increased usage of embedded electronic systems in modern automobiles. In particular, the use of embedded intelligent driver assistance systems has seen significant popularity [3], [4]. One key requirement in the realization of advanced driver assistance systems (ADAS) is that they must be highly accurate and dependable. Higher accuracy often requires computationally more complex algorithms [5], resulting in higher power consumption. However, embedded processing platforms are resource constrained, particularly in terms of battery power and computational speed/frequency [5]. Therefore, the design of ADAS for embedded platforms requires an understanding of the trade-off between computational performance (speed and power consumption) and accuracy.

Among ADAS, vision-based sensing in particular is gaining popularity in recent times [6]. This is because of the ever-decreasing cost of camera sensors and the miniaturization of cameras, which enable ubiquitous incorporation of cameras in vehicles [7]. However, vision-based algorithms also involve data-intensive processing, which is a challenge to implement on power- and speed-constrained embedded platforms [5]. Most vision-based algorithms for driver assistance are usually prototyped on powerful personal computers, and later implemented and translated to embedded hardware systems. This approach can lead to a mismatch in the actual performance of the ADAS. Therefore, there is an increasing need to innovate vision-based driver assistance systems (DAS) at the algorithmic level such that they are architecture-aware for implementation on embedded platforms [2], [8].

In this paper, we present an ADAS platform that is implemented on the Snapdragon TM embedded computing processor [9], which is widely used in mobile platforms such as mobile phones, tablets, etc. We call the proposed driver assistance solution the Snap-DAS platform, which is designed to assist the driver by issuing warnings during lane drifting and lane changes.

(All authors are with the Laboratory of Safe and Intelligent Vehicles, University of California San Diego, La Jolla, CA. rsatzoda@eng.ucsd.edu, yhl014@ucsd.edu, frlu@ucsd.edu, mtrivedi@ucsd.edu.)
The work presented in this paper is an initial step in the direction of developing embedded platforms for ADAS. To the best of our knowledge from the available literature (academic and non-academic), this is the first work on implementing ADAS on a Snapdragon TM mobile processor. Snap-DAS is evaluated during real-world field trials and shows promising possibilities in terms of embedded realization of ADAS. We also explore the possibility of analyzing driving semantics during the drive as part of online naturalistic driving studies [4], [10].

II. SNAP-DAS PLATFORM: HARDWARE SETUP

The proposed Snap-DAS platform is primarily an embedded platform for driver assistance systems (DAS) that employs the Snapdragon TM 600 processor. The Snap-DAS platform uses an Inforce 6410 development board whose main processing unit is a Qualcomm Snapdragon TM 600 processor running at 1.7 GHz. Unlike PCs, which have large computing processors and high clock speeds of more than 2.5 GHz, the Snapdragon TM is a more resource-constrained processor in terms of computing capabilities and clock speed. However, this particular processor is found in many mobile phones and is representative of commercially available consumer products. Snap-DAS uses a Linaro-based flavor of Linux as the operating system, which is optimized for ARM processors such as the Snapdragon TM. The vision algorithms are written in C/C++ and are optimized to run on multiple threads in order to utilize the full computing speed of the processor's four cores.

In terms of the sensing modalities, Snap-DAS is fitted with a forward camera (Logitech C920 webcam) that is connected to the development board using the on-board USB ports. Although the camera is capable of capturing video frames at resolutions up to 1920 x 1080, Snap-DAS currently captures the input video at a reduced resolution in order to provide data capture frame rates of about 17 frames per second (real-time data capture). The entire hardware setup of Snap-DAS is shown in Fig. 1.
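As an illustration of this capture configuration, a minimal loop of the kind described above might look as follows. It assumes OpenCV is available on the Linaro build (the paper only states that the algorithms are written in C/C++); the device index and the 640 x 480 target resolution are placeholders rather than the actual Snap-DAS settings.

```cpp
// Minimal sketch of a Snap-DAS-style capture loop (assumes OpenCV on the Linaro build).
// Device index 0 and the 640x480 target resolution are illustrative placeholders.
#include <opencv2/opencv.hpp>
#include <chrono>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);                        // Logitech C920 on a USB port
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 640);         // capture below the native 1920x1080
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 480);        // to keep the frame rate near real time
    if (!cap.isOpened()) { std::cerr << "camera not found\n"; return 1; }

    cv::Mat frame;
    int frames = 0;
    auto t0 = std::chrono::steady_clock::now();
    while (cap.read(frame)) {
        // ... hand the frame to the lane-analysis thread(s) here ...
        if (++frames % 100 == 0) {
            double dt = std::chrono::duration<double>(
                            std::chrono::steady_clock::now() - t0).count();
            std::cout << "capture rate: " << frames / dt << " fps\n";
        }
    }
    return 0;
}
```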

Fig. 1. Snap-DAS platform setup: (a) Snapdragon development board, (b) camera (top of windshield) and Snapdragon setup, (c) overall Snap-DAS platform setup with the camera, processing board and display unit.

In terms of functionality, Snap-DAS currently caters for operations that are related to ego-vehicle localization within the lane. It is to be noted that this is the first work in the context of ADAS on Snapdragon TM processors, and work is currently underway to incorporate more functionality on this processor. In the current setup, we particularly look at the forward view from the ego-vehicle, and driver assistance is provided with regard to lane detection on input video streams.

III. LANE ANALYSIS ON SNAP-DAS

The Snap-DAS platform is equipped with a variety of driver assistance operations that are related to lane analysis from the forward view of the ego-vehicle. Lane detection is first applied on the input image, and the resulting lane positions are used to perform the following functions: lane drift detection and lane change detection. In order to perform these functions, we employ the lane estimation algorithm called LASeR (lane analysis using selective regions), which was particularly designed for embedded realization in [1], [2]. Although LASeR was designed as a lane detection algorithm, we use it in different ways to perform the driver assistance operations listed above in Snap-DAS.

A. Lane Estimation on Snap-DAS

Before going into the details of the functions of Snap-DAS, the LASeR algorithm [1] is briefly discussed below for the sake of completeness; more details about LASeR can be found in [1]. Unlike most existing lane estimation methods, which process the entire image (or a region of interest (RoI) below the vanishing line in the image), selected bands are used in LASeR to detect lane features, as shown in Fig. 2.

Fig. 2. Block diagram illustrating the LASeR algorithm.

An image I is first converted into its inverse perspective mapping (IPM) [11] image I_W, which provides a top view of the road scene. In this IPM image, N_B scan bands, each of height h_B pixels, are selected along the vertical y-axis, i.e., along the road from the ego-vehicle. Each band B_i is then convolved with a vertical filter that is derived from steerable filters [1]. The vertical filter is given by

    G_0(x,y) = -(2x / σ²) e^{-(x² + y²)/σ²}     (1)

where G(x,y) = e^{-(x² + y²)/σ²} is a 2D Gaussian function. In LASeR, G_0(x,y) is a 5 x 5 2D filter. Therefore, the filter response B_i^S for each scan band B_i is given by

    B_i^S = B_i * G_0(x,y).     (2)

B_i^S from each band is then analyzed for lane features by thresholding with two thresholds to generate two binary maps E^+ and E^- for each band. Vertical projections p^+ and p^- are computed from E^+ and E^-, which are then used to detect lane features using a shift-and-match operation (SMO) in the following way:

    K = (p^+ << δ) ⊙ p^-     (3)

where ⊙ represents point-wise multiplication and δ is the amount of left shift (denoted by << above) that the vector p^+ undergoes. The SMO allows us to detect adjacent light-to-dark and dark-to-light transitions that characterize lane features.
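For concreteness, a minimal sketch of this per-band feature extraction follows. It assumes the IPM image is available as a single-channel floating-point OpenCV matrix (OpenCV itself is an assumption; the paper only states the code is C/C++), and the value of σ, the thresholds, and the shift δ are illustrative placeholders rather than the parameters used in [1].

```cpp
// Sketch of LASeR's per-band lane-feature extraction (steerable filter + SMO).
// Assumes the IPM image is a CV_32F grayscale cv::Mat; sigma, thresholds,
// band height and delta are illustrative placeholders, not the values in [1].
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// 5x5 kernel following Eq. (1): G0(x,y) = -(2x / sigma^2) * exp(-(x^2 + y^2) / sigma^2)
cv::Mat makeG0(float sigma) {
    cv::Mat k(5, 5, CV_32F);
    for (int y = -2; y <= 2; ++y)
        for (int x = -2; x <= 2; ++x)
            k.at<float>(y + 2, x + 2) =
                -(2.0f * x / (sigma * sigma)) *
                std::exp(-(x * x + y * y) / (sigma * sigma));
    return k;
}

// Returns candidate lane-feature columns for one scan band of the IPM image.
std::vector<int> laneFeaturesInBand(const cv::Mat& ipm, int bandTop, int hB) {
    cv::Mat band = ipm.rowRange(bandTop, bandTop + hB);
    cv::Mat resp;
    cv::filter2D(band, resp, CV_32F, makeG0(1.0f));      // Eq. (2): B_i convolved with G0

    cv::Mat Ep = resp >  20.0f;                           // E+ : light-to-dark transitions
    cv::Mat Em = resp < -20.0f;                           // E- : dark-to-light transitions

    cv::Mat pPlus, pMinus;                                // vertical (column) projections
    cv::reduce(Ep / 255, pPlus, 0, cv::REDUCE_SUM, CV_32F);
    cv::reduce(Em / 255, pMinus, 0, cv::REDUCE_SUM, CV_32F);

    const int delta = 4;                                  // expected marking width in pixels
    std::vector<int> features;
    for (int x = 0; x + delta < pPlus.cols; ++x) {
        // Eq. (3): a dark-to-light edge at column x matched with a light-to-dark
        // edge delta pixels to its right, i.e. a bright marking of width ~ delta.
        float K = pPlus.at<float>(0, x + delta) * pMinus.at<float>(0, x);
        if (K > 3.0f) features.push_back(x);              // threshold on K
    }
    return features;
}
```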

Applying a suitable threshold on K, together with the road model, we get the positions of the left and right lanes in the i-th scan band B_i, denoted by

    P_{B_i} = {P_L(x_L, y_i), P_R(x_R, y_i)}     (4)

Therefore, we get N_B such positions in the N_B scan bands (as shown in Fig. 3), resulting in P = {P_{B_i}}; these positions are associated with each other using a road model and are also tracked using extended Kalman tracking. More details are explained in [1].

Fig. 3. IPM image I_W of the input image showing the scan bands.

Considering the resource constraints of the Snapdragon processor described previously in Section II, LASeR is well suited for such platforms because, unlike conventional lane estimation algorithms, it processes only selected bands of the image to detect lane features. Additionally, the band-based approach enables scalability of the algorithm, i.e., LASeR can function with a smaller number of scan bands and yet detect the lanes. In Section IV-B, we demonstrate the trade-offs between accuracy and computation time when implementing LASeR on the Snapdragon TM processor. The band-based approach in LASeR also enables parallelism of the lane estimation algorithm on the multiple computing cores of the Snapdragon TM processor. This is achieved by processing groups of bands in parallel on the four cores that are available on a Snapdragon TM processor.

B. Lane Drift and Change Warnings in Snap-DAS

A lane departure warning is an integral part of many commercial ADAS such as Mobileye [12]. In most such systems, a warning is issued after the vehicle crosses the lane without the turn indicator switched on. Referring to Fig. 4, when the vehicle departs from the lane, the warning system issues a warning as shown in Fig. 4. There are two issues with such warnings. First, the warning is issued after the departure occurs, and hence such a warning could be late if there are vehicles in the neighboring lane. Second, lane departures do not imply lane changes, i.e., lane departures are unintentional, and the driver often turns the steering wheel to re-enter the original ego-lane, as depicted in Fig. 4. The warnings in most existing systems do not differentiate between a lane change and a lane departure. Therefore, they cannot keep track of the vehicle re-entering the original ego-lane.

Fig. 4. Lane departure warning in conventional ADAS. Notice that the system does not keep track of whether the vehicle is within the original lane or the next lane.

Snap-DAS issues a warning for lane drift and includes lane departure as part of the lane drift warning. Additionally, Snap-DAS detects lane change events separately. This is illustrated in Fig. 5(a) & (b). When a lane departure happens as shown in Fig. 5(a), Snap-DAS issues a warning before the departure from the lane. The warning is issued for lane drifting, which is inclusive of the lane departure. Therefore, when the driver steers the vehicle back into the ego-lane as shown in Fig. 5, the warning is still for lane drifting because the vehicle is not in the middle of the original ego-lane. However, when a lane change occurs, Snap-DAS identifies it as a lane change in addition to lane drifting, as shown in Fig. 5(b). In this way, Snap-DAS also keeps track of the lane position information (which is missing in conventional lane departure warnings).

Fig. 5. Lane drift warning (a) and lane change warning (b) in Snap-DAS. Notice that during drifting/departure in (a), Snap-DAS keeps track of which lane the ego-vehicle is located in.

In order to issue the above two warnings, Snap-DAS employs the LASeR algorithm in a conservative manner. LASeR is first used to detect whether the ego-vehicle is drifting. This is performed using the lane drift detection method described in [13]. Given the positions of the left and right lane features in the nearest k bands, the x-coordinates of the lane features from LASeR are used to detect lane drifts using the following formulations:

    event = left drift   if ∀j: L_L^- < x_{L_j} < L_L^+  and  L_R^- < x_{R_j} < L_R^+     (5)
    event = right drift  if ∀j: R_L^- < x_{L_j} < R_L^+  and  R_R^- < x_{R_j} < R_R^+     (6)
    event = in lane      if ∀j: R_L^+ < x_{L_j} < L_L^-  and  R_R^+ < x_{R_j} < L_R^-     (7)

where 0 < j < k, L_L^- and L_L^+ indicate the lower and upper bounds of the x-coordinates of the left drift region with respect to the left lane, and L_R^- and L_R^+ indicate the lower and upper bounds of the x-coordinates of the left drift region with respect to the right lane. The other variables in the above equations denote similar bounds for the right drift with respect to the left and right lanes. The values of the bounds are set based on the sensitivity that is needed for lane drift detection; they are set to 0.5 m in our studies.

If a lane drift event is detected at time t_i based on the above conditions, a lane drift warning (for a left or right drift, accordingly) is issued by Snap-DAS. Simultaneously, a state machine is initiated to track the vehicle's maneuvers and determine whether the drifting leads to a lane change event. The state machine is shown in Fig. 6. It consists of five states - S_IL, S_LD, S_RD, S_LC and S_RC - corresponding to within lane, left drift, right drift, left lane change and right lane change, respectively. The state transition conditions are indicated on the edges of the state machine shown in Fig. 6. They depend on the lane drift estimation formulations listed in (5), (6) and (7), i.e., depending on the type of lane drift that is detected, the corresponding state transition occurs.

Fig. 6. State machine that combines left/right drifts to warn during lane change maneuvers.

Fig. 7. An example to illustrate the state machine transitions.

Let us explain the state machine in more detail using the example shown in Fig. 7. If the vehicle is within the lane (state S_IL) and no drift is detected (condition no drift), the state machine remains in S_IL, i.e., the in-lane state. If a right drift is detected using (6), the condition r drift becomes active and puts the state machine into state S_RD. If the right drift continues to exist, the state machine remains in the same state S_RD. If the vehicle continues to drift right, at a particular time instant t_k (shown in Fig. 7) the vehicle will cross into the right lane. In that scenario, the right lane boundary of the previous ego-lane becomes the left lane boundary of the new ego-lane at t_k (see Fig. 7). A lane change is then said to have occurred, and the vehicle is now seen as drifting left. Therefore, the left drift condition becomes active and Snap-DAS goes into the right lane change state S_RC until the no-lane-drift condition becomes active. Snap-DAS uses the lane drift conditions to accurately position the vehicle such that it eventually detects the lane change event, if a lane change actually occurs.
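As a sketch of how the drift conditions and the state machine might be combined in code, the fragment below classifies a drift event from the nearest-band lane positions and then advances the Fig. 6 state machine. The type names, bound values, and the treatment of the in-lane case are illustrative assumptions, not the actual Snap-DAS implementation.

```cpp
// Sketch of the lane-drift classifier (Eqs. 5-7) and the Fig. 6 state machine.
// Names and bound values are illustrative placeholders.
#include <vector>

enum class Drift { None, Left, Right };
enum class State { InLane, LeftDrift, RightDrift, LeftChange, RightChange };

struct Bounds { float lo, hi; };          // lower/upper x-bounds of a drift region

// Lateral x-coordinates of the left/right lane features in the k nearest bands.
struct LaneFeatures { std::vector<float> xL, xR; };

static bool allIn(const std::vector<float>& v, Bounds b) {
    for (float x : v) if (x <= b.lo || x >= b.hi) return false;
    return true;
}

// Eqs. (5)-(7): classify the drift event from the nearest-band lane positions.
// LL/LR are the left-drift regions w.r.t. the left/right lane; RL/RR the right-drift regions.
Drift classifyDrift(const LaneFeatures& f, Bounds LL, Bounds LR, Bounds RL, Bounds RR) {
    if (allIn(f.xL, LL) && allIn(f.xR, LR)) return Drift::Left;   // Eq. (5)
    if (allIn(f.xL, RL) && allIn(f.xR, RR)) return Drift::Right;  // Eq. (6)
    return Drift::None;                        // otherwise treated as within lane (Eq. 7)
}

// One step of the Fig. 6 state machine. A lane change is flagged when a right (left)
// drift flips to a left (right) drift, i.e. when the crossed boundary becomes the
// opposite boundary of the new ego-lane.
State step(State s, Drift d) {
    switch (s) {
        case State::InLane:
            return d == Drift::Left  ? State::LeftDrift
                 : d == Drift::Right ? State::RightDrift : State::InLane;
        case State::RightDrift:
            return d == Drift::Left  ? State::RightChange     // crossed into the right lane
                 : d == Drift::None  ? State::InLane : State::RightDrift;
        case State::LeftDrift:
            return d == Drift::Right ? State::LeftChange      // crossed into the left lane
                 : d == Drift::None  ? State::InLane : State::LeftDrift;
        case State::LeftChange:
        case State::RightChange:
            return d == Drift::None  ? State::InLane : s;     // stay until the drift clears
    }
    return s;
}
```

In this form, a drift that crosses the marking but steers back (as described next) never triggers the opposite-drift condition, so the machine simply returns to the in-lane state without ever entering a lane-change state.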
If the vehicle drifts to the right but does not make a lane change, i.e., it just crosses the lane marking and immediately steers back into its original lane, the state transition from right drift to lane change (referring to the example above) does not occur. Therefore, Snap-DAS continues to issue a lane drift warning.

IV. PERFORMANCE EVALUATION

In this section, we perform a detailed performance analysis of Snap-DAS. The evaluation is performed at multiple levels.

First, the performance of the Snapdragon TM processor in capturing data is presented. Then, LASeR's lane detection performance on Snap-DAS is discussed in detail, in terms of both accuracy and computational time. Thereafter, we show some sample results of Snap-DAS's functionality, i.e., lane drift and change warnings, and lane detection results.

A. Snapdragon TM Processor Performance

The Snap-DAS platform involves one camera connected to the development board. We implemented a simple program to capture data using the Snap-DAS system. In addition to the single-camera setup that was discussed in Section II, we also implemented a capture system using two cameras. This exercise helped to find the timing limits that one should work with if multiple cameras are used. Table I shows the timing in terms of frames per second for the two different camera configurations. In both cases, the video resolution is the same reduced resolution described in Section II. It can be seen that the one-camera setup achieves near real-time performance in terms of visual data capture. However, adding another camera reduces the frame rate by 25%.

TABLE I
VIDEO CAPTURE RATES USING THE SNAPDRAGON TM PROCESSOR
Camera setup | Frames per second (fps)
1 camera     |
2 cameras    |

B. LASeR on Snap-DAS

It was shown in Section III-A that LASeR offers scalability in terms of the number of bands that can be chosen to detect lane features. A lower number of scan bands results in fewer computations, which is more suitable for implementation on the Snapdragon TM processor. However, lowering the number of scan bands also affects the accuracy. Table II lists the time per frame in milliseconds required by LASeR to detect lanes on the Snapdragon TM processor. Three different configurations of the LASeR algorithm are listed by varying the number of scan bands from 16 to 8 to 4. It can be seen that there is a timing advantage of about 2 ms between the 16-band and the 4-band configurations.

TABLE II
COMPUTATIONAL TIMES OF LASER ON SNAP-DAS
Number of scan bands | Time per frame (ms)
16 bands |
8 bands  |
4 bands  |

We now consider the accuracy of LASeR on Snap-DAS for the different configurations of the algorithm. The lane position deviation (LPD) metric [14] is used to evaluate the lanes detected by LASeR for the three different numbers of scan bands. Table III lists the mean and standard deviation of the LPD in centimeters.

TABLE III
LANE POSITION DEVIATION (LPD) OF LASER ON SNAP-DAS
Number of scan bands | µ LPD (cm) | σ LPD (cm)
4 bands  | |
8 bands  | |
16 bands | |

It can be seen from Table III that LASeR performs most accurately, with the minimum mean LPD, when 16 scan bands are used to look for lane features. This is expected because LASeR is more accurate with a larger number of scan bands. However, the error increases by less than 2 cm when the number of scan bands is reduced to 4, thereby reducing the amount of computation on Snap-DAS.

C. Snap-DAS Warnings

Snap-DAS is designed to issue a variety of warnings related to ego-vehicle maneuvers within and across lanes. The set of icons related to ego-vehicle maneuvers, i.e., lane drifts and lane changes, is shown in Fig. 8.

Fig. 8. Set of warnings that Snap-DAS issues.

Snap-DAS was connected to the on-board camera as described in Section II, and a display monitor connected to Snap-DAS showed the warning/information icons overlaid on the input video stream. In Fig. 9, warnings issued by Snap-DAS due to ego-vehicle maneuvers are shown. A right lane drift warning is indicated in Fig. 9(a), and a lane change warning is issued for the ego-vehicle maneuver in Fig. 9(b). We briefly compare the functionality of Snap-DAS with one of the commercial vision-based ADAS, Mobileye TM 560 [12], which is used as the reference commercial system.
Mobileye provides a range of functions that include lane departure warning, vehicle detection in the ego-lane with headway monitoring, pedestrian detection, etc. Snap-DAS is currently designed for a set of functions related to the lanes in front of the ego-vehicle. Snap-DAS detects both lane drifts (which include lane departures) and lane change events, a capability not directly available in Mobileye.

V. SNAP-DAS FOR NATURALISTIC DRIVING STUDIES

Given the mobility of the Snap-DAS platform, it can be used for drive analysis of drivers in naturalistic driving studies (NDS) [4]. The operations of Snap-DAS can be used to create a log of the different events, such as the number of lane changes, lane drift events, etc. These events can then be used to create a drive analysis report that summarizes the semantics of the drive. Drive analysis report generation for naturalistic driving studies is explored in detail for offline analysis of pre-recorded naturalistic driving data in [4]. However, the same reports can be generated online using the mobile Snap-DAS platform while driving.

Fig. 9. Lane drift (a) and lane change (b) being detected during the drive by Snap-DAS.

This is an additional functionality that can be added as the final logging operation in Snap-DAS. Furthermore, applications could be developed that generate measures of driving styles, driver behaviors, etc. based on the drive analysis report. We introduce this as part of this paper to indicate future possibilities that can be explored using the mobile Snap-DAS platform.

VI. CONCLUDING REMARKS

In this paper, we presented the mobile platform for vision-based driver assistance called Snap-DAS. The hardware setup, the underlying algorithms that provide a set of functions, and the evaluation on live driving trials were elaborated. Snap-DAS opens a new set of possibilities in the area of mobile platforms for driver assistance. This paper primarily focuses on operations related to lanes using a single forward-looking camera in the ego-vehicle. Snap-DAS could be explored further to include more operations related to multiple perspectives and multiple objects. However, the addition of more functions also implies a higher computational load, and the Snapdragon TM processor must be carefully investigated at the processor level to accommodate higher computational requirements. Finally, the mobility offered by Snap-DAS could also be exploited to analyze drivers and driving styles as part of naturalistic driving studies, in order to develop measures that can aid in developing safety systems.

REFERENCES

[1] R. K. Satzoda and M. M. Trivedi, "Selective Salient Feature based Lane Analysis," in 2013 IEEE Intelligent Transportation Systems Conference, 2013.
[2] R. K. Satzoda and M. M. Trivedi, "Vision-based Lane Analysis: Exploration of Issues and Approaches for Embedded Realization," in 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops on Embedded Vision, 2013.
[3] B. M. Wilamowski, "Recent advances in in-vehicle embedded systems," in IECON, Annual Conference of the IEEE Industrial Electronics Society, Nov.
[4] R. Satzoda and M. Trivedi, "Drive analysis using vehicle dynamics and vision-based lane semantics," IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 1, pp. 9-18, Feb.
[5] F. Stein, "The challenge of putting vision algorithms into a car," in 2012 IEEE CVPR Workshops, June 2012.
[6] H. Liu, S. Chen, and N. Kubota, "Intelligent Video Systems and Analytics: A Survey," IEEE Transactions on Industrial Informatics, vol. 9, no. 3.
[7] A. Doshi, B. T. Morris, and M. M. Trivedi, "On-road prediction of driver's intent with multimodal sensory cues," IEEE Pervasive Computing, vol. 10, no. 3.
[8] R. K. Satzoda and M. M. Trivedi, "On enhancing lane estimation using contextual cues," IEEE Transactions on Circuits and Systems for Video Technology, vol. 99.
[9] Snapdragon Processors, Qualcomm.
[10] R. K. Satzoda, S. Martin, M. V. Ly, P. Gunaratne, and M. M. Trivedi, "Towards Automated Drive Analysis: A Multimodal Synergistic Approach," in IEEE Intelligent Transportation Systems Conference, 2013.
[11] A. Broggi, "Parallel and local feature extraction: a real-time approach to road boundary detection," IEEE Transactions on Image Processing, vol. 4, no. 2, Jan.
[12] Mobileye.
[13] R. K. Satzoda, P. Gunaratne, and M. M. Trivedi, "Drive Analysis using Lane Semantics for Data Reduction in Naturalistic Driving Studies," in 2014 IEEE Intelligent Vehicles Symposium (IV), 2014.
[14] R. K. Satzoda and M. M. Trivedi, "On Performance Evaluation Metrics for Lane Estimation," in 22nd International Conference on Pattern Recognition, 2014.
