On Driver Gaze Estimation: Explorations and Fusion of Geometric and Data Driven Approaches
2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Windsor Oceanico Hotel, Rio de Janeiro, Brazil, November 1-4, 2016

On Driver Gaze Estimation: Explorations and Fusion of Geometric and Data Driven Approaches

Borhan Vasli, Sujitha Martin, and Mohan Manubhai Trivedi

The authors are with the Laboratory for Intelligent and Safe Automobiles (LISA), University of California San Diego, La Jolla, CA 92092, USA. {bvasli, scmartin, mtrivedi}@ucsd.edu

Abstract: Gaze direction is important in a number of applications such as active safety and driver activity monitoring. However, there are challenges in estimating gaze robustly in real world driving situations. While the performance of personalized gaze estimation models has improved significantly, the performance of universal gaze estimation is lagging behind; one reason is that learning based methods do not exploit the physical constraints of the car. In this paper, we propose a system to estimate the driver's gaze from head and eye cues projected onto a multi-plane geometric environment, and a system which fuses the geometric method with a data driven learning method. Evaluations are conducted on naturalistic driving data containing different drivers in different vehicles in order to test the generalization of the methods. Systematic evaluations on this data set are presented for the proposed geometric gaze estimation method and for the geometric plus learning based hybrid gaze estimation framework; exploiting the geometric constraints of the car shows promising generalization results.

Index Terms: In-Cabin Activity Analysis, Human-Vehicle Interaction, Gaze Estimation, Take-over, Highly Automated Vehicles.

I. INTRODUCTION

In 2013, on average 8 people were killed and 1,161 were injured every day in the United States due to car accidents involving distracted drivers [1]. Distracted driving means that the driver is driving while doing another activity that takes his/her attention away from driving, and it increases the chance of motor vehicle accidents. There are three major types of distraction while driving [2]: visual (eyes off the road), manual (hands off the wheel) and cognitive (mind off driving). Early knowledge of driver behavior, in concert with the vehicle and the environment (e.g. surrounding vehicles, pedestrians), can help to recognize and prevent dangerous situations.

Driver gaze estimation is one of the key components for estimating and representing driver behavior, as seen in current research developments for driver assistance systems as well as for highly automated vehicles. In [4], Ohn-Bar et al. explored early prediction of maneuvers such as overtake and brake, where driver related cues (e.g. head, eyes) showed earlier and stronger predictive importance compared to surround and vehicle cues. Li et al. [6] explored the predictive importance of the driver's gaze for maneuver and secondary task detection; they exploited the finding that the duration and frequency of mirror-checking actions differ among maneuvers, secondary task performance and baseline/normal driving. Furthermore, gaze behavior has been studied in the context of how long it takes to get the driver back into the loop when engaged in a non-driving secondary task with automation in a dynamic driving simulator [7].

Fig. 1: Gaze zones and multi-plane environment. 1: Windshield, 2: Right Mirror, 3: Left Mirror, 4: Infotainment Panel, 5: Rearview Mirror, and 6: Speedometer.
Therefore, estimating driver gaze and understanding gaze behavior is of increasing importance in the advancement of driver assistance and highly automated vehicles. Vision based gaze estimation is especially desirable for its non-contact, non-intrusive nature. In the literature, vision based gaze estimation works have diverged at the point of universal versus personalized models. Recent works have shown impressive performance of personalized gaze estimation using machine learning approaches [8] [9], while the performance of universal gaze estimation using machine learning approaches is far behind [8]. One disadvantage of existing learning based systems is that they do not exploit the physical constraints of the car (e.g. the locations of and relative distances between gaze zones). The question then is: can personalized systems be made more general, with minimal effect on performance, by leveraging the geometric constraints of the car? This study introduces a geometric gaze estimation method and its fusion with a learning based method to raise the performance bar of universal gaze estimation.

II. RELATED WORKS

In the literature, vision based gaze estimation works fall into one of two categories: learning based methods or geometric methods. The work presented in [3] estimates gaze zones with a geometric method in which a 3-D car model is divided into different zones and 3-D gaze tracking is used to classify gaze into those zones; however, no evaluation at the gaze zone level is given.
Another geometric method, based on an earlier work [13], is presented in [5], where the number of gaze zones estimated is very limited (i.e. on-road versus off-road) and evaluations are conducted in stationary vehicles. In terms of learning based methods, there are two prevalent works. Tawari et al., in two separate studies, studied the importance of head pose, head dynamics and eye cues. One of the distinguishing contributions of their work is the design of features to represent observable driver cues in order to robustly estimate the driver's gaze: one is the dynamic representation of the driver's head [12] and another is the representation of horizontal and vertical eye gaze surrogates [9]; evaluations in both studies were conducted with naturalistic driving data. Another learning based method is the work presented by Fridman et al. [8] [14], where the evaluations are commendably done on a large dataset, but the design of the features representing the state of the head and eyes causes their classifier to overfit to user based models, with a sharp decrease in performance for global models. Our proposed method employs a geometric approach to classify gaze into six gaze zones, illustrated in Fig. 1. Furthermore, we compare its performance with the learning based method proposed in [9] and show that a hybrid of geometric and learning based methods gives better performance.

III. SYSTEM DESCRIPTION

The building blocks of the geometric gaze estimation method and of the existing learning based method are shown in Fig. 2. This section describes the main components of our work, which consist of the following steps: low level features, multi-plane geometric gaze estimation, learning based gaze estimation and hybrid gaze zone estimation. The following subsections describe in more depth how each of these components is implemented.

A. Low-Level Features

The gaze estimation system requires facial landmarks (for example, the position of the pupil in the image plane) and head pose (i.e. pitch, yaw and roll). For each frame, first the face is detected [10], then facial landmarks [11] and finally iris locations [9]. Fig. 3 illustrates the landmarks for a sample frame. Head pose is calculated from landmarks such as the eye corners, nose corners and nose tip, shown as yellow dots in Fig. 3. From the tracked landmarks and their relative 3-D configuration, a weak perspective projection model, POS (Pose from Orthography and Scaling), determines the rotation matrix and the corresponding yaw, pitch and roll angles of the head pose [12].

Fig. 3: Facial landmarks and head pose. A total of 51 landmarks are estimated for the face; the 6 marked in yellow are used for estimating the head pose.

To compute the eye gaze with respect to the head, the proposed system requires a 3-D model of the eye to get the relative position of the eye contour and ultimately the pupil in 3-D. We made a few assumptions in the eye modeling process based on the physical and biological structure of the eye. The first assumption is that the eyeball is spherical and has a constant radius across different people. The second is that the eyes need to be open and the pupils visible in order to estimate the gaze vector. Fig. 4a illustrates the eye contour e1, e2, ..., e6 on the 3-D eye model.
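As a concrete illustration of the head pose step described at the start of this subsection, the following is a minimal sketch of a POS-style (scaled orthographic) fit from the six rigid landmarks, in the spirit of [12]. The function names, the re-orthogonalization step and the Euler-angle axis naming are illustrative assumptions rather than the authors' implementation; the landmark arrays would come from the detectors cited above.

```python
# Minimal sketch of a POS-style (weak perspective) head-pose fit from facial
# landmarks. Assumes a hypothetical 3-D face model of the six rigid landmarks
# (eye corners, nose corners, nose tip); model values are placeholders.
import numpy as np

def pos_rotation(model_3d, image_2d):
    """Estimate a rotation matrix with the POS approximation: landmarks are
    assumed to project under a scaled orthographic camera."""
    p0, q0 = model_3d[0], image_2d[0]          # reference landmark
    A = model_3d[1:] - p0                      # (n-1, 3) model offsets
    x = image_2d[1:, 0] - q0[0]                # image offsets
    y = image_2d[1:, 1] - q0[1]
    A_pinv = np.linalg.pinv(A)
    I, J = A_pinv @ x, A_pinv @ y              # scaled first two rotation rows
    i, j = I / np.linalg.norm(I), J / np.linalg.norm(J)
    k = np.cross(i, j)
    j = np.cross(k, i)                         # re-orthogonalize the basis
    return np.vstack([i, j, k])

def euler_angles(R):
    """Angles (degrees) from a rotation matrix, assuming a Z-Y-X composition;
    which angle is called yaw/pitch/roll depends on the camera convention."""
    about_y = np.degrees(np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2])))
    about_x = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    about_z = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return about_x, about_y, about_z
```

In practice the six tracked landmark positions from each frame are passed to pos_rotation together with their fixed 3-D model coordinates, and the resulting angles are the per-frame head pose.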
Since the 3-D eye model is used to find the relative position of the pupil with respect to the center of the eyeball P in Fig. 4b, the exact position of the eye contour is not crucial. By setting the eye contour once as in Fig. 4a, we find the transformation matrix that maps the 3-D points to 2-D points in the image plane. For each frame, the eye contour in world coordinates can then be generated from the eye landmarks in the image plane and the inverse transformation matrix. Finally, the 3-D position of the pupil can be estimated by using a barycentric coordinate transformation to map the pupil from the image plane to world coordinates [6]. The advantage of this transformation is that it preserves the relative distance of the pupil to each eye corner; therefore, the relative position of the pupil in 3-D is consistent with each image. Fig. 4b illustrates the result.

Fig. 4: (a) The 3-D eye model with the corresponding eye contour. (b) 2-D to 3-D transformation of the pupil using the barycentric coordinate transformation; the barycentric mapping places the pupil such that its relative distance to each eye corner is preserved.
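The barycentric lifting of the pupil can be sketched as follows. For illustration, a single triangle of eye-contour landmarks is used (the paper uses the six-point contour), and the landmark and model coordinates are placeholders rather than values from the paper.

```python
# Minimal sketch of the barycentric mapping used to lift the 2-D pupil into
# the 3-D eye model: barycentric weights are computed in the image plane and
# applied to the corresponding 3-D contour points.
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of 2-D point p with respect to triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def pupil_3d(pupil_2d, contour_2d, contour_3d):
    """Map the image-plane pupil onto the 3-D eye contour with the same
    weights, preserving its relative distance to each contour point."""
    weights = barycentric_weights(pupil_2d, *contour_2d)
    return weights @ contour_3d            # weighted sum of 3-D contour points

# Illustrative usage with placeholder landmark values (pixels / model units).
contour_2d = np.array([[120.0, 80.0], [150.0, 78.0], [135.0, 70.0]])
contour_3d = np.array([[-1.0, 0.0, 0.2], [1.0, 0.0, 0.2], [0.0, 0.5, 0.3]])
print(pupil_3d(np.array([134.0, 76.0]), contour_2d, contour_3d))
```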
Fig. 2: Block diagrams of the geometric method (left: head pose and landmarks, bias removal, car-constraint planes, gaze vector, projection of gaze onto the planes, SVM, gaze zone) and of the learning based method (right: head pose and landmarks, horizontal and vertical gaze angles, SVM, gaze zone).

B. Multi-plane geometric method

The ultimate goal of gaze zone estimation is to classify the projection (intersection) of the gaze vector onto the multi-plane framework. The model uses Unity, a cross-platform game engine, to generate a generic car model in world coordinates [15]. For consistency, the origin of the world coordinate system is set to the center of the driver's head. The planes are defined manually for the desired regions and scaled to the real ratios of a specific car [16]. Fig. 6 shows the planes in the 3-D generic car model. For this paper, we define 4 planes, representing the windshield, right mirror, left mirror and infotainment panel. This work can be extended to include rearview mirror and speedometer planes.

Fig. 5: The 3-D gaze estimation model. The gaze vector connects the center of the eyeball P and the 3-D position of the pupil. θ is the relative angle of the windshield plane with respect to the driver and L is the distance of the driver from the windshield. These parameters can be adjusted depending on the car and the user.

The gaze vector is defined as the vector that shows where the driver is looking with respect to the world coordinate system. The final gaze vector is obtained by rotating the reference gaze vector r_ref according to three factors: head pose, eye pose and bias removal. In general, the gaze vector is the vector connecting the origin of the world coordinate system, i.e. the center of the eyeball, to the pupil in world coordinates, as described in Fig. 5. r_ref is the gaze vector when the driver is facing forward, looking forward, at a normal distance from the windshield plane. In general, the rotation matrix is based on three angles \Phi_x, \Phi_y and \Phi_z, which determine the rotation about each axis:

Rot(\Phi) = R_z(\Phi_z) R_y(\Phi_y) R_x(\Phi_x)    (1)

where

R_x(\Phi_x) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\Phi_x & -\sin\Phi_x \\ 0 & \sin\Phi_x & \cos\Phi_x \end{bmatrix}    (2)

R_y(\Phi_y) = \begin{bmatrix} \cos\Phi_y & 0 & \sin\Phi_y \\ 0 & 1 & 0 \\ -\sin\Phi_y & 0 & \cos\Phi_y \end{bmatrix}    (3)

R_z(\Phi_z) = \begin{bmatrix} \cos\Phi_z & -\sin\Phi_z & 0 \\ \sin\Phi_z & \cos\Phi_z & 0 \\ 0 & 0 & 1 \end{bmatrix}    (4)

r_{final} = R_{Bias}^{T} R_{Eye} R_{Pose} r_{ref}    (5)

Fig. 6: 3-D model of the car and planes in the Unity environment. Planes are defined manually and their equations are used to find the intersection points.

As mentioned before, pose = [\Phi_x \Phi_y \Phi_z] contains the yaw, pitch and roll angles. Given the pose values, one can find the rotation matrix R_Pose, which rotates r_ref relative to the camera. Similarly, R_Eye is the rotation matrix describing the movement of the pupil with respect to the head. Lastly, the rotation R_Bias removes the effect of camera placement. Pose values are expressed in the camera coordinate system; therefore a different camera placement yields a different pose and consequently a different R_Pose. R_Bias is defined to translate any given pose with respect to hp_bias; it makes the pose values pose = [0 0 0] when the driver is at a normal distance from the windshield, with a frontal face, looking forward. This step makes the framework independent of camera placement across different setups. hp_bias can be obtained from the first few frames as an initialization step. r_final is the final gaze vector, as computed by Eq. (5).
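A short sketch of Eqs. (1)-(5) follows: the elementary rotations are composed and applied to a reference gaze vector. The forward direction chosen for r_ref and the example angles are assumptions for illustration only.

```python
# Minimal sketch of the gaze-vector rotation in Eqs. (1)-(5): the reference
# gaze vector is rotated by head pose and eye pose, and a bias rotation
# compensates for camera placement.
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rotation(phi_x, phi_y, phi_z):
    # Eq. (1): Rot = R_z(phi_z) R_y(phi_y) R_x(phi_x)
    return rot_z(phi_z) @ rot_y(phi_y) @ rot_x(phi_x)

def final_gaze_vector(head_pose, eye_pose, head_pose_bias,
                      r_ref=np.array([0.0, 0.0, 1.0])):  # assumed forward axis
    # Eq. (5): r_final = R_bias^T R_eye R_pose r_ref
    R_pose = rotation(*head_pose)
    R_eye = rotation(*eye_pose)
    R_bias = rotation(*head_pose_bias)
    return R_bias.T @ R_eye @ R_pose @ r_ref

# Illustrative usage with placeholder head, eye and bias angles (radians).
r_final = final_gaze_vector(head_pose=np.radians([0, 15, 0]),
                            eye_pose=np.radians([-5, 0, 0]),
                            head_pose_bias=np.radians([0, 2, 0]))
```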
In order to classify the gaze zones, we find the intersection of the final gaze vector with the plane(s) defined in Sec. III-B. The classification is done with the MATLAB LIBSVM toolbox [17]. Training the system requires the coordinates of the intersection points along with their labels (gaze zones). Due to the linear separability of the data in Fig. 7, a linear multi-class SVM is suitable and avoids overfitting.

C. Learning Based Method

This section reviews gaze zone estimation relying on a learning based algorithm. The original work [9] employs a random forest classifier with head pose and horizontal and vertical gaze angles as features. The horizontal gaze angle is estimated from the angle subtended by an eye in the horizontal direction and the location of the pupil on the image plane. The vertical gaze angle, on the other hand, is modeled from the area of the upper eyelid contour. A detailed description and mathematical model of this approach can be found in [9]. For the sake of comparison we used the same features and a linear SVM classifier; as shown in the results section, the accuracy is close to that of the original work.

D. Hybrid gaze zone estimation framework

Finally, we describe the fusion of the geometric and data driven approaches. We combine the features obtained from both methods and input them to the SVM classifier: for each frame the input is X = [X1 X2], where X1 is the intersection point of the gaze vector with the closest plane, and X2 contains the head pose and the horizontal and vertical eye gaze surrogates for each eye as described in Sec. III-C.

IV. EXPERIMENTAL EVALUATIONS

This section evaluates the accuracy of our system on different tasks. Two sets of data were collected at the Laboratory for Intelligent and Safe Automobiles. The data consist of two continuous sessions of naturalistic driving with different drivers, different cars and different driving locations, collected with two cameras, one mounted near the rear-view mirror and one near the A-pillar; in this paper we only use the data from the first camera. The data are labeled manually into the 6 categories shown in Fig. 1. We excluded frames in which any of the following occurs: a) blink, b) transition between different activities, c) head out of camera range or d) landmarks and pose information not available. In addition, the frames for looking forward are down-sampled in order to have a similar number of examples for each class.

TABLE I: Dataset summary.
                           Dataset 1          Dataset 2
Environment                Urban & Freeway    Urban & Freeway
Total frames
Total annotated frames
Used frames
Number of gaze zones       6                  6

TABLE II: Number of activities per gaze zone and number of frames for each zone class after down-sampling the forward zone; the occurrence of all zones is about the same in both datasets. Rows: Forward, Right Mirror, Left Mirror, Infotainment Panel, Rearview Mirror, Speedometer; columns: activities and frames for Dataset 1 and Dataset 2.

In order to ensure that evaluation is performed on temporally separated frames, each dataset is split by sorting the frames of each gaze zone in time and picking the first 80 percent as training data and the rest as testing data; the training and testing data are therefore well separated temporally. Table I summarizes the total number of frames collected along with the number of annotated frames for each dataset. Table II provides the number of activities recorded for each gaze zone, where an activity is the period over which the driver looks at the same gaze zone.
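Before turning to the results, the geometric gaze-zone classification and the hybrid feature construction of Sec. III can be sketched as follows. The plane positions, normals and feature layout are hypothetical placeholders, and scikit-learn's linear SVM stands in for the MATLAB LIBSVM toolbox used in the paper.

```python
# Minimal sketch of the geometric classification step: intersect the final
# gaze vector with manually defined planes and use the intersection point of
# the closest plane as the feature for a linear SVM (Python 3.8+).
import numpy as np
from sklearn.svm import LinearSVC

# Each plane: (point on plane, unit normal), in a head-centred world frame.
# These coordinates are illustrative, not the car geometry from the paper.
PLANES = {
    "windshield":   (np.array([0.0, 0.0, 0.8]),   np.array([0.0, 0.0, -1.0])),
    "right_mirror": (np.array([0.9, -0.1, 0.7]),  np.array([-0.7, 0.0, -0.7])),
    "left_mirror":  (np.array([-0.7, -0.1, 0.6]), np.array([0.7, 0.0, -0.7])),
    "infotainment": (np.array([0.4, -0.4, 0.6]),  np.array([0.0, 0.6, -0.8])),
}

def intersect(origin, direction, plane_point, plane_normal):
    """Intersection of the gaze ray with a plane, or None if parallel/behind."""
    denom = direction @ plane_normal
    if abs(denom) < 1e-9:
        return None
    t = ((plane_point - origin) @ plane_normal) / denom
    return origin + t * direction if t > 0 else None

def geometric_features(r_final, origin=np.zeros(3)):
    """Intersection point with the closest plane hit by the gaze ray
    (zeros if no plane is hit)."""
    hits = [p for pt, n in PLANES.values()
            if (p := intersect(origin, r_final, pt, n)) is not None]
    if not hits:
        return np.zeros(3)
    return min(hits, key=lambda p: np.linalg.norm(p - origin))

def hybrid_features(r_final, learning_features):
    """X = [X1 X2]: geometric intersection plus the head-pose and
    eye-gaze-surrogate features of the learning based method."""
    return np.concatenate([geometric_features(r_final), learning_features])

# Training on labelled frames (gaze-zone labels 1..6) with a linear SVM:
# clf = LinearSVC().fit(X_train, y_train); zones = clf.predict(X_test)
```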
We evaluate the proposed framework by conducting experiments on the datasets described above. As discussed in Sec. III-B, the final gaze vector is affected by head movement, eye movement, and bias removal. The contributions of head and eye information are shown in Table III. There is a 20% improvement in accuracy when the rotation matrix R_Eye is added to the gaze vector calculation. This factor becomes more crucial for classes that have similar head poses but different eye directions (e.g. looking forward versus at the speedometer). One of the key goals of this paper was to analyze the performance of the new multi-plane system, so the system is tested as different numbers of planes are added to the framework. Table IV shows the performance of the system as more planes are added. Having all four planes described in Sec. III-B produces the best results, with an average accuracy of 75%. It is also worth noting the 5% improvement when using four planes versus one plane on both of our datasets. Optimizing the planes for various car models and defining more planes (ideally one for each gaze zone) should further enhance system performance. Lastly, Table V shows the performance of the hybrid system that includes the geometric features in the learning based framework. The first row shows accuracies of 93.72% and 84% for dataset 1 and dataset 2 respectively without any plane (only head pose and gaze values fed to the classifier); these results are similar to the reported accuracy of 93.6% of the original work using a random forest classifier. Adding the multi-plane features improves the accuracy by 2% and 4% on the respective datasets. The results show that the jump in accuracy from the multi-plane system alone to the learning based system is larger than the additional gain obtained by combining the two methods. However, the geometric method is not computationally expensive and has its own advantages, so it is useful to consider it as part of a hybrid model.
Fig. 7: Projection of the gaze vector onto a single plane (the windshield plane only) on the left, and onto three planes (blue: windshield, green: right mirror, red: left mirror) on the right. The multi-plane framework has an advantage for classes with similar head pose: considering the Looking Forward and Right Mirror gaze intersections, the multi-plane framework gives more separability between the two classes.

Table VI shows the confusion matrix using only the geometric method, and Table VII the confusion matrix for the hybrid system. Note that in Tables VI and VII, the numbered gaze zones correspond, in order, to: Forward, Right Mirror, Left Mirror, Infotainment Panel, Rearview Mirror, and Speedometer. Direct comparison of the results shown in this work to results reported in the literature cannot be made, mainly because each work evaluates on its own data set. Such comparisons can be made only when naturalistic driving datasets for gaze estimation, similar to the VIVA face detection and head pose data set [18], are available to benchmark different approaches.

TABLE III: Effect of eye information on gaze detection accuracy (Dataset 1).
Method                  Accuracy
Head + 1 plane          50.9%
Head + 3 planes
Head + eye + 1 plane

TABLE IV: Geometric method accuracy.
Method       Dataset 1    Dataset 2
1 plane
3 planes     74.5%        72.3%
4 planes                  74%

TABLE V: Hybrid system accuracy.
Method            Dataset 1    Dataset 2
LBM + 0 planes    93.72%       84%
LBM + 3 planes                 87%
LBM + 4 planes                 88.4%

TABLE VI: Confusion matrix (true vs. recognized gaze zone) for dataset 1 using 4 planes.

TABLE VII: Confusion matrix (true vs. recognized gaze zone) for the hybrid method on dataset 1, using 4 planes and the features described in Sec. III-C.

V. CONCLUDING REMARKS

Vision based gaze estimation is a fundamental building block for the development of driver assistance systems and highly automated vehicles; its potential applications include detecting non-driving secondary tasks, measuring the driver's awareness of the surrounding environment, predicting maneuvers, etc. In the literature, much progress and performance improvement has been shown with personalized gaze estimation models based on machine learning approaches, with little to no use of the physical constraints of the car. Towards this end, this study introduced a new geometric system for gaze estimation that exploits the geometric constraints of the car: the method is based on knowing the 3-D gaze of the driver and defining multiple planes representing the gaze zones of interest. This work showed promising results when going from one plane to multiple planes for gaze zone estimation, and showed further improvement with the geometric plus learning based hybrid approach. Future work is in the direction of using a 3-D model of the car to represent the geometric constraints and further exploring fusions with learning based methods.
Fig. 8: Each row of images shows different activities of the driver from the same dataset. From left to right: Left Mirror, Right Mirror, Rearview Mirror, and Radio. It also illustrates some of the difficulties for gaze estimation, such as eye self-occlusion for the left mirror (first column) and pupil occlusion for the radio (third column).

ACKNOWLEDGMENT

The authors would like to thank their colleagues, particularly Kevan Yuen for helping with the data collection process and Aida Khosroshahi for her comments and suggestions to improve this work. The authors gratefully acknowledge sponsorship from our industry partners.

REFERENCES

[1] National Highway Traffic Safety Administration, "Distracted Driving: 2013 Data," Traffic Safety Research Notes, April.
[2] National Highway Traffic Safety Administration, "Policy Statement and Compiled FAQs on Distracted Driving."
[3] C. Ahlstrom, K. Kircher, and A. Kircher, "A gaze-based driver distraction warning system and its effect on visual behavior," IEEE Transactions on Intelligent Transportation Systems.
[4] E. Ohn-Bar, A. Tawari, S. Martin, and M. M. Trivedi, "On Surveillance for Safety Critical Events: In-Vehicle Video Networks for Predictive Driver Assistance Systems," Computer Vision and Image Understanding.
[5] F. Vicente, Z. Huang, X. Xiong, F. Torre, W. Zhang, and D. Levi, "Driver Gaze Tracking and Eyes Off the Road Detection System," IEEE Transactions on Intelligent Transportation Systems.
[6] N. Li and C. Busso, "Detecting Drivers' Mirror-Checking Actions and Its Application to Maneuver and Secondary Task Recognition," IEEE Transactions on Intelligent Transportation Systems.
[7] C. Gold, D. Damböck, L. Lorenz, and K. Bengler, "Take over! How long does it take to get the driver back into the loop?," Human Factors and Ergonomics Society Annual Meeting.
[8] L. Fridman, P. Langhans, J. Lee, B. Reimer, and T. Victor, "Owl and Lizard: Patterns of Head Pose and Eye Pose in Driver Gaze Classification," arXiv preprint.
[9] A. Tawari, K. H. Chen, and M. M. Trivedi, "Where is the driver looking: Analysis of head, eye and iris for robust gaze zone estimation," International IEEE Conference on Intelligent Transportation Systems.
[10] K. Yuen, S. Martin, and M. M. Trivedi, "On Looking at Faces in an Automobile: Issues, Algorithms and Evaluation on Naturalistic Driving Dataset," 23rd International Conference on Pattern Recognition (ICPR).
[11] K. Yuen, S. Martin, and M. M. Trivedi, "Looking at Faces in a Vehicle: A Deep CNN Based Approach and Evaluation," IEEE Conference on Intelligent Transportation Systems (ITSC).
[12] A. Tawari and M. M. Trivedi, "Robust and continuous estimation of driver gaze zone by dynamic analysis of multiple face videos," IEEE Intelligent Vehicles Symposium Proceedings.
[13] T. Ishikawa, S. Baker, I. Matthews, and T. Kanade, "Passive driver gaze tracking with active appearance models," 11th World Congress on Intelligent Transport Systems.
[14] L. Fridman, P. Langhans, J. Lee, and B. Reimer, "Driver Gaze Region Estimation without Use of Eye Movement," IEEE Intelligent Systems.
[15] "Create and Connect with Unity." Unity. Web. 16 June.
[16] "Vehicle Specs Database." FARO Technologies Inc. Web. 12 June.
[17] C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology.
[18] S. Martin, K. Yuen, and M. M. Trivedi, "Vision for Intelligent Vehicles and Applications (VIVA): Face Detection and Head Pose Challenge," IEEE Intelligent Vehicles Symposium (IV).