Vision-based bicycle / motorcycle classification


Stefano Messelodi (a), Carla Maria Modena (a), Gianni Cattoni (b)

(a) ITC-irst, Via Sommarive 18, Povo, Trento, Italy
(b) Università degli Studi di Trento, Via Mesiano 77, Trento, Italy

Abstract

We present a feature-based classifier that distinguishes bicycles from motorcycles in real-world traffic scenes. The algorithm extracts visual features focusing on the wheel regions of the vehicles, and splits the problem into two sub-cases depending on the computed motion direction. The classification is performed by non-linear Support Vector Machines. Tests lead to a successful vehicle classification rate of 96.7% on video sequences taken from different road junctions in an urban environment.

Key words: Traffic Monitoring, Feature Extraction, Support Vector Machine, Vehicle Classification, Image Analysis

Email address: messelod@itc.it (Stefano Messelodi). Preprint submitted to Elsevier Science, 2 February 2007.

1 Introduction

Image analysis techniques have been shown to be effective and cost-competitive in various traffic control applications (Kastrinaki et al. 2003, Foresti et al.

2003, Hu et al. 2004). In spite of some drawbacks, mainly related to a dependence on scene illumination, vision-based systems offer several advantages over traditional traffic control techniques: low impact on the road infrastructure, low maintenance costs, and the possibility for a remote operator to receive images. Furthermore, a vision-based system can be adapted to detect and classify particular kinds of vehicles on the basis of visual features, as is the case when discriminating between bicycles and motorcycles. This capability provides important information to traffic managers, in order to evaluate the need to build bicycle lanes or to establish correlations between traffic and air or acoustic pollution. When necessary, bicycle counting is usually performed manually by transportation personnel, or automatically by means of special purpose-built equipment. For temporary data acquisition sessions, pneumatic rubber tube detectors placed across the road are often used. For continuous monitoring, permanent detectors are used, i.e. devices such as loop detectors and infrared or video detection systems. A comparison of different bicycle detection technologies is included in SRF Consulting Group (2003). Few vision-based algorithms devoted to bicycle counting have been proposed in the literature (Dukesherer and Smith 2001, Rogers and Papanikolopoulos 2000). The algorithm proposed by Rogers and Papanikolopoulos (2000) detects objects moving through the scene by means of a background differencing technique. The estimation of the movement direction enables their system to localize the wheels by searching for ellipses in the edge map using the generalized Hough transform. They claim to be able to count the number of bicycles on a trail with an accuracy of up to 70%, under a variety of weather conditions.
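The circle Hough voting at the core of these wheel detectors can be sketched as follows. This is an illustrative reconstruction, not the cited authors' code: it assumes a single known wheel radius rather than the constrained radius range they derive from the camera set-up, and the synthetic test data are ours.

```python
import numpy as np

def hough_circles(edge_points, radius, shape, n_angles=64):
    """Accumulate votes for circle centres of a known radius.

    Each edge point votes for every centre lying at distance `radius`
    from it; a true wheel centre collects votes from all the edge
    points on its rim and shows up as a peak in the accumulator.
    """
    acc = np.zeros(shape, dtype=np.int32)
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    for (x, y) in edge_points:
        cx = np.round(x - radius * np.cos(angles)).astype(int)
        cy = np.round(y - radius * np.sin(angles)).astype(int)
        ok = (cx >= 0) & (cx < shape[1]) & (cy >= 0) & (cy < shape[0])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic wheel: edge points on a circle of radius 10 centred at (30, 20)
theta = np.linspace(0, 2 * np.pi, 120, endpoint=False)
pts = [(30 + 10 * np.cos(t), 20 + 10 * np.sin(t)) for t in theta]
acc = hough_circles(pts, radius=10, shape=(40, 60))
peak = np.unravel_index(np.argmax(acc), acc.shape)  # (row, col) = (cy, cx)
```

In practice a real detector would scan a range of radii and apply non-maximum suppression to the accumulator before accepting candidate wheels.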
Furthermore, the authors refer to a previous method (Rogers and Papanikolopoulos 1999) in which bicycles were detected in the image by a template-matching technique, although they conclude that the Hough-based method is a better alternative, mainly for computational reasons. Dukesherer and Smith (2001) propose a hybrid approach: first the Hough transform for circles, in order to localize wheel regions in the image, and then the Hausdorff distance for matching the candidates with a simple bicycle template, where a bicycle is described as two arcs of circle separated by an approximately known distance. Knowledge about the camera set-up, along with the expected bicycle position, is used to constrain the radius range. They report a 96% detection rate on a set of 25 images. In both methods bicycles are viewed from the side of a bicycle lane, and the authors make no mention of problems related to discrimination from motorcycles. As far as we know, the problem of distinguishing bicycles from motorcycles has not been investigated in the literature, at least using image analysis techniques. There are two major difficulties to cope with: the wide range of visual appearances of the patterns to be classified, and their typically low resolution, e.g.

50 × 60 pixels, in a standard traffic surveillance system. The former problem depends both on the different poses of the vehicle with respect to the camera and on the variety of bicycle and motorcycle models. The algorithm proposed in this paper is a module of a video-based traffic monitoring system, named scoca (Messelodi et al. 2005b), developed for the real-time collection of traffic parameters at road intersections. The functionalities provided by the system are: vehicle detection, counting, classification, average speed estimation, and compilation of the turning movement table for each monitored intersection. Vehicle classes recognized by the system are: bicycle, motorcycle, car, van, lorry, urban bus and extra-urban bus. The classification is mainly based on the comparison of the detected vehicles with a set of 3D models. As a matter of fact, certain classes are not distinguishable using only macroscopic data about shape and size; this is the case for motorcycles and bicycles. For this reason scoca is endowed with specialized classifiers that solve these ambiguities. The algorithm proposed in this paper implements one such classifier, which exploits the tracking results provided by scoca, e.g. the location of the vehicle in the image and its direction of movement. The underlying idea is to divide the problem into two different contexts, or subproblems, in order to increase the uniformity of the visual appearance of the vehicles to be classified. The purpose is to roughly divide the set of vehicle images into two groups, the first depicting a side view of the vehicle and the second containing front or rear views. The context switching is simply controlled by a threshold on the estimated direction of motion with respect to the camera optical axis. For each subproblem a set of features is extracted from the image and used to discriminate between bicycle and motorcycle.
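This context switch can be sketched in a few lines. The threshold value of 25 degrees is the one reported later for the early classifier; the folding of opposite directions is our assumption, since both front and rear views move nearly parallel to the optical-axis projection.

```python
def select_context(theta_deg, t_theta=25.0):
    """Assign a vehicle view to the 'front_rear' or 'side' context.

    theta_deg: estimated motion direction on the road plane, measured
    from the Y axis (the projection of the camera optical axis).
    Directions close to the Y axis, in either sense, give front or
    rear views; the rest are treated as side views.
    """
    # Angular distance to the Y axis, folding the two senses together
    a = abs(((theta_deg + 90.0) % 180.0) - 90.0)
    return "front_rear" if a < t_theta else "side"
```

For the example views shown later in the paper, θ = 20 and θ = 200 fall in the front/rear context, while θ = 90 and θ = 100 fall in the side context.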
In particular, we propose to focus the feature selection on the wheel regions of the vehicle. First, we show that this is a good choice through the implementation and evaluation of two heuristic-based classifiers, one for each subproblem. Next, we present two Support Vector Machine classifiers fed with the projection profiles computed over the image region depicting the wheels of the vehicle. Experiments have been performed on real images coming from different junctions. The paper is organized as follows: the next section briefly describes the traffic monitoring system scoca. Section 3 introduces the feature selection on which a first classifier, dedicated to distinguishing between motorcycles and bicycles, relies; Section 4 presents the use of Support Vector Machines for this problem. Section 5 shows the results of the experimental sessions. Finally, Section 6 provides some considerations and conclusions.

2 The traffic monitoring system

The aim of scoca is to collect statistical information about traffic flowing through road intersections. The system has been designed following modularity and flexibility criteria in order to work effectively at road junctions characterized by different topologies and different acquisition set-ups. For this purpose the operator provides the system with some initial data, i.e. the camera intrinsic and extrinsic parameters. The input sequence of images is analyzed, at 25 frames per second, by two main subprocesses running in parallel. They are devoted, respectively, to the detection and tracking of objects crossing the scene, and to the extraction of traffic parameters for each of them (class, speed, path). In order to properly place the bicycle/motorcycle classifier within the whole system, both modules are briefly described below. More details are provided in Messelodi et al. (2005b).

2.1 Object detection and tracking

Stationary or moving objects are detected by analyzing the difference between the current frame and a reference background image. The latter is computed and updated by means of a Kalman predictive filtering technique (Messelodi et al. 2005a). The image/background difference is then thresholded; very small foreground regions are filtered away, and the remaining ones are grouped into hypothetical vehicles by considering their distance and their possible overlap with the expected positions of the objects detected in the previous frame. The convex hull of each group of regions is a blob that represents the support set B of the object. This technique is typically robust and accurate, but its significant computational load suggests not applying it on a frame-by-frame basis, so as to leave room for the other tasks required by a real-time traffic surveillance system, e.g. classification and the detection of relevant traffic events.
For this reason the frame differencing step is applied once every few frames, and a more efficient tracking method is adopted in the intermediate frames: each object is tracked by selecting in the current image a set of small regions which are easily identifiable in the subsequent frames, i.e. regions containing edges or corners. The correspondences among these regions between two successive images provide information about the object movement. The result of this module is a set of objects, each one represented by a data structure that stores information about the different views of the object in the scene and its displacements between consecutive frames.
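A simplified sketch of the detection step follows. It assumes a static background image rather than the Kalman-updated one, omits the grouping into hypothetical vehicles and the convex-hull computation, and all names and threshold values are illustrative.

```python
import numpy as np

def detect_foreground(frame, background, diff_thresh=30, min_area=20):
    """Threshold the |frame - background| difference map and keep
    4-connected foreground regions larger than `min_area` pixels."""
    d = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    mask = d > diff_thresh
    labels = np.zeros(mask.shape, dtype=np.int32)
    regions, cur = [], 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue                      # pixel already assigned
        cur += 1
        labels[sy, sx] = cur
        stack, pix = [(sy, sx)], []
        while stack:                      # iterative flood fill
            y, x = stack.pop()
            pix.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = cur
                    stack.append((ny, nx))
        if len(pix) >= min_area:          # drop very small regions
            regions.append(pix)
    return regions
```

In the real system the surviving regions would then be merged into vehicle hypotheses and their convex hull taken as the support set B.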

2.2 Parameter extraction

The parameter extraction module analyzes the output of the previous module in order to provide a description in terms of class, speed and path for each detected and tracked object. For the purposes of this paper only the classification step needs a brief description. It works in two stages: a model-based classification step, followed by a feature-based one when needed. The model-based classifier makes use of a set of 3D models which provide a rough description of the shapes of the different vehicle categories. Eight models are adopted: a single model (called cycle) represents motorcycles and bicycles, three models represent cars, two models represent vans, and a single model represents lorries and buses. In addition, a model is defined to represent pedestrians, but it is used only to detect false alarms. For each view of the moving object, the 3D model classifier considers the best match with each 3D model placed in different positions and along different directions on the ground plane. The match score is computed as the overlap between the support set B_j of the j-th view of the object and the projection of the i-th model onto the image plane. The object is then assigned to the 3D model having the highest average score computed over the set of its views. Focusing on the best 3D model, a list is associated with each object view containing the following data: the estimated position and orientation on the ground plane, the overlap score, and a number in the interval (0, 1] (the inside factor) that specifies what fraction of the projected model is visible in the image. If the best model corresponds to a single vehicle category, then the classification terminates straightforwardly. Otherwise, specialized classifiers are applied in order to determine the correct class among the vehicle categories associated with that 3D model. These classifiers use specific features extracted from the views of the vehicle.
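The model-matching step can be illustrated as follows. The paper specifies only that the score is the overlap between the support set and the projected model; the intersection-over-union form and all names here are our assumptions.

```python
import numpy as np

def overlap_score(support, model_proj):
    """Overlap between the object's support set and a projected 3D
    model, both given as boolean masks on the image plane.
    Assumed here as intersection-over-union."""
    inter = np.logical_and(support, model_proj).sum()
    union = np.logical_or(support, model_proj).sum()
    return inter / union if union else 0.0

def classify_by_model(views, model_projections):
    """Assign the object to the model with the highest overlap score
    averaged over all of its views.

    views: list of boolean support masks, one per view.
    model_projections: dict mapping model name to a list of projected
    masks (best placement per view, assumed precomputed).
    """
    best, best_score = None, -1.0
    for name, projs in model_projections.items():
        score = np.mean([overlap_score(v, p) for v, p in zip(views, projs)])
        if score > best_score:
            best, best_score = name, score
    return best, best_score
```

If the winning model is the cycle model, the specialized bicycle/motorcycle classifier described next takes over.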
3 Feature selection

At the end of the model-based classification phase, an object descriptor stores the following information for each view of the vehicle: B, the support set of the unknown vehicle, i.e. a binary mask corresponding to the convex hull of the vehicle in the image; I, the subimage of the input image corresponding to the bounding box of B; D, the absolute difference map between I and the background image in the

same location; the displacement vector of the vehicle blob in the image plane with respect to the previous frame; (x0, y0, θ), the estimated position and direction of the vehicle on the road; the score of the model-based classification step; and the inside factor, i.e. the fraction of the real-world positioned model that is visible in the image. An example of this information is reported in Figure 1. The world coordinate system is chosen by placing the (X, Y) plane in correspondence with the road plane, the origin at the vertical projection of the camera optical center onto (X, Y), and the Y axis directed as the projection of the optical axis onto the road plane.

Fig. 1. Information associated with a view of a vehicle labeled as cycle by the model-based classifier: region I of the input image, background difference map D, support set B (here its boundary overlapped on I). Other information: displacement = ( 12, 23), (x0, y0, θ) = (1 504, , 50), model-based classification score = 0.85, inside factor = 1.0.

In order to deal with the great variability of the visual appearance of a motor/bicycle, mainly due to the different perspectives under which it can be observed by the camera, we choose to distinguish two different contexts according to the moving direction of the vehicle, θ, with respect to the Y axis. If θ is close to the Y axis direction, i.e. the angle between them is less than a fixed value T_θ, a front or rear view of the moving vehicle appears in the image (Figure 2, a-b). Otherwise, the image depicts a side view of the vehicle (Figure 2, c-d). The selection of the threshold values will be discussed at the end of this section.

Side view. In this case, the underlying idea of the algorithm is that the luminance of the region inside the wheels of a bicycle is more similar to the background than the same region of a motorcycle. The purpose is

Fig. 2. (a) Front view of a bicycle with estimated motion direction θ = 200 in the real world. (b) Rear view of a motorcycle: θ = 20. (c) Side view of a bicycle: θ = 90. (d) Side view of a motorcycle: θ = 100.

to localize the wheel regions in the image and compute the average value of D over those regions. The feature extraction proceeds as follows: (1) let ω be the direction of the displacement vector in the image plane; (2) compute the direction ω0 of the minimum bounding rectangle¹ (MBR) of the support set B, with direction constrained in a centered neighborhood (±5°) of ω; (3) estimate the location of the regions R1, R2 corresponding to the wheels: compute the projections p0 and p1 of the subimage of D in B, respectively along the direction ω0 and its normal; in order to reduce boundary noise (mainly due to shadow) in both directions, in particular in the wheel area, consider the portion p′0 of p0 between the 3rd and 100th percentile and the portion p′1 of p1 between the 1st and 99th percentile;

¹ MBR(ω,δ)(B) is the rectangle with minimum area which contains all the points of the set B and has a side with slope in the range (ω − δ, ω + δ).

the wheel regions are approximated by two rectangular regions R1 and R2 (Figure 3), obtained from the intersection of the backprojection of the first third of p′0 with the backprojections of the first and last thirds of p′1 (one third has been estimated by observing the proportions and positions of the wheels in a set of side images of ridden motor/bicycles); (4) compute the average of D over R1 ∩ B and R2 ∩ B, yielding two values S1 and S2, respectively, that act as scores for the classification. If both scores S1 and S2 are lower than a given threshold T_s, then the object view is classified as a bicycle; otherwise, as a motorcycle.

Fig. 3. Side views. The projections p0 and p1 of the difference map, along the direction ω0 and its normal, used to determine the wheel regions R1 and R2 (boxes). Left (bicycle): ω0 = 85.3; the extracted features lead to S1 = 24.3 and S2 = . Right (motorcycle): ω0 = 89.7; the extracted features lead to S1 = 79.8 and S2 = 75.6.

Front/rear view. The idea underlying the classification of a front or rear view of the vehicle is that the tires are typically wider for motorcycles than for bicycles, and this fact can be detected by analyzing a profile obtained from the image portion that contains a wheel of the vehicle (actually the wheel closest to the camera). The feature extraction algorithm works as follows: (1) taking advantage of the information about the position and direction of the vehicle on the road plane, and the expected displacement of the wheels with respect to the vehicle middle point, the real-world location of the wheel closest to the camera is estimated. To focus the analysis on the bottom part of the wheel, a vertical segment of fixed height (40 cm in the experiments) is virtually placed at that location and its back projection onto the

image plane is computed. Let H_w be its length, in pixels, on the image plane; (2) compute the average value of D for each row of the support set B, and let R_b be the first row, starting from the bottom, where the computed value is greater than a given threshold T_p; let R_t be the row with index R_b − H_w; (3) considering the subregion B′ of B whose pixels have a row index in the interval [R_b, R_t] (gray surrounded regions in Figures 4 and 5), project onto the horizontal axis the values of D within the support set B′; (4) analyze the profile of this projection (the bottom ones in Figures 4 and 5), which usually presents a peak originated by the lower part of the wheel, and estimate the width of the peak, W_p (see Figures 4 and 5).

Fig. 4. A front view of a bicycle along with the associated projections. On the left, portion I of the input image; in the middle, D, the difference with the background, and the contour of the support set B. The B′ region (front wheel) is determined from the profile on the right-hand side (average of D values row by row inside B). The profile on the bottom, produced by B′, is then analyzed: in this example H_w = 10 and the peak width W_p has been estimated to be 3 pixels.

If W_p is lower than a threshold T_wp that depends linearly on the distance of the vehicle from the camera, then the view is classified as a bicycle; otherwise, as a motorcycle. The values of the thresholds involved in this first classification algorithm (both for side and front/rear views) have been set during a parameter estimation stage, on a training set of labeled vehicle views. Only the views having score and inside factor greater than prefixed values (0.3 and 0.95, respectively) are considered. A range of reasonable values has been assigned to T_θ, T_s, T_p and T_wp, and the configuration that maximized the classification rate on the training set has been selected.
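A minimal sketch of the side-view scoring described above, under strong simplifying assumptions: the motion direction is taken as horizontal in the image, so the skewed MBR and the percentile trimming are omitted and the wheel regions reduce to the intersections of the bottom third of the bounding box with its left and right thirds; the threshold value t_s = 50 is a placeholder, not the trained one.

```python
import numpy as np

def side_view_scores(D, B):
    """Average of the difference map D over two approximate wheel
    regions of an axis-aligned side view.

    D: background difference map (2-D array).
    B: boolean support mask of the vehicle (same shape as D).
    R1 and R2 are the intersections of the bottom third of the box
    with its left and right thirds (axis-aligned simplification).
    """
    h, w = D.shape
    bottom = slice(2 * h // 3, h)
    regions = [(bottom, slice(0, w // 3)),      # R1: one wheel
               (bottom, slice(2 * w // 3, w))]  # R2: other wheel
    scores = []
    for r in regions:
        m = B[r]
        scores.append(float(D[r][m].mean()) if m.any() else 0.0)
    return scores

def classify_side_view(D, B, t_s=50.0):
    """Bicycle if both wheel-region scores fall below t_s (placeholder)."""
    s1, s2 = side_view_scores(D, B)
    return "bicycle" if (s1 < t_s and s2 < t_s) else "motorcycle"
```

The intuition carried over from the paper is that the background remains largely visible through a bicycle's spoked wheels, keeping D low in R1 and R2, while a motorcycle's engine and bulkier wheels raise it.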
The percentile values, used to remove noise at the tails of the projections, have been estimated by comparing, for a small set of images, the projections of the automatically extracted blobs

Fig. 5. A rear view of a motorcycle along with the associated projections. On the left, I; in the middle, D and the contours of B and B′. In this example H_w is also 10 pixels, but the peak width W_p results to be 11 pixels.

and those of manually labeled blobs. The classification performance of this algorithm, presented in Section 5, suggests that the considered visual cues have sufficiently good discrimination power. They are therefore adopted in the selection of the features of the SVM classifiers described in the following section.

4 The SVM bicycle / motorcycle classifier

Support Vector Machines are based on the learning theory developed by Vapnik (1995). They were introduced as a method for function estimation, time series analysis and variance analysis, and have recently shown great potential for solving several visual learning and pattern recognition problems (Lee and Verri 2003). Focusing on SVMs for two-class classification, a support vector classifier can be seen geometrically as an optimal separating surface (decision surface) between two classes. The theory of SVMs can be found in the literature (e.g. Schölkopf and Smola (2002)). We adopted the SVM technique to model the boundary between the two classes bicycle and motorcycle. The task can be formulated as a classification problem of 3D objects, where only a limited number of 2D views is presented during the training phase. We concentrate on the use of a non-linear classifier defined by a Gaussian kernel function with parameter σ; another parameter of the SVM is the regularization term C. The input to the classifier is a feature vector x related to the visual cues discussed in Section 3. Two SVMs have been trained to classify side views and

front/rear views, respectively. The information related to the wheel regions in the vehicle image is captured by two skewed projection profiles (P0 and P1). Their computation differs in the two contexts, and consists of the following steps (refer to Figures 6 and 7): (1) let ω be the direction of the displacement vector in the image plane; (2) let ω0 be the slope of MBR(ω,5°)(B); let V0, V1, V2, V3 be the vertexes of the MBR, counterclockwise, where the side V0V1 is the lower side in image coordinates, having slope ω0 for side views and slope orthogonal to ω0 for front/rear views; (3) extract two rectangular zones Z_i, i = 1, 2 from the MBR, whose vertexes V0, V1, V′2, V′3 are defined parametrically with respect to factors f_i as V′2 = V1 + f_i(V2 − V1) and V′3 = V0 + f_i(V3 − V0); (4) compute the projection p0 by projecting the region of D enclosed in Z1 along the direction ω0, and p1 by projecting the region of D enclosed in Z2 along the direction normal to ω0; (5) P0 and P1 are then obtained by quantizing p0 and p1 into fixed dimensions D0 and D1. The feature vector x is composed of the concatenation of P0 and P1, and represents a generalization of the features used in the previous classifier.

Fig. 6. Computation of the projections p0 and p1 for a side view of a motorcycle. The MBR and the zones Z1 and Z2 are highlighted.

The machine training has been performed using two sets of vehicle views coming from distinct traffic sequences. The first set has been split, according to the threshold T_θ, to build the training sets for the side view and front/rear view classifiers. Analogously, the second set has been split to generate two cross-validation sets. Next, the four sets have been filtered by removing the object views which have a score or an inside factor less than specific thresholds (0.3 and 0.95,

Fig. 7. Computation of the projections p0 and p1 for a rear view of a bicycle. The MBR and the zones Z1 and Z2 are highlighted.

respectively). In fact, such views typically give rise to outliers. The classifiers have been trained using different values for the thresholds T_θ, f_1, f_2, D_0, D_1 and for the parameters σ and C. The ranges of variability for T_θ, f_1 and f_2 have been set around the values used in the early classifier (25, 0.33, 0.33). The configuration that maximized the classification accuracy on the cross-validation set has been selected.

5 Experimental results

The classifiers presented in the previous sections aim to assign an object view to one of the two categories, bicycle or motorcycle. A single vehicle detected and tracked by scoca is typically described by several views, which can be used together to classify the vehicle. We adopt the following criterion to classify a vehicle as a function of the classification results of its views: assign the vehicle to the class of the majority of its views; if the two classes are equally represented, consider the sums of the scores associated with the views classified as bicycle and motorcycle, respectively, and assign the vehicle to the class having the highest score. All the experiments have been carried out using five real traffic sequences collected at two distinct road intersections. We ran the scoca system on

the image sequences and collected the vehicles assigned to the cycle class by the model-based classification module. This set contains true bicycles and motorcycles along with misclassified objects (pedestrians, wheelchairs, prams, noise generated by shadows, ...) that have been manually excluded. We labeled all of the vehicles using a graphical tool developed to speed up the ground-truth generation and the evaluation of classifier accuracy. In the first experiment, aimed at verifying the discrimination ability of the selected visual features, the early classifier has been applied to a test set containing the vehicles detected in three sequences, while the other two sequences have been used to build the training set for the parameter estimation. The test set contains 45 bicycles and 144 motorcycles, each one described by its set of views (from 3 to 8, depending on the vehicle speed and trajectory through the scene). Table 1 reports the classification results at vehicle level, showing the confusion matrix and the classification error rates. These figures motivated the use of the same kind of features, i.e. related to the wheel regions, for the training of the SVM classifiers. The SVM classifiers have been tested both at view level and at vehicle level. Four sequences have been utilized to build the training set and one for the cross-validation set, used to estimate the values of the parameters. The composition of the resulting training set is reported in Table 2. The classification accuracy has been measured using a leave-one-vehicle-out method on the training set. The reason for this choice, instead of the standard leave-one-out, is to prevent a possible bias in the test results due to the relevant correlation that exists among different views of the same vehicle. Let T be the set of all views of all the vehicles in the training set.
Let T_i be the set of views of the i-th vehicle: the machine is trained on the set T \ T_i and tested on T_i. Each element of T_i is classified and the vehicle is assigned to a category according to the majority criterion. Table 3 reports the classification results of the two SVMs (side and front/rear), both at view level and at vehicle level. The number of support vectors is 256 for the SVM trained on side views, and 126 for the SVM trained on front/rear views. As expected, the accuracy at vehicle level is higher, thanks to the redundancy provided by the multiple views. The major source of classification errors is the inaccurate detection of the vehicle boundary in the preceding localization and tracking steps. This is mainly due to the partial or total inclusion of the vehicle shadow in the blob, or to the inclusion of moving background regions, typically generated by moving leaves or their shadows.

6 Conclusions

In this paper we have described an algorithm for discriminating between bicycles and motorcycles. It is part of a video-based traffic monitoring system that detects, tracks and classifies vehicles at urban road junctions. The algorithm is applied after a model-based classification that is unable to discriminate between the two vehicle classes, but that ensures (with a certain confidence) that the vehicle belongs to one of them. The visual features used by the classifier are computed starting from the vehicle image, the background image, and an estimate of the position and orientation of the vehicle in the real world; these data are provided by other modules of the monitoring system. The algorithm focuses on the image regions that correspond to the wheels of the vehicle, and acts differently depending on the vehicle orientation with respect to the camera view (side or front/rear). The application of a rough classifier has shown that the selected zones and features are discriminative. Support Vector Machines have then been trained using analogous features, based on the skewed projection profiles of the lower part of the vehicle, leading to a global error rate of 6.3% at view level and 3.3% at vehicle level. These figures cannot be directly compared with other works, as this is a relatively unexplored task: as far as we know, no other monitoring system for urban junctions currently exists that is able to classify vehicles including the bicycle class. Considering that in traffic surveillance applications aimed at collecting data for statistical purposes a classification error rate of around 5% is typically accepted, our results can be considered satisfactory. Future plans include the improvement of the classifier input, mainly addressing the shadow detection problem and the management of dynamic background pixels, but also refining the estimation of the vehicle pose.
Other visual features will be explored, such as the detection of the leg movement of cyclists. Moreover, the appropriateness of non-visual features, such as the average vehicle speed, and the utility of defining more contexts will be investigated.

References

Dukesherer, J., Smith, C., 2001. A hybrid Hough-Hausdorff method for recognizing bicycles in natural scenes. In: IEEE International Conference on Systems, Man and Cybernetics.

Foresti, G.L., Micheloni, C., Snidaro, L., 2003. Advanced visual-based traffic monitoring systems for increasing safety in road transportation. Advances in Transportation Studies International Journal, 1(1).

Hu, W., Tan, T., Wang, L., Maybank, S., 2004. A survey on visual surveillance

of object motion and behaviors. IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews, 34(3).

Kastrinaki, V., Zervakis, M., Kalaitzakis, K., 2003. A survey of video processing techniques for traffic applications. Image and Vision Computing, 21(4).

Lee, S.-W., Verri, A. (Eds), 2003. Special Issue on Support Vector Machines for Computer Vision and Pattern Recognition. International Journal of Pattern Recognition and Artificial Intelligence, 17(3).

Messelodi, S., Modena, C.M., Segata, N., Zanin, M., 2005a. A Kalman filter based background updating algorithm robust to sharp illumination changes. In: 13th International Conference on Image Analysis and Processing, Cagliari, Italy.

Messelodi, S., Modena, C.M., Zanin, M., 2005b. A Computer Vision System for the Detection and Classification of Vehicles at Urban Road Intersections. Pattern Analysis and Applications, 8(1-2).

Rogers, S., Papanikolopoulos, N., 2000. A robust Video-Based Bicycle Counting System. In: ITS America 9th Annual Meeting, Washington, DC.

Rogers, S., Papanikolopoulos, N., 1999. Bicycle Counter. Technical Report MN/RC, Artificial Intelligence, Robotics, and Vision Laboratory, University of Minnesota.

Schölkopf, B., Smola, A.J., 2002. Learning with Kernels. MIT Press, Cambridge, Massachusetts.

SRF Consulting Group, 2003. Bicycle and Pedestrian Detection. Research Report 23330, US DoT FHWA and Minnesota DoT.

Vapnik, V.N., 1995. The Nature of Statistical Learning Theory. Springer-Verlag, New York.


List of Tables

1. Classification result at vehicle level of the early classifier applied to three sequences coming from two different junctions.
2. Input to be classified with the SVMs, extracted from four sequences taken from two different junctions.
3. Error rates of the SVM classifiers for side views and for front views, at view level and at vehicle level, for both classes.

Table 1
Classification result.

input class    nr    bicycle    motorcycle    error rate
bicycle        -     -          -             - %
motorcycle     -     -          -             - %
total          -     -          -             - %
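Table 1 is a per-class confusion matrix, and the error rates it reports (per input class and overall) follow directly from such counts. The sketch below shows the computation; the counts used are illustrative only, not the paper's data, and the function name is hypothetical.

```python
def error_rates(confusion):
    """Per-class and total error rates, in percent, from a confusion
    matrix given as {true class: {assigned class: count}}."""
    rates = {}
    total_n = total_err = 0
    for true_cls, row in confusion.items():
        n = sum(row.values())
        # Errors are all views/vehicles of this class assigned elsewhere.
        err = n - row.get(true_cls, 0)
        rates[true_cls] = 100.0 * err / n
        total_n += n
        total_err += err
    rates["total"] = 100.0 * total_err / total_n
    return rates

# Illustrative counts only (not the values of Table 1).
example = {
    "bicycle":    {"bicycle": 95, "motorcycle": 5},
    "motorcycle": {"bicycle": 2,  "motorcycle": 98},
}
```

The same computation, applied once per view and once per tracked vehicle, yields view-level and vehicle-level rates like those in Table 3.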

Table 2
Input to be classified with the SVMs.

                 bicycle    motorcycle    total
Side Views       -          -             -
Front Views      -          -             -
Total Views      -          -             -
Total Vehicles   -          -             -

Table 3
Error rates of the SVM classifiers.

                      global error    bicycle error    motorcycle error
SVM on Side Views     6.2%            10.2%            4.3%
SVM on Front Views    6.2%            6.0%             6.5%
on view level         6.3%            9.5%             4.3%
on vehicle level      3.3%            3.8%             3.0%


More information

Image and Video Quality Assessment Using Neural Network and SVM

Image and Video Quality Assessment Using Neural Network and SVM TSINGHUA SCIENCE AND TECHNOLOGY ISSN 1007-0214 18/19 pp112-116 Volume 13, Number 1, February 2008 Image and Video Quality Assessment Using Neural Network and SVM DING Wenrui (), TONG Yubing (), ZHANG Qishan

More information

Equation to LaTeX. Abhinav Rastogi, Sevy Harris. I. Introduction. Segmentation.

Equation to LaTeX. Abhinav Rastogi, Sevy Harris. I. Introduction. Segmentation. Equation to LaTeX Abhinav Rastogi, Sevy Harris {arastogi,sharris5}@stanford.edu I. Introduction Copying equations from a pdf file to a LaTeX document can be time consuming because there is no easy way

More information

Segmentation

Segmentation Lecture 6: Segmentation 215-13-11 Filip Malmberg Centre for Image Analysis Uppsala University 2 Today What is image segmentation? A smörgåsbord of methods for image segmentation: Thresholding Edge-based

More information

Extracting Layers and Recognizing Features for Automatic Map Understanding. Yao-Yi Chiang

Extracting Layers and Recognizing Features for Automatic Map Understanding. Yao-Yi Chiang Extracting Layers and Recognizing Features for Automatic Map Understanding Yao-Yi Chiang 0 Outline Introduction/ Problem Motivation Map Processing Overview Map Decomposition Feature Recognition Discussion

More information

Image Matching Using Run-Length Feature

Image Matching Using Run-Length Feature Image Matching Using Run-Length Feature Yung-Kuan Chan and Chin-Chen Chang Department of Computer Science and Information Engineering National Chung Cheng University, Chiayi, Taiwan, 621, R.O.C. E-mail:{chan,

More information