DETECTION OF STREET-PARKING VEHICLES USING LINE SCAN CAMERA. Kiyotaka HIRAHARA, Mari MATSUDA, Shunsuke KAMIJO, Katsushi IKEUCHI


DETECTION OF STREET-PARKING VEHICLES USING LINE SCAN CAMERA

Kiyotaka HIRAHARA, Mari MATSUDA, Shunsuke KAMIJO, Katsushi IKEUCHI
Institute of Industrial Science, University of Tokyo
4-6-1 Komaba, Meguro-ku, Tokyo, 153-8505, Japan
Phone: +81-3-5452-6242, Fax: +81-3-5452-6244
E-mail: hirahara@iis.u-tokyo.ac.jp

ABSTRACT

Vehicles parked on streets aggravate traffic problems in urban areas. We propose a method for traffic census that detects such street-parking vehicles using epipolar plane images (EPIs) acquired by a line scan camera. The method calculates the distance between the vehicles and the camera from the slope of the feature paths in an EPI; the feature paths are extracted from the EPI by the Hough transformation. The experimental results show that the proposed method detects vehicles at a rate of about 70%.

INTRODUCTION

In crowded urban areas, vehicles parked on streets occupy part of the road at all times and cause traffic problems such as slow-moving traffic. For traffic census it is very important to obtain the number of such vehicles and the rate of the area they occupy, i.e., the occupancy. At present these conditions are measured manually; in particular, for the actual traffic condition survey in Japan, street-parking vehicles have been counted by investigators riding in measuring vehicles. We are interested in automatically measuring the number of street-parking vehicles and the street occupancy they cause. Related work was performed by Ono et al. (1), who used a scanning laser sensor to obtain the distance to target street-parking vehicles directly; however, they did not use image sensors. This paper describes a method to detect street-parking vehicles from epipolar plane images (EPIs), which are easily obtained by a line scan camera. EPI analysis, first developed by Bolles (2), is a technique for building a three-dimensional description of a static scene from a dense sequence of images.
EPI analysis calculates the depth of features from the slope of the feature paths in EPIs. The fundamental ideas, the features on the sides of vehicles, the proposed detection method, and outdoor experimental results are addressed in the rest of this paper.

EPIPOLAR PLANE IMAGE ANALYSIS

A general stereo configuration with two cameras, as shown in Figure 1, consists of two lens centers, their respective image planes, and a point P in the scene. The plane containing a scene point P and the two lens centers is called the epipolar plane. Each epipolar plane intersects an image plane along a line called the
epipolar line. The line joining the two lens centers intersects each image plane at a point called an epipole. An epipolar plane has an epipolar line on each of the image planes, and this sequence of epipolar lines constructs an image called the epipolar-plane image. The points on an epipolar plane are projected onto the epipolar line on each image plane, so an EPI contains all the information about the features in the epipolar plane, that is, in one slice of the scene.

Figure 1 A general stereo configuration with two cameras modeled as pinholes: a point P, the epipolar plane, the two lens centers with their epipoles, and an epipolar line on each image plane.

In the same way as in two-camera stereo, an epipolar-plane image is also obtained from a sequence of images when a camera's lens center moves in a straight line. When the camera moves in a straight line and is pointed perpendicularly to its trajectory, the epipolar plane is the plane containing a scene point P and the lens center's trajectory, as in the left side of Figure 2. This camera motion is called a lateral motion. The epipolar lines on the image planes for a lateral motion are collinear and parallel to the lens center's trajectory, and they construct an epipolar-plane image, as in the right side of Figure 2. A point P of the scene, called a (scene) feature point, draws a path in the epipolar-plane image, referred to as a feature path. When the camera's velocity in its lateral motion is constant, all feature paths are straight lines in the epipolar-plane image.

Figure 2 An epipolar-plane image for a camera's lateral motion: the lateral motion and the epipolar line on the image plane at each time t1, t2, t3 (left), and the epipolar-plane image composed of the sequence of epipolar lines (right).

During a period Δt = t2 − t1, the projection of the feature point P onto the image plane moves from P'1 to P'2, a distance of Δu in the image plane. The image plane and the feature point P are at distances h and D from the lens center's trajectory, respectively, and the velocity V of the lens center is assumed constant. The ratio of D to h equals the ratio of VΔt to Δu, so the relationship between the depth D of the feature point P from the camera's trajectory and the slope m = Δt/Δu of the feature path in the epipolar-plane image is

D = h (V Δt / Δu) = h V m.

FUNDAMENTAL IDEAS

An epipolar line for a lateral motion, in which the camera is aimed perpendicularly to its path, is a scanning line on a conventional two-dimensional image plane, so it is easy to construct EPIs. Furthermore, by using a line scan camera with a frame rate higher than a normal camera's 30 Hz, EPIs consisting of denser scanning lines are obtained, and the feature paths in them are smoothly continuous. Feature paths of stationary features in the scene are straight lines in EPIs when the velocity of the camera is constant. Such straight feature paths are easy to detect by image processing, so the slope of those lines can be calculated.

We mount a line scan camera on a measuring vehicle, looking sideways, horizontally, and at an appropriate height. Assuming that the measuring vehicle moves parallel to a row of vehicles parked on the street, the camera can take many scans of their sides to obtain EPIs. We apply a line scan camera to obtain EPIs, but EPIs can also be obtained with an ordinary camera if its frame rate is fast enough.

FEATURES ON SIDE OF VEHICLES

A line scan camera should be set up so that its scan plane cuts the street-parking vehicles in the scene horizontally; the cutting plane is then an epipolar plane that includes features of the vehicles. In other words, some of the feature paths in the obtained EPI correspond to feature points on the street-parking vehicles.
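As a sanity check on the depth-from-slope relation D = hVm above, the following sketch (with illustrative numbers that are not from the paper) simulates a lateral motion, projects one feature point into a sequence of scan lines, fits the slope of the resulting feature path, and recovers the depth:

```python
import numpy as np

# Illustrative values only (not the paper's): image-plane distance h,
# camera velocity V, and the true depth D of one stationary feature point.
h = 0.008        # distance from lens center to image plane (m)
V = 2.78         # constant velocity of the lateral motion (m/s)
D_true = 2.0     # depth of the feature point from the trajectory (m)

# Pinhole projection during the motion: u(t) = h * (x0 - V*t) / D.
t = np.linspace(0.0, 0.5, 50)
x0 = 1.0
u = h * (x0 - V * t) / D_true

# The feature path in the (u, t) plane is straight; its slope m = dt/du
# satisfies D = h * V * |m|.
m = np.polyfit(u, t, 1)[0]    # fitted slope dt/du of the feature path
D_est = h * V * abs(m)
print(D_est)                  # recovers the true depth of 2.0 m
```

The fit recovers the depth exactly because the feature path of a stationary point under constant-velocity lateral motion is a perfect straight line, which is the property the rest of the method relies on.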
The camera height decides which features of street-parking vehicles appear in an EPI. In actual measurement the camera receives various kinds of vibration through the vehicle's body, such as engine vibration and sway of the vehicle, so it is important to choose the camera height so that certain vehicle features always appear in the EPI despite the vibration. In order to examine how features appear on the sides of vehicles, 2-D intensity images of saloon-type cars were binarized by the Canny method, with 1's where edges are found and 0's elsewhere, and the features appearing on three horizontal lines at different heights, (a), (b), and (c) in Figure 3, were compared.

Binary features on the higher horizontal line (a) lie partly on the background seen through the windows of the vehicles. Those features have a depth different from that of the vehicle body, so one vehicle in an EPI breaks up into several regions separated by regions of background. Consequently, this height is undesirable for the depth-from-slope method used in this paper.

Binary features on the lower horizontal line (c) lie partly on the vehicle's wheels, whose round shapes are typical features across all types of vehicles. They might, however, appear intermittently in an EPI, because the camera's height changes under the vibration transmitted through the vehicle's body during real measurement. Features such as wheels are therefore susceptible to vibration and not suitable for EPI analysis, while features like vertical lines of vehicles appear robustly in an EPI. Binary features on the middle horizontal line (b) consist of the intersections with two kinds of features: the perpendicular feature lines on the side of a vehicle, and the boundaries of the vehicle against the background. The perpendicular feature lines remain nearly unaffected by vibration, but when they come from the specular surface of the body, their appearance depends on various external factors, such as the illumination conditions, the color of the vehicle, and the reflection of the surroundings, as a comparison of the left and right images in Figure 3 shows. In the left image the vertical border lines of the door show themselves clearly, while in the right image they are difficult to determine. In conclusion, the boundaries of the vehicles on a middle horizontal line such as (b), which are perpendicular, produce stable and desirable feature paths in an EPI.

For saloon-type cars, the best camera height was determined through the above verification. A wide variety of vehicles are parked on streets, so no single camera height can suit all vehicle types. In this paper only saloon-type vehicles are taken into account; they are, however, typical, and the derived camera height is expected to be reasonably effective for other vehicle types as well.

Figure 3 Photographs (upper) and binary images (lower) of the sides of saloon-type vehicles, with the three horizontal lines (a), (b), (c) at different heights. (The two vehicles were parked in a row at the same location at the same time.
The left and right vehicles are parked in the shade and in the sun, respectively.)

GENERATION OF EPIPOLAR PLANE IMAGE

The whole measuring system consists of a line scan camera, an image capturing board
and a computer. The line scan camera is an SP-14-02k30 manufactured by DALSA, which has 2048 pixels and a line frequency of 0.3 to 14 kHz. The image capturing board is an IPM-8540D manufactured by GRAPHIN, and the computer is a PC-AT computer. Figure 4 shows the line scan camera mounted on the measuring vehicle. From the discussion above, a camera height of about 700 mm is best for saloon-type vehicles, so in the measuring experiments the camera is set up horizontally at a height of about 700 mm.

Figure 4 The line scan camera mounted on the measuring vehicle: in the left image the vehicle also carries a conventional camera on its roof, and in the right image the line scan camera is shown close up.

The measuring vehicle, with the line scan camera properly mounted, passes a street-parking vehicle as a measuring target, driving straight along the neighboring lane as Figure 5 illustrates. This motion is a lateral motion, suitable for acquiring 3-D information about the surrounding environment. An EPI is expected to be obtained as indicated on the right of Figure 5: the feature points in areas (a), (b), (c) and (d) of the target vehicle on the left of Figure 5 draw the feature paths (a), (b), (c) and (d) in the EPI, respectively. The feature paths (a) and (d) have the same slope, and the feature paths (b) and (c) have the same slope; the two slopes differ and are proportional to the depths of the corresponding feature points.

Figure 5 Measuring situation and expected EPI: the measuring vehicle passes the street-parking vehicle along the neighboring lane, the camera's field of view sweeps areas (a)-(d), and each feature point draws the corresponding feature path in the EPI.

An EPI is given in Figure 6, as an experimental result targeting a street-parking
vehicle along Route 246 in Tokyo. The target vehicle is a white minivan, as shown in the two upper photographs of Figure 6. The EPI has four feature paths (a), (b), (c) and (d), which correspond to the boundaries of the minivan as mentioned above. In addition, the region between feature paths (b) and (c) corresponds to the side body of the minivan; the side has two perpendicular boundaries of its side door, which draw two perpendicular feature paths in the EPI. Feature paths (b) and (c) are derived from the feature points nearest to the lens center, while the other feature paths (a) and (d) are derived from the feature points farthest from it. The difference between the two depths, which is the width of the minivan, can be calculated and reveals the existence of the minivan.

Figure 6 A target street-parking vehicle (upper) and an EPI (lower) of size 2048 x 16383 pixels; the velocity is 10 km/h.

IMAGE PROCESSING PROCEDURE

The images obtained by the line scan camera are 8-bit gray-scale images. Using image processing techniques, we examine the change of the distance to objects coming up straight in front of the lens center. First, we divide the whole image into segments of a few hundred pixels each in the time direction and analyze each segment in chronological order. These segments are differentiated and binarized by the Canny method to detect edges. In order to identify straight lines in the binary images, the Hough transformation is applied and the most remarkable straight line is detected. The slope of the feature paths is calculated and plotted against time, from which we determine the change of the depth from the lens center. The whole procedure consists of the following three steps, depicted in Figure 7: Image Split, Edge Detection and Line Extraction.

1. Image Split
The EPI is split every several pixels in the time direction, in order to find
feature paths appearing in the EPI in chronological order. In this splitting process there is no duplication and no gap among the split small images.

2. Edge Detection
To find edges in each split image, we use the Canny method, which finds edges by looking for local maxima of the gradient of the image; the gradient is calculated using the derivative of a Gaussian filter. The method uses two thresholds: for Figure 6 the low threshold is 0.0125 and the high threshold is 0.0313, and the standard deviation of the Gaussian filter is σ = 1. The Canny method turns each split image into a binary image with 1's where edges are found and 0's elsewhere.

3. Line Extraction
Straight lines concealed in each split binary image are detected using the Hough transformation, implemented via the Radon function applied to the binary image. The locations of strong peaks in the Radon transform matrix correspond to the locations of straight lines in the original image. A straight line in each split image is represented by x cos θ + y sin θ + ρ = 0, where the origin of the coordinate system is the center of the image, θ is the normal angle of the line with reference to the x-axis, and ρ is the distance of the line from the origin. The range of θ is 1 to 180 degrees, and the range of ρ is within plus or minus half the length of a diagonal of the split image. In the Hough transformation, θ is quantized at 1 degree. Only the strongest peak is searched for and selected, and the corresponding straight line is regarded as the most remarkable feature path in the split binary image. Finally, the slope of the selected feature path gives, approximately, the depth of the corresponding feature point from the lens center at the moment the corresponding image was taken.
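The three steps can be sketched in a few lines of NumPy. Here a plain Hough voting loop stands in for the paper's Radon-function implementation, and a synthetic binary image with one straight feature path stands in for the Canny output of one split image; only the function and variable names are ours:

```python
import numpy as np

def hough_strongest_line(binary):
    """Vote every edge pixel into (theta, rho) space, with the origin at
    the image center and theta quantized at 1 degree, and return the
    normal angle theta (degrees) of the strongest straight line."""
    thetas_deg = np.arange(1, 181)
    ys, xs = np.nonzero(binary)
    h, w = binary.shape
    xs = xs - w / 2.0                     # coordinates relative to center
    ys = ys - h / 2.0
    diag = int(np.hypot(h, w) / 2) + 2    # |rho| <= half the diagonal
    acc = np.zeros((len(thetas_deg), 2 * diag + 1), dtype=int)
    for i, th in enumerate(np.deg2rad(thetas_deg)):
        rho = np.round(xs * np.cos(th) + ys * np.sin(th)).astype(int) + diag
        np.add.at(acc[i], rho, 1)         # one vote per edge pixel
    peak_theta, _ = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas_deg[peak_theta]

# A synthetic "split image": one straight feature path y = 0.3 x,
# standing in for the binary output of the Canny step.
split = np.zeros((100, 300), dtype=bool)
for x in range(300):
    y = int(0.3 * x)
    if y < split.shape[0]:
        split[y, x] = True

theta = hough_strongest_line(split)   # normal angle of the feature path
```

For the line y = 0.3x the normal angle is 180 − arctan(1/0.3) ≈ 107 degrees, which is where the accumulator peaks.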
Figure 7 Image processing diagram: the EPI is split into small images (Image Split), each split image is binarized (Edge Detection), and the binary image is transformed into the (θ, ρ) parameter space, where the strongest peak gives the most remarkable straight line (Line Extraction).
There are some points of concern in the above image processing procedure, explained here. How many pixels it is best to split an EPI by depends mainly on the measuring conditions, such as the velocity of the measuring vehicle and the line frequency of the line scan camera. We examined the depth curves generated by three splitting sizes, 100, 200 and 300 pixels, as shown in Figure 8. In the case where the line frequency is 7 kHz and the velocity of the measuring vehicle is about 20 km/h, the best splitting size turned out to be 300 pixels.

Figure 8 Depth curves (the angle θ of the normal lines plotted against time) for the three splitting sizes 100, 200 and 300 pixels, from left to right (Data No. 16).

Without splitting the EPI, processing such a large image as the whole image in Figure 6 by the Hough transformation at once finds an enormous number of feature paths, and it is extremely difficult, in fact impossible, to arrange them all in order of appearance. Furthermore, since every split image is much wider than it is tall, crosswise lines project many more pixels into the Hough parameter space (equal to the Radon function matrix) than longitudinal lines. If nothing is done, crosswise lines, as opposed to longitudinal lines, tend to be transformed into strong local peaks in the Hough parameter space and always tend to be selected as the most remarkable lines in the original image.
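This bias toward crosswise lines can be seen in a toy example (the dimensions are made up): in a wide image a fully lit horizontal line collects several times as many votes as a fully lit vertical one, while normalizing each peak by the length of its projection line inside the image treats the two equally:

```python
import numpy as np

# A split EPI is much wider than tall, so a crosswise (horizontal) line
# contains far more pixels than a longitudinal (vertical) one.
h, w = 100, 500
img = np.zeros((h, w), dtype=bool)
img[50, :] = True          # crosswise line: 500 edge pixels
img[:, 250] = True         # longitudinal line: 100 edge pixels

crosswise_votes = int(img[50, :].sum())      # 500
longitudinal_votes = int(img[:, 250].sum())  # 100

# Raw peak heights favor the crosswise line 5:1 although both lines are
# equally real. Dividing each peak by the length of its projection line
# inside the image removes the bias: both become a fully lit line.
norm_cross = crosswise_votes / w
norm_long = longitudinal_votes / h
print(norm_cross, norm_long)   # both 1.0 after normalization
```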
To avoid this problem, which derives from the irregular shape of the original image, the original image is transformed onto the Hough parameter space with the number of projected pixels normalized by the length of each projection line, using the Radon function.

DEPTH-FROM-SLOPE

After the image processing procedure described above, the slope of the feature paths in EPIs gives the depth of the corresponding feature points in the scene. In this paper we use the slope of the projective line, from 1 to 180 degrees, instead of the depth between the feature points and the lens center; nevertheless, the calculation of depth from slope is important and is described here. For a feature path of slope m, the distance D between the corresponding feature point and the lens center of the camera is

D = v m p / (2 f tan(θ/2)),

where v (m/s) is the velocity of the measuring vehicle, m is the slope of the feature path, p (pixels) is the number of pixels of the line scan camera, f (Hz) is the frequency of scanning a line, and θ (degrees) is the angle of field of the camera.

Figure 9 Slope of the normal lines of the feature paths plotted against time, with split size 300 pixels; the threshold is drawn as a horizontal line.

ESTIMATION OF THRESHOLD

A threshold value on the depth curve is used to identify a street-parking vehicle. The threshold value depends on the distance between the lens center and the side of a street-parking vehicle: if, say, this distance is 2 meters, it is advisable to set the threshold to about 2 meters plus half the width of a vehicle. However, the distance between the lens center and a street-parking vehicle is not constant and cannot always be known in actual situations: it is almost impossible to know the width of each lane on the measured roads, which lane the measuring vehicle travels along, and where exactly on the lane it travels. Thus, considering a typical situation such as Route 246 in Tokyo, a threshold value can be estimated as follows. Assuming that the measuring vehicle travels through the center of the middle lane and that a street-parking vehicle stands at the side of the left lane,

D ≈ W_left − W_common + (W_middle − W_probe) / 2,

where W_left (m) is the width of the leftmost lane, on which the street-parking vehicle stands, W_middle (m) is the width of the middle lane, along which the measuring vehicle travels, W_common (m) is the width of the street-parking vehicle, and W_probe (m) is the width of the measuring vehicle, as in Figure 10. Route 246 has W_left = 3 meters and W_middle = 3.2 meters. A street-parking vehicle is regarded as a saloon-type vehicle with W_common = 1.8 meters, and the width W_probe of our measuring vehicle is 1.72 meters. As a result, the estimated threshold value is calculated as
D = 1.94 (m).

Figure 10 A traffic scene on Route 246 in Tokyo, with the widths of the two lanes (W_left, W_middle) and of the two vehicles (W_common for the street-parking vehicle near the sidewalk, W_probe for the measuring vehicle).

The slope of a feature path in an epipolar-plane image is proportional to the depth of the corresponding feature point. Corresponding to the estimated threshold value D, the angle φ of a normal line against a feature path is

φ = 90 + (180/π) arctan(m)  (degrees),

and is calculated as

φ = 90 + (180/π) arctan( 2 D f tan(θ/2) / (V p) ),

where 0 < φ ≤ 180 degrees. On Route 246, the estimated threshold values φ with respect to the velocity of the measuring vehicle are summarized in the following table, where p = 2048 (pixels), f = 1.8 and 7 (kHz), and θ = 62 (degrees).

Table 1 Estimated threshold value φ with respect to the measurement velocity.

Velocity V (km/h) / (m/s)    φ (degrees), f = 1.8 kHz    φ (degrees), f = 7 kHz
10 (km/h) / 2.78 (m/s)       126                         161
20 (km/h) / 5.56 (m/s)       110                         145
30 (km/h) / 8.33 (m/s)       103                         134
40 (km/h) / 11.1 (m/s)       100                         125

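The threshold estimates above can be reproduced from the two formulas; in the following sketch the function names are ours, while the parameter values are the paper's Route 246 figures:

```python
import math

def depth_from_slope(m, v, p, f, theta_deg):
    """D = v*m*p / (2*f*tan(theta/2)): depth of a feature point from
    the slope m of its feature path."""
    return v * m * p / (2.0 * f * math.tan(math.radians(theta_deg) / 2.0))

def threshold_angle(D, v, p, f, theta_deg):
    """phi = 90 + arctan(2*D*f*tan(theta/2) / (v*p)) in degrees: the
    normal-line angle corresponding to the threshold depth D."""
    m = 2.0 * D * f * math.tan(math.radians(theta_deg) / 2.0) / (v * p)
    return 90.0 + math.degrees(math.atan(m))

# Lane and vehicle widths on Route 246 (meters).
W_left, W_middle, W_common, W_probe = 3.0, 3.2, 1.8, 1.72
D = W_left - W_common + (W_middle - W_probe) / 2.0   # 1.94 m

# Camera parameters: 2048 pixels, 62-degree angle of field.
p, theta = 2048, 62.0
phi_slow = threshold_angle(D, 10 / 3.6, p, 7000.0, theta)  # Table 1: 161
phi_fast = threshold_angle(D, 40 / 3.6, p, 1800.0, theta)  # Table 1: 100
```

Rounding phi_slow and phi_fast to whole degrees reproduces the 161 and 100 entries of Table 1, and threshold_angle is the exact inverse of depth_from_slope for a given slope.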
OUTDOOR EXPERIMENTAL RESULTS

Our measuring vehicle measured vehicles parked on Route 246 in Tokyo, Japan. As a result of the measurement, an EPI such as Figure 6 was obtained by the line scan camera, a DALSA SP-14-02k30. Figure 6 shows that the vehicle parked on the street appears as three light-gray belts in the middle of the image. The height of the camera is 900 mm, the velocity of the probe vehicle is 2.78 m/s (10 km/h), the number of pixels of the line scan camera is 2048, the line scanning frequency is 7 kHz, and the angle of field of the camera is 62 degrees. After the image processing, Figure 9 provides a plot of the slope of the feature paths, m + 90 degrees, against time; the changes of slope corresponding to the vehicle parked on the street are visible there. The distance D between the side surface of the street-parking vehicle and the measuring vehicle is calculated by the equation to be 1 meter for m = 145 − 90 = 55 degrees.

When the measuring vehicle drives straight at four levels of velocity, 10 km/h, 20 km/h, 30 km/h and 40 km/h, the rate of detection of street-parking vehicles is as summarized in Table 2. In this summary the estimated threshold values are used, and the rate of detection is calculated as the number of vehicles detected by the system with respect to the sum of the number of vehicles actually parked and the number of vehicles detected wrongly. No relationship between the velocity of the measuring vehicle and the rate of detection could be found. The overall rate of detection is 71%.

Table 2 Rate of detection with respect to the four levels of velocity

Velocity of the measuring vehicle (km/h)    Rate of detection of street-parking vehicles (%)
10                                          66
20                                          100
30                                          75
40                                          60

Distortion arises from specular reflection on the sides of target vehicles, from the reflection of the surroundings onto their side surfaces, and from the special structure of some target vehicles. Specular reflection sends a strong light beam into the lens, so that the obtained images exceed the sensitivity range of the CCD.
The reflection of the surroundings causes spurious but remarkable feature paths in EPIs; in the image processing these are detected wrongly, and as a result a wrong depth is calculated. Trucks have a body structure different from common vehicles: a trailer and its tractor may be detected separately as two vehicles.

CONCLUSIONS

We proposed a method of detecting street-parking vehicles using a line scan camera. With the line scan camera we directly obtain EPIs that include features of the target vehicles. Our detection method is based on EPI analysis with a lateral motion of the camera: feature paths are detected in the EPI using the Hough transformation, and their slopes in the EPIs are proportional to the distances between the lens center and the corresponding feature points in the scene.

Threshold values are used for the detection of street-parking vehicles. They depend on the road widths, the velocity of the measuring vehicle, the camera's scanning frequency, and so on. We therefore introduced estimated threshold values and applied them to the measured data; using them, our detection rate was 71%. The outdoor experiments show that our method is usable and effective for detecting street-parking vehicles in the real environment. Illumination conditions cause distortion, for example specular reflection, because vehicle bodies have mirror-like surfaces, and reflection of the surroundings onto the body. Furthermore, distortion also arises from the special structure of some target vehicles, e.g. trailers and tractors.

FUTURE WORKS

In this paper a line of the CCD consists of 2048 pixels; however, only feature points straight in front of the lens center are needed, so only a few hundred pixels in the middle of the line are acquired and processed in the image analysis, which is enough to detect street-parking vehicles. A laser sensor can measure distance directly and robustly in the outdoor environment, but an image sensor such as a camera provides much more information. Thus we will fuse an image sensor and a laser sensor and improve the whole system. In addition, since EPIs can also be obtained by an ordinary camera, our results show the possibility that a high frame-rate 2-D image sensor can detect street-parking vehicles.

REFERENCES

(1) S. Ono, et al., "Parking-Vehicle Detection System by using Laser Range Sensor Mounted on a Probe Car," Proc. of Intelligent Vehicle Symposium (IV 2002), Poster Session 2, 2002.

(2) R. C. Bolles, et al., "Epipolar-Plane Image Analysis: An Approach to Determining Structure from Motion," International Journal of Computer Vision, vol. 1, pp. 7-55, 1987.