DETECTION OF STREET-PARKING VEHICLES USING LINE SCAN CAMERA. Kiyotaka HIRAHARA, Mari MATSUDA, Shunsuke KAMIJO, Katsushi IKEUCHI


DETECTION OF STREET-PARKING VEHICLES USING LINE SCAN CAMERA

Kiyotaka HIRAHARA, Mari MATSUDA, Shunsuke KAMIJO, Katsushi IKEUCHI
Institute of Industrial Science, University of Tokyo, Komaba, Meguro-ku, Tokyo, Japan

ABSTRACT

Vehicles parked on streets make traffic problems in urban areas worse. We propose a method to detect those street-parking vehicles for traffic census, using epipolar plane images (EPIs) acquired by a line scan camera. Our method is based on calculating the distance between the vehicles and the camera from the slope of the feature paths in an EPI. The feature paths are extracted from an EPI by the Hough transformation. The experimental results show that our proposed method can detect vehicles at a detection rate of about 70%.

INTRODUCTION

In crowded urban areas, vehicles parked on streets occupy a certain area of the streets at any time and cause traffic problems such as slow-moving traffic. For a traffic census it is very important to obtain the number of those vehicles and the rate of the area they occupy, the occupancy. In the present traffic census these actual conditions are measured manually; in particular, for actual traffic condition surveys in Japan, street-parking vehicles have been counted manually by investigators riding in measuring vehicles. We are interested in automatic measurement of the number of those vehicles and of the street occupancy caused by them. Related work was performed by Ono et al. (1), who used a scanning laser sensor to obtain the distance to target street-parking vehicles directly; however, they did not use image sensors.

This paper describes a method to detect street-parking vehicles from epipolar plane images (EPIs), which are easily obtained by a line scan camera. EPI analysis, first developed by Bolles (2), is a technique for building a three-dimensional description of a static scene from a dense sequence of images. EPI analysis calculates the depth of features from the slope of the feature paths in EPIs.
Fundamental ideas, features of the sides of vehicles, our proposed detection method, and outdoor experimental results are addressed in the rest of this paper.

EPIPOLAR PLANE IMAGE ANALYSIS

A general stereo configuration with two cameras, as shown in Figure 1, consists of two lens centers, their respective image planes, and a point P in the scene. The plane containing a point P in the scene and the two lens centers is called the epipolar plane. Each epipolar plane intersects each image plane along a line, called the

epipolar line. The line joining the two lens centers intersects each image plane at a point called an epipole. An epipolar plane has several epipolar lines, one on each image plane. This sequence of epipolar lines constructs an image, called the epipolar-plane image. The points on an epipolar plane are projected onto the epipolar line on each image plane, so an EPI contains all the information about the features in the epipolar plane, that is, in a slice of the scene.

Figure 1 A general stereo configuration with two cameras modeled as pinholes: two image planes, a scene point P, the epipolar plane, the epipolar lines, the epipoles, and the two lens centers.

In the same way as in two-camera stereo, an epipolar-plane image is also obtained from a sequence of images when a camera's lens center moves in a straight line. When a camera moves in a straight line, pointed perpendicularly to its trajectory, the epipolar plane is the plane containing a point P of the scene and the trajectory of the lens center, as in the left side of Figure 2. This camera motion is called a lateral motion. The epipolar lines on these image planes for a lateral motion are collinear and parallel to the trajectory of the lens center. These epipolar lines construct an epipolar-plane image, as in the right side of Figure 2. A point P of the scene, called a (scene) feature point, draws a path in the epipolar-plane image, referred to as a feature path. When the camera's velocity in its lateral motion is constant, all feature paths are straight lines in the epipolar-plane image.

Figure 2 An epipolar-plane image for a camera's lateral motion: the camera's lateral motion and the epipolar line on its image plane at each time t1, t2, t3 (left); the epipolar-plane image composed of the sequence of epipolar lines plotted against time (right).
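The lateral-motion geometry of Figure 2 can be checked numerically. The sketch below is a minimal simulation with made-up values for the image-plane distance h, the camera velocity V, and the feature depth D: it projects a fixed scene point into successive scan lines by similar triangles and recovers its depth from the slope of the resulting straight feature path, using the similar-triangles relation D = h V m.

```python
import numpy as np

# Hypothetical parameters: image-plane distance h, camera velocity V,
# and the true depth D of a stationary feature point.
h, V, D_true = 0.01, 5.0, 12.0            # meters, m/s, meters

t = np.linspace(0.0, 1.0, 50)             # scan times (s)
u = h * V * t / D_true                    # image position by similar triangles

# The feature path u(t) is a straight line; its slope m = dt/du gives
# back the depth via D = h * V * m.
m = np.polyfit(u, t, 1)[0]
D_recovered = h * V * m
```

Fitting t against u here stands in for the Hough-based slope extraction used later in the paper.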

For a period Δt = t2 − t1, the projection of the feature point P onto the image plane moves from P1 to P2, over a distance Δu in the image plane. The image plane and the feature point P are at distances h and D from the trajectory of the lens center, respectively. It is assumed that the velocity V of the lens center is constant. The ratio of D to h equals the ratio of VΔt to Δu, so the relationship between the depth D of the feature point P from the camera's trajectory and the slope m = Δt/Δu of the feature path in the epipolar-plane image is derived as the following equation:

    D = h (V Δt / Δu) = h V m.

FUNDAMENTAL IDEAS

An epipolar line for a lateral motion, in which the camera is aimed perpendicularly to its path, is a scanning-line image on a conventional two-dimensional image plane, so it is easy to construct EPIs. Furthermore, by using a line scan camera with a frame rate higher than a normal camera's rate of 30 Hz, EPIs consisting of denser scanning lines are obtained, and the feature paths in the EPIs are smoothly continuous. Feature paths of stationary features in the scene are straight lines in EPIs when the velocity of the camera is constant. Such straight feature paths are easy to detect by image processing, so the slope of those straight lines can be calculated.

We mount a line scan camera on a measuring vehicle so that it looks sideways, horizontally, and at an appropriate height. Assuming that the measuring vehicle moves parallel to a row of vehicles parked on the street, the camera can take many scans of their sides to obtain EPIs. We have applied a line scan camera to obtain EPIs, but EPIs can also be obtained with a usual camera if its frame rate is high enough.

FEATURES ON SIDE OF VEHICLES

A line scan camera should be set up to cut street-parking vehicles in the scene horizontally, so that the cutting plane is an epipolar plane including their features. In other words, some of the feature paths in the obtained EPI correspond to feature points of the street-parking vehicles.
The height of the camera decides what features of street-parking vehicles appear in an EPI. During actual measurement the camera receives various kinds of vibration through the vehicle's body, such as engine vibration and sway of the vehicle, so it is important to choose the camera height so that certain vehicle features always appear in the EPI captured under vibration. In order to examine how features appear on the sides of vehicles, 2-D intensity images of saloon-type cars are binarized by the Canny method, with 1's where edges are found and 0's elsewhere, and we then compare the features appearing on three horizontal straight lines at different heights, (a), (b), and (c) in Figure 3. Binary features on the higher horizontal line (a) partly lie on the background seen through the windows of the vehicles. Those features have a depth different from that of the vehicle body, and one vehicle in an EPI then consists of several divided regions separated by regions of background. Consequently, this height is undesirable for the depth-from-slope method used in this paper.

Binary features on the lower horizontal line (c) partly lie on the vehicle's wheels, whose round shapes are typical features among all types of vehicles. They might, however, appear intermittently in an EPI, because the camera height changes under vibration through the vehicle's body in real measurement. Features such as wheels would be susceptible to vibration and are not suitable for EPI analysis, while features like vertical lines of vehicles appear robustly in an EPI. Binary features on the middle horizontal line (b) consist of the intersections with two kinds of features: one is the perpendicular feature lines on the side of a vehicle, and the other is the boundaries of the vehicle against the background. The perpendicular feature lines remain nearly unaffected by vibration. When they come from the specular surface of the body, however, their appearance depends on various external factors, such as illumination conditions, the color of the vehicle, and reflections of the surroundings, as a comparison between the left and right images in Figure 3 shows. In the left image the vertical border lines of the door show themselves clearly, while in the right image they are difficult to determine. In conclusion, the boundaries of the vehicles on a middle horizontal line such as (b), which are perpendicular, produce stable and desirable feature paths in an EPI. For saloon-type cars, the best camera height was determined through the above verification. A wide variety of vehicles are parked on streets, so no single camera height can be suitable for sensing all types of vehicles. In this paper only saloon-type vehicles are taken into account; they are, however, typical, and the derived camera height may also be effective for other vehicle types.

Figure 3 Photographs (upper) and binary images (lower) of the sides of saloon-type vehicles, with the three horizontal lines (a), (b), and (c) at different heights. (These two vehicles form a row in the same location at the same time.
The left and right vehicles are parked in the shade and in the sun, respectively.)

GENERATION OF EPIPOLAR PLANE IMAGE

The whole measuring system consists of a line scan camera, an image capturing board,

and a computer. The line scan camera is an SP-14-02k30 manufactured by DALSA, which has 2048 pixels and a line frequency of 0.3 to 14 kHz. The image capturing board is an IPM-8540D manufactured by GRAPHIN, and the computer is a PC-AT computer. Figure 4 shows the line scan camera mounted on the measuring vehicle. From the above discussion, the best camera height for saloon-type vehicles is about 700 mm, so in the measuring experiments the camera is set up horizontally at a height of about 700 mm.

Figure 4 A line scan camera mounted on the measuring vehicle: in the left image the vehicle also carries another, conventional camera on its roof, and in the right image the line scan camera is zoomed in.

The measuring vehicle, with the line scan camera properly mounted, passes a street-parking vehicle as a measuring target, driving straight along the neighboring lane as Figure 5 illustrates. This motion is a lateral motion, which allows acquisition of 3-D information about the surrounding environment. An EPI is expected to be obtained as indicated in the right of Figure 5. Feature points in areas (a), (b), (c), and (d) of the target vehicle in the left of Figure 5 draw feature paths (a), (b), (c), and (d) in the EPI in the right of Figure 5, respectively. Feature paths (a) and (d) have the same slope, and feature paths (b) and (c) have the same slope; the two slopes differ, being proportional to the depths of the corresponding feature points.

Figure 5 Measuring situation and expected EPI: the measuring vehicle passes a street-parking vehicle beside a sidewalk, with the camera's field of view covering areas (a) to (d); each feature point draws the corresponding feature path in the EPI.

An EPI is given in Figure 6 as an experimental result, targeting a street-parking

vehicle along Route 246 in Tokyo. The target vehicle is a white minivan, as shown in the two upper photographs of Figure 6. The EPI has four feature paths (a), (b), (c), and (d), which correspond to the boundaries of the minivan as mentioned above. In addition, the region between feature paths (b) and (c) corresponds to the side body of the minivan. The side has two perpendicular boundaries of its side door, which draw two perpendicular feature paths in the EPI. Feature paths (b) and (c) are derived from the feature points nearest to the lens center, while the other feature paths (a) and (d) are derived from the feature points farthest from the lens center. The difference between the two depths, which is the width of the minivan, can be calculated and indicates the existence of the minivan.

Figure 6 A target street-parking vehicle (upper) and an EPI (lower) of 2048 x 16383 pixels; the velocity is 10 km/h.

IMAGE PROCESSING PROCEDURE

Images obtained by the line scan camera are 8-bit gray-scale images. Using image processing techniques, we examine the change of the distance to objects coming up straight in front of the lens center. First, we divide the whole image into segments of a few hundred pixels each in the direction of time and analyze the segments in order of time. These images are differentiated and binarized by the Canny method to detect edges. In order to identify straight lines in the binary images, the Hough transformation is applied and the most remarkable straight line is detected. The slope of those feature paths is calculated and plotted against time, and then we determine the change of the depth from the lens center. The whole procedure consists of the following three steps, depicted in Figure 7: Image Split, Edge Detection, and Line Extraction.

1. Image Split
The EPI is split every few hundred pixels in the direction of time, in order to find

feature paths coming up in the EPI in chronological order. In this splitting process there is no duplication and no gap among the split sub-images.

2. Edge Detection
To find edges in each split image, we use the Canny method, which finds edges by looking for local maxima of the gradient of the image. The gradient is calculated using the derivative of a Gaussian filter. The method uses two thresholds, a low one and a high one, for the EPI in Figure 6, and the standard deviation of the Gaussian filter is σ = 1. Each split image results in a binary image with 1's where edges are found and 0's elsewhere.

3. Line Extraction
Straight lines concealed in each split binary image are detected using the Hough transformation, implemented by applying the Radon function to the binary image. The locations of strong peaks in the Radon transform matrix correspond to the locations of straight lines in the original image. A straight line in each split image is represented by x cos θ + y sin θ = ρ, where the origin of the coordinate system is the center of the image, θ is the angle of the line's normal with respect to the x-axis, and ρ is the distance of the line from the origin. The range of θ is 1 to 180 degrees, and the range of ρ is within plus or minus half the length of a diagonal of the split image. In the Hough transformation, θ is quantized in steps of 1 degree. Only the strongest peak is searched for and selected, and the corresponding straight line is regarded as the most remarkable feature path in the split binary image. Finally, the slope of the selected feature path approximately gives the depth of the corresponding feature point from the lens center at the moment the corresponding image was taken.

Figure 7 Image processing diagram: the EPI is split into sub-images (Image Split), each sub-image is binarized (Edge Detection), and the strongest peak in the Hough parameter space (θ, ρ) yields the strongest line (Line Extraction).
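Steps 1 and 2 above can be sketched as follows. This is a simplified stand-in: the split is plain slicing along the time axis, and a bare gradient-magnitude threshold replaces the full Canny detector (which additionally smooths with a Gaussian, thins the edges, and applies the double threshold).

```python
import numpy as np

def split_epi(epi, size):
    # Step 1: split along the time axis with no duplication and no gap
    # (a final partial chunk is kept as-is).
    return [epi[:, t:t + size] for t in range(0, epi.shape[1], size)]

def edge_binary(img, thresh):
    # Step 2: binarize by gradient magnitude -- a crude stand-in for the
    # Canny method used in the paper.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

# A toy EPI: a bright band standing in for a passing vehicle body.
epi = np.zeros((64, 1000))
epi[:, 300:650] = 180.0

chunks = split_epi(epi, 300)          # widths 300, 300, 300, 100
edges = [edge_binary(c, 50.0) for c in chunks]
```

The third chunk contains the trailing boundary of the band, so its binary image is the only non-empty one besides any chunk that straddles an intensity step.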

There are some points of concern in the above image processing procedure, which are explained here. How many pixels it is best to split an EPI by depends mainly on the measuring conditions, such as the velocity of the measuring vehicle and the line frequency of the line scan camera. We examine depth curves generated with three splitting sizes, 100, 200, and 300 pixels, as shown in Figure 8. In the case where the line frequency is 7 kHz and the velocity of the measuring vehicle is about 20 km/h, the best splitting size turns out to be 300 pixels.

Figure 8 Theta curves (degrees of the normal lines plotted against time) for each splitting size: 100, 200, and 300 pixels, from left to right.

Without splitting the EPI, processing an image as large as the whole EPI in Figure 6 with the Hough transformation at once finds an enormous number of feature paths, and it is extremely difficult, in fact impossible, to arrange them in order of appearance. Since every split image is much wider than it is tall, crosswise lines project many more pixels onto the Hough parameter space (which equals the Radon function matrix) than longitudinal lines. If nothing is done, crosswise lines, as opposed to longitudinal lines, tend to produce the local peaks in the Hough parameter space and are always selected as the most remarkable lines in the original image. In order to avoid this problem, which derives from the irregular shape of the original image, the original image is transformed onto the Hough parameter space as the number of projected pixels with respect to the length of each projection line, using the Radon function.
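The length normalization described above can be sketched with a small Hough accumulator in which each (θ, ρ) cell is divided by the number of image pixels lying on that projection line, obtained by accumulating an all-ones image. This is a simplified stand-in for the paper's Radon-function implementation.

```python
import numpy as np

def hough_votes(binary, thetas, diag):
    # Accumulate votes for lines x*cos(theta) + y*sin(theta) = rho.
    acc = np.zeros((2 * diag + 1, thetas.size))
    ys, xs = np.nonzero(binary)
    rhos = np.round(np.outer(xs, np.cos(thetas))
                    + np.outer(ys, np.sin(thetas))).astype(int)
    for j in range(thetas.size):
        acc[:, j] = np.bincount(rhos[:, j] + diag, minlength=2 * diag + 1)
    return acc

def strongest_line(binary):
    # Normal angles 1..180 degrees in 1-degree steps, as in the paper.
    thetas = np.deg2rad(np.arange(1, 181))
    diag = int(np.ceil(np.hypot(*binary.shape)))
    votes = hough_votes(binary, thetas, diag)
    # Projection-line length inside the image = votes of an all-ones
    # image; dividing by it keeps long crosswise chords of a wide split
    # image from dominating the peak search.
    length = hough_votes(np.ones(binary.shape, bool), thetas, diag)
    r, j = np.unravel_index(np.argmax(votes / np.maximum(length, 1)),
                            votes.shape)
    return j + 1, r - diag                # (theta in degrees, rho in pixels)
```

On a wide binary image containing a long but partial horizontal line and a shorter full-height vertical line, the raw accumulator prefers the horizontal line, while the normalized one picks the vertical line, which fills its entire projection chord.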
DEPTH-FROM-SLOPE

After the image processing procedure described above, the slope of the feature paths in EPIs gives the depth of the corresponding feature points in the scene. In this paper we use the normal angle of the extracted line, from 1 to 180 degrees, instead of the depth between the feature points and the lens center. The calculation of depth from slope is nevertheless important, and it is described here. For a feature path of slope m, the distance D between the corresponding feature point and the lens center of the camera can be calculated by the following equation:

    D = v m p / (2 f tan(θ/2)),

where v (m/s) is the velocity of the measuring vehicle, m is the slope of the feature path, p (pixels) is the number of pixels of the line scan camera, f (Hz) is the

frequency of scanning a line, and θ (degrees) is the angle of field of the camera.

Figure 9 Normal angle of the feature paths plotted against time, with splitting size 300 pixels; the estimated threshold is drawn as a horizontal line.

ESTIMATION OF THRESHOLD

A threshold value on the depth curve is used to identify a street-parking vehicle. The threshold value depends on the distance between the lens center and the side of a street-parking vehicle. If, for example, this distance is 2 meters, it is advisable to set the threshold to about 2 meters plus half the width of a vehicle. However, the distance between the lens center and a street-parking vehicle is not constant and cannot always be known in actual situations: it is almost impossible to know the width of each lane on the roads being measured, which lane the measuring vehicle travels along, and where exactly in the lane it travels. Thus, considering a typical situation such as Route 246 in Tokyo, a threshold value can be estimated by the following equation. Assuming that the measuring vehicle travels through the center of the middle lane and that a street-parking vehicle stops at the side of the left lane,

    D = W_left − W_common + (W_middle − W_probe) / 2,

where W_left (m) is the width of the leftmost lane, on which the street-parking vehicle stands, W_middle (m) is the width of the middle lane, along which the measuring vehicle travels, W_common (m) is the width of the street-parking vehicle, and W_probe (m) is the width of the measuring vehicle, as in Figure 10. Route 246 actually has W_left = 3 meters and W_middle = 3.2 meters. A street-parking vehicle is regarded as a saloon-type vehicle with W_common = 1.8 meters, and the width of our measuring vehicle, W_probe, is 1.72 meters. As a result, the estimated threshold value is calculated as

D = 1.94 (m).

Figure 10 A traffic scene of Route 246 in Tokyo and the widths of the two lanes and the vehicles: a street-parking vehicle of width W_common on the leftmost lane of width W_left beside a sidewalk, and the measuring vehicle of width W_probe in the middle lane of width W_middle.

The slope of a feature path in an epipolar-plane image is proportional to the depth of the corresponding feature point. Corresponding to the estimated threshold value D, the angle φ of a normal line to a feature path is equal to (180/π) tan⁻¹(m) + 90 (degrees), and is calculated as below:

    φ = (180/π) tan⁻¹(m) + 90 = (180/π) tan⁻¹( 2 D f tan(θ/2) / (V p) ) + 90,

where 0 < φ ≤ 180 degrees. For Route 246, the estimated threshold values φ with respect to the velocity of the measuring vehicle are summarized in the following table, where p = 2048 (pixels), f = 1.8 and 7 (kHz), and θ = 62 (degrees).

Table 1 Estimated threshold value φ with respect to the measurement velocity.
    Velocity V (km/h) / (m/s)    φ (degrees), f = 1.8 kHz    φ (degrees), f = 7 kHz
    10 (km/h) / 2.78 (m/s)       -                           -
    20 (km/h) / 5.56 (m/s)       -                           -
    30 (km/h) / 8.33 (m/s)       -                           -
    40 (km/h) / 11.1 (m/s)       -                           -
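Under the stated Route 246 assumptions, the threshold and its angle form can be reproduced numerically. This is a sketch: the per-velocity φ values it prints are recomputed from the two equations above with the stated parameters, not taken from the paper's table.

```python
import math

# Lane and vehicle widths on Route 246 (meters), as given in the text.
W_left, W_middle, W_common, W_probe = 3.0, 3.2, 1.8, 1.72

# Estimated threshold depth D = W_left - W_common + (W_middle - W_probe)/2.
D = W_left - W_common + (W_middle - W_probe) / 2          # -> 1.94 m

def phi_threshold(V, f, D=D, p=2048, theta_deg=62.0):
    # Normal angle (degrees) of a feature path at depth D, for measuring
    # velocity V (m/s) and line frequency f (Hz).
    m = 2 * D * f * math.tan(math.radians(theta_deg) / 2) / (V * p)
    return math.degrees(math.atan(m)) + 90.0

for V in (2.78, 5.56, 8.33, 11.1):                        # 10..40 km/h
    print(V, round(phi_threshold(V, 1800), 1), round(phi_threshold(V, 7000), 1))
```

Faster driving or a lower line frequency flattens the feature paths, so the angle threshold moves toward 90 degrees.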

OUTDOOR EXPERIMENTAL RESULTS

Our measuring vehicle measured vehicles parked along Route 246 in Tokyo, Japan. As a result of the measurement, an EPI such as Figure 6 is obtained by the line scan camera, a DALSA SP-14-02k30. Figure 6 shows that the vehicle parked on the street appears as three light-gray belts in the middle of the image. The height of the camera is 900 mm, the velocity of the probe vehicle is 2.78 m/s (10 km/h), the number of pixels of the line scan camera is 2048, the line scanning frequency is 7 kHz, and the angle of field of the camera is 62 degrees. After the image processing, Figure 9 provides a plot of the normal angle of the feature paths, tan⁻¹(m) + 90 degrees, against time. The changes of the slope corresponding to the vehicle parked on the street can be seen in Figure 9. The distance D between the side surface of the street-parking vehicle and the measuring vehicle is calculated by the equation to be about 1 meter for tan⁻¹(m) = 55 degrees.

When the measuring vehicle drives straight at four levels of velocity, 10 km/h, 20 km/h, 30 km/h, and 40 km/h, the rate of detection of street-parking vehicles is as summarized in Table 2. In this summary the estimated threshold values are used, and the rate of detection is calculated as the number of vehicles detected by the system with respect to the sum of the number of vehicles actually parked and the number of vehicles detected wrongly. No relationship between the velocity of the measuring vehicle and the rate of detection could be found. The overall rate of detection is 71%.

Table 2 Rate of detection with respect to the four levels of velocity.
    Velocity of the measuring vehicle (km/h)            10    20    30    40
    Rate of detection of street-parking vehicles (%)    -     -     -     -

Distortion arises from the specular reflection on the sides of target vehicles, the reflection of the surroundings onto the side surfaces of target vehicles, and the special structure of some target vehicles. The specular reflection puts a strong light beam into the lens, and the obtained images then exceed the range of the CCD's sensitivity. The reflection of the surroundings causes remarkable feature paths in EPIs.
In the image processing these are detected wrongly, and as a result wrong depths are calculated. Trucks have a body structure different from common vehicles: a trailer and its tractor may be detected separately as two vehicles.

CONCLUSIONS

We proposed a method of detecting street-parking vehicles using a line scan camera. With the line scan camera we directly obtain EPIs including features of the target vehicles. Our detection method is based on EPI analysis with a lateral motion of the camera. Feature paths are detected in the EPI using the Hough transformation, and their slopes in the EPIs are proportional to the distance between the lens center and the corresponding feature points in the scene.

Threshold values are utilized for the detection of street-parking vehicles. The threshold values depend on the road widths, the velocity of the measuring vehicle, the camera's scanning frequency, and so on. We therefore introduced estimated threshold values and applied them to the measured data. Using the estimated threshold values, our detection rate was 71%. The outdoor experiments show that our method is usable and effective for detecting those vehicles in a real environment. Illumination conditions cause distortion, for example specular reflection, because vehicle bodies have mirror-like surfaces, and reflection of the surroundings onto the body. Furthermore, distortion also arises from the special structure of some target vehicles, e.g. a trailer and tractor.

FUTURE WORKS

In this paper a line of the CCD consists of 2048 pixels; however, only feature points straight in front of the lens center are needed, so only a few hundred pixels in the middle of the line are acquired and processed in the image analysis. This is enough to detect street-parking vehicles. A laser sensor can measure the distance directly and robustly in an outdoor environment, but an image sensor like a camera provides much more information. Thus we will fuse an image sensor and a laser sensor and improve the whole system. In addition, since EPIs can also be obtained by a usual camera, our results show the possibility that a high frame-rate 2-D image sensor can detect street-parking vehicles.

REFERENCES

(1) S. Ono, et al., Parking-Vehicle Detection System by using Laser Range Sensor Mounted on a Probe Car, Proc. of Intelligent Vehicle Symposium (IV 2002), Poster Session 2.
(2) R. C. Bolles, et al., Epipolar-Plane Image Analysis: An Approach to Determining Structure from Motion, International Journal of Computer Vision, vol. 1, pp. 7-55.


More information

Biomedical Image Analysis. Point, Edge and Line Detection

Biomedical Image Analysis. Point, Edge and Line Detection Biomedical Image Analysis Point, Edge and Line Detection Contents: Point and line detection Advanced edge detection: Canny Local/regional edge processing Global processing: Hough transform BMIA 15 V. Roth

More information

(Refer Slide Time: 0:32)

(Refer Slide Time: 0:32) Digital Image Processing. Professor P. K. Biswas. Department of Electronics and Electrical Communication Engineering. Indian Institute of Technology, Kharagpur. Lecture-57. Image Segmentation: Global Processing

More information

Advanced Stamping Manufacturing Engineering, Auburn Hills, MI

Advanced Stamping Manufacturing Engineering, Auburn Hills, MI RECENT DEVELOPMENT FOR SURFACE DISTORTION MEASUREMENT L.X. Yang 1, C.Q. Du 2 and F. L. Cheng 2 1 Dep. of Mechanical Engineering, Oakland University, Rochester, MI 2 DaimlerChrysler Corporation, Advanced

More information

DETECTION OF 3D POINTS ON MOVING OBJECTS FROM POINT CLOUD DATA FOR 3D MODELING OF OUTDOOR ENVIRONMENTS

DETECTION OF 3D POINTS ON MOVING OBJECTS FROM POINT CLOUD DATA FOR 3D MODELING OF OUTDOOR ENVIRONMENTS DETECTION OF 3D POINTS ON MOVING OBJECTS FROM POINT CLOUD DATA FOR 3D MODELING OF OUTDOOR ENVIRONMENTS Tsunetake Kanatani,, Hideyuki Kume, Takafumi Taketomi, Tomokazu Sato and Naokazu Yokoya Hyogo Prefectural

More information

Lane Markers Detection based on Consecutive Threshold Segmentation

Lane Markers Detection based on Consecutive Threshold Segmentation ISSN 1746-7659, England, UK Journal of Information and Computing Science Vol. 6, No. 3, 2011, pp. 207-212 Lane Markers Detection based on Consecutive Threshold Segmentation Huan Wang +, Mingwu Ren,Sulin

More information

HOUGH TRANSFORM. Plan for today. Introduction to HT. An image with linear structures. INF 4300 Digital Image Analysis

HOUGH TRANSFORM. Plan for today. Introduction to HT. An image with linear structures. INF 4300 Digital Image Analysis INF 4300 Digital Image Analysis HOUGH TRANSFORM Fritz Albregtsen 14.09.2011 Plan for today This lecture goes more in detail than G&W 10.2! Introduction to Hough transform Using gradient information to

More information

Monocular Vision Based Autonomous Navigation for Arbitrarily Shaped Urban Roads

Monocular Vision Based Autonomous Navigation for Arbitrarily Shaped Urban Roads Proceedings of the International Conference on Machine Vision and Machine Learning Prague, Czech Republic, August 14-15, 2014 Paper No. 127 Monocular Vision Based Autonomous Navigation for Arbitrarily

More information

Optics. a- Before the beginning of the nineteenth century, light was considered to be a stream of particles.

Optics. a- Before the beginning of the nineteenth century, light was considered to be a stream of particles. Optics 1- Light Nature: a- Before the beginning of the nineteenth century, light was considered to be a stream of particles. The particles were either emitted by the object being viewed or emanated from

More information

AUTOMATIC DRAWING FOR TRAFFIC MARKING WITH MMS LIDAR INTENSITY

AUTOMATIC DRAWING FOR TRAFFIC MARKING WITH MMS LIDAR INTENSITY AUTOMATIC DRAWING FOR TRAFFIC MARKING WITH MMS LIDAR INTENSITY G. Takahashi a, H. Takeda a, Y. Shimano a a Spatial Information Division, Kokusai Kogyo Co., Ltd., Tokyo, Japan - (genki_takahashi, hiroshi1_takeda,

More information

HOUGH TRANSFORM CS 6350 C V

HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM The problem: Given a set of points in 2-D, find if a sub-set of these points, fall on a LINE. Hough Transform One powerful global method for detecting edges

More information

Chapter 32 Light: Reflection and Refraction. Copyright 2009 Pearson Education, Inc.

Chapter 32 Light: Reflection and Refraction. Copyright 2009 Pearson Education, Inc. Chapter 32 Light: Reflection and Refraction Units of Chapter 32 The Ray Model of Light Reflection; Image Formation by a Plane Mirror Formation of Images by Spherical Mirrors Index of Refraction Refraction:

More information

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level

More information

CLASSIFICATION FOR ROADSIDE OBJECTS BASED ON SIMULATED LASER SCANNING

CLASSIFICATION FOR ROADSIDE OBJECTS BASED ON SIMULATED LASER SCANNING CLASSIFICATION FOR ROADSIDE OBJECTS BASED ON SIMULATED LASER SCANNING Kenta Fukano 1, and Hiroshi Masuda 2 1) Graduate student, Department of Intelligence Mechanical Engineering, The University of Electro-Communications,

More information

A COMPETITION BASED ROOF DETECTION ALGORITHM FROM AIRBORNE LIDAR DATA

A COMPETITION BASED ROOF DETECTION ALGORITHM FROM AIRBORNE LIDAR DATA A COMPETITION BASED ROOF DETECTION ALGORITHM FROM AIRBORNE LIDAR DATA HUANG Xianfeng State Key Laboratory of Informaiton Engineering in Surveying, Mapping and Remote Sensing (Wuhan University), 129 Luoyu

More information

Lecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19

Lecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19 Lecture 17: Recursive Ray Tracing Where is the way where light dwelleth? Job 38:19 1. Raster Graphics Typical graphics terminals today are raster displays. A raster display renders a picture scan line

More information

Mapping textures on 3D geometric model using reflectance image

Mapping textures on 3D geometric model using reflectance image Mapping textures on 3D geometric model using reflectance image Ryo Kurazume M. D. Wheeler Katsushi Ikeuchi The University of Tokyo Cyra Technologies, Inc. The University of Tokyo fkurazume,kig@cvl.iis.u-tokyo.ac.jp

More information

Rectangle Positioning Algorithm Simulation Based on Edge Detection and Hough Transform

Rectangle Positioning Algorithm Simulation Based on Edge Detection and Hough Transform Send Orders for Reprints to reprints@benthamscience.net 58 The Open Mechanical Engineering Journal, 2014, 8, 58-62 Open Access Rectangle Positioning Algorithm Simulation Based on Edge Detection and Hough

More information

Stereo Image Rectification for Simple Panoramic Image Generation

Stereo Image Rectification for Simple Panoramic Image Generation Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,

More information

specular diffuse reflection.

specular diffuse reflection. Lesson 8 Light and Optics The Nature of Light Properties of Light: Reflection Refraction Interference Diffraction Polarization Dispersion and Prisms Total Internal Reflection Huygens s Principle The Nature

More information

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth Common Classification Tasks Recognition of individual objects/faces Analyze object-specific features (e.g., key points) Train with images from different viewing angles Recognition of object classes Analyze

More information

A METHOD OF MAP MATCHING FOR PERSONAL POSITIONING SYSTEMS

A METHOD OF MAP MATCHING FOR PERSONAL POSITIONING SYSTEMS The 21 st Asian Conference on Remote Sensing December 4-8, 2000 Taipei, TAIWA A METHOD OF MAP MATCHIG FOR PERSOAL POSITIOIG SSTEMS Kay KITAZAWA, usuke KOISHI, Ryosuke SHIBASAKI Ms., Center for Spatial

More information

Lecture 7: Most Common Edge Detectors

Lecture 7: Most Common Edge Detectors #1 Lecture 7: Most Common Edge Detectors Saad Bedros sbedros@umn.edu Edge Detection Goal: Identify sudden changes (discontinuities) in an image Intuitively, most semantic and shape information from the

More information

Computer Vision. Image Segmentation. 10. Segmentation. Computer Engineering, Sejong University. Dongil Han

Computer Vision. Image Segmentation. 10. Segmentation. Computer Engineering, Sejong University. Dongil Han Computer Vision 10. Segmentation Computer Engineering, Sejong University Dongil Han Image Segmentation Image segmentation Subdivides an image into its constituent regions or objects - After an image has

More information

Laser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR

Laser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR Mobile & Service Robotics Sensors for Robotics 3 Laser sensors Rays are transmitted and received coaxially The target is illuminated by collimated rays The receiver measures the time of flight (back and

More information

Digital Image Processing COSC 6380/4393

Digital Image Processing COSC 6380/4393 Digital Image Processing COSC 6380/4393 Lecture 21 Nov 16 th, 2017 Pranav Mantini Ack: Shah. M Image Processing Geometric Transformation Point Operations Filtering (spatial, Frequency) Input Restoration/

More information

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching

More information

DTU M.SC. - COURSE EXAM Revised Edition

DTU M.SC. - COURSE EXAM Revised Edition Written test, 16 th of December 1999. Course name : 04250 - Digital Image Analysis Aids allowed : All usual aids Weighting : All questions are equally weighed. Name :...................................................

More information

Conceptual Physics Fundamentals

Conceptual Physics Fundamentals Conceptual Physics Fundamentals Chapter 14: PROPERTIES OF LIGHT This lecture will help you understand: Reflection Refraction Dispersion Total Internal Reflection Lenses Polarization Properties of Light

More information

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press,   ISSN ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information

More information

Conceptual Physics 11 th Edition

Conceptual Physics 11 th Edition Conceptual Physics 11 th Edition Chapter 28: REFLECTION & REFRACTION This lecture will help you understand: Reflection Principle of Least Time Law of Reflection Refraction Cause of Refraction Dispersion

More information

Study on Gear Chamfering Method based on Vision Measurement

Study on Gear Chamfering Method based on Vision Measurement International Conference on Informatization in Education, Management and Business (IEMB 2015) Study on Gear Chamfering Method based on Vision Measurement Jun Sun College of Civil Engineering and Architecture,

More information

Edge and local feature detection - 2. Importance of edge detection in computer vision

Edge and local feature detection - 2. Importance of edge detection in computer vision Edge and local feature detection Gradient based edge detection Edge detection by function fitting Second derivative edge detectors Edge linking and the construction of the chain graph Edge and local feature

More information

3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera

3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera 3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera Shinichi GOTO Department of Mechanical Engineering Shizuoka University 3-5-1 Johoku,

More information

Image Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania

Image Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania Image Formation Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 18/03/2014 Outline Introduction; Geometric Primitives

More information

Edge detection. Stefano Ferrari. Università degli Studi di Milano Elaborazione delle immagini (Image processing I)

Edge detection. Stefano Ferrari. Università degli Studi di Milano Elaborazione delle immagini (Image processing I) Edge detection Stefano Ferrari Università degli Studi di Milano stefano.ferrari@unimi.it Elaborazione delle immagini (Image processing I) academic year 2011 2012 Image segmentation Several image processing

More information

Tracking Trajectories of Migrating Birds Around a Skyscraper

Tracking Trajectories of Migrating Birds Around a Skyscraper Tracking Trajectories of Migrating Birds Around a Skyscraper Brian Crombie Matt Zivney Project Advisors Dr. Huggins Dr. Stewart Abstract In this project, the trajectories of birds are tracked around tall

More information

DEVELOPMENT OF POSITION MEASUREMENT SYSTEM FOR CONSTRUCTION PILE USING LASER RANGE FINDER

DEVELOPMENT OF POSITION MEASUREMENT SYSTEM FOR CONSTRUCTION PILE USING LASER RANGE FINDER S17- DEVELOPMENT OF POSITION MEASUREMENT SYSTEM FOR CONSTRUCTION PILE USING LASER RANGE FINDER Fumihiro Inoue 1 *, Takeshi Sasaki, Xiangqi Huang 3, and Hideki Hashimoto 4 1 Technica Research Institute,

More information

1 (5 max) 2 (10 max) 3 (20 max) 4 (30 max) 5 (10 max) 6 (15 extra max) total (75 max + 15 extra)

1 (5 max) 2 (10 max) 3 (20 max) 4 (30 max) 5 (10 max) 6 (15 extra max) total (75 max + 15 extra) Mierm Exam CS223b Stanford CS223b Computer Vision, Winter 2004 Feb. 18, 2004 Full Name: Email: This exam has 7 pages. Make sure your exam is not missing any sheets, and write your name on every page. The

More information

Light and the Properties of Reflection & Refraction

Light and the Properties of Reflection & Refraction Light and the Properties of Reflection & Refraction OBJECTIVE To study the imaging properties of a plane mirror. To prove the law of reflection from the previous imaging study. To study the refraction

More information

Lecture 9: Hough Transform and Thresholding base Segmentation

Lecture 9: Hough Transform and Thresholding base Segmentation #1 Lecture 9: Hough Transform and Thresholding base Segmentation Saad Bedros sbedros@umn.edu Hough Transform Robust method to find a shape in an image Shape can be described in parametric form A voting

More information

Filtering Images. Contents

Filtering Images. Contents Image Processing and Data Visualization with MATLAB Filtering Images Hansrudi Noser June 8-9, 010 UZH, Multimedia and Robotics Summer School Noise Smoothing Filters Sigmoid Filters Gradient Filters Contents

More information

Issues with Curve Detection Grouping (e.g., the Canny hysteresis thresholding procedure) Model tting They can be performed sequentially or simultaneou

Issues with Curve Detection Grouping (e.g., the Canny hysteresis thresholding procedure) Model tting They can be performed sequentially or simultaneou an edge image, nd line or curve segments present Given the image. in Line and Curves Detection 1 Issues with Curve Detection Grouping (e.g., the Canny hysteresis thresholding procedure) Model tting They

More information

Structured Light. Tobias Nöll Thanks to Marc Pollefeys, David Nister and David Lowe

Structured Light. Tobias Nöll Thanks to Marc Pollefeys, David Nister and David Lowe Structured Light Tobias Nöll tobias.noell@dfki.de Thanks to Marc Pollefeys, David Nister and David Lowe Introduction Previous lecture: Dense reconstruction Dense matching of non-feature pixels Patch-based

More information

ACTIVITY TWO CONSTANT VELOCITY IN TWO DIRECTIONS

ACTIVITY TWO CONSTANT VELOCITY IN TWO DIRECTIONS 1 ACTIVITY TWO CONSTANT VELOCITY IN TWO DIRECTIONS Purpose The overall goal of this activity is for students to analyze the motion of an object moving with constant velocity along a diagonal line. In this

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

3D-OBJECT DETECTION METHOD BASED ON THE STEREO IMAGE TRANSFORMATION TO THE COMMON OBSERVATION POINT

3D-OBJECT DETECTION METHOD BASED ON THE STEREO IMAGE TRANSFORMATION TO THE COMMON OBSERVATION POINT 3D-OBJECT DETECTION METHOD BASED ON THE STEREO IMAGE TRANSFORMATION TO THE COMMON OBSERVATION POINT V. M. Lisitsyn *, S. V. Tikhonova ** State Research Institute of Aviation Systems, Moscow, Russia * lvm@gosniias.msk.ru

More information

CITS 4402 Computer Vision

CITS 4402 Computer Vision CITS 4402 Computer Vision A/Prof Ajmal Mian Adj/A/Prof Mehdi Ravanbakhsh, CEO at Mapizy (www.mapizy.com) and InFarm (www.infarm.io) Lecture 02 Binary Image Analysis Objectives Revision of image formation

More information

Motion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures

Motion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures Now we will talk about Motion Analysis Motion analysis Motion analysis is dealing with three main groups of motionrelated problems: Motion detection Moving object detection and location. Derivation of

More information

Flexible Calibration of a Portable Structured Light System through Surface Plane

Flexible Calibration of a Portable Structured Light System through Surface Plane Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured

More information

Research on the Measurement Method of the Detection Range of Vehicle Reversing Assisting System

Research on the Measurement Method of the Detection Range of Vehicle Reversing Assisting System Research on the Measurement Method of the Detection Range of Vehicle Reversing Assisting System Bowei Zou and Xiaochuan Cui Abstract This paper introduces the measurement method on detection range of reversing

More information

CIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS

CIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS CIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS Setiawan Hadi Mathematics Department, Universitas Padjadjaran e-mail : shadi@unpad.ac.id Abstract Geometric patterns generated by superimposing

More information

Detection and Classification of Painted Road Objects for Intersection Assistance Applications

Detection and Classification of Painted Road Objects for Intersection Assistance Applications Detection and Classification of Painted Road Objects for Intersection Assistance Applications Radu Danescu, Sergiu Nedevschi, Member, IEEE Abstract For a Driving Assistance System dedicated to intersection

More information

Experiments with Edge Detection using One-dimensional Surface Fitting

Experiments with Edge Detection using One-dimensional Surface Fitting Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,

More information

Edge detection. Convert a 2D image into a set of curves. Extracts salient features of the scene More compact than pixels

Edge detection. Convert a 2D image into a set of curves. Extracts salient features of the scene More compact than pixels Edge Detection Edge detection Convert a 2D image into a set of curves Extracts salient features of the scene More compact than pixels Origin of Edges surface normal discontinuity depth discontinuity surface

More information

DEVELOPMENT OF THE IMAGE PROCESSING VEHICLE DETECTOR FOR INTERSECTIONS

DEVELOPMENT OF THE IMAGE PROCESSING VEHICLE DETECTOR FOR INTERSECTIONS EVELOPMENT OF THE IMGE PROESSING VEHILE ETETOR FOR INTERSETIONS Yoshihiro Sakamoto 1*, Koichiro Kajitani 1, Takeshi Naito 1 and Shunsuke Kamijo 2 1.Omron orporation, 2-2-1 Nishikusatsu, Kusatsu-shi, Shiga,

More information

LANE DEPARTURE WARNING SYSTEM FOR VEHICLE SAFETY

LANE DEPARTURE WARNING SYSTEM FOR VEHICLE SAFETY LANE DEPARTURE WARNING SYSTEM FOR VEHICLE SAFETY 1 K. Sravanthi, 2 Mrs. Ch. Padmashree 1 P.G. Scholar, 2 Assistant Professor AL Ameer College of Engineering ABSTRACT In Malaysia, the rate of fatality due

More information

The Ray model of Light. Reflection. Class 18

The Ray model of Light. Reflection. Class 18 The Ray model of Light Over distances of a terrestrial scale light travels in a straight line. The path of a laser is now the best way we have of defining a straight line. The model of light which assumes

More information

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA

More information

Answers to practice questions for Midterm 1

Answers to practice questions for Midterm 1 Answers to practice questions for Midterm Paul Hacking /5/9 (a The RREF (reduced row echelon form of the augmented matrix is So the system of linear equations has exactly one solution given by x =, y =,

More information

Sensor Modalities. Sensor modality: Different modalities:

Sensor Modalities. Sensor modality: Different modalities: Sensor Modalities Sensor modality: Sensors which measure same form of energy and process it in similar ways Modality refers to the raw input used by the sensors Different modalities: Sound Pressure Temperature

More information

Robot vision review. Martin Jagersand

Robot vision review. Martin Jagersand Robot vision review Martin Jagersand What is Computer Vision? Computer Graphics Three Related fields Image Processing: Changes 2D images into other 2D images Computer Graphics: Takes 3D models, renders

More information

Complex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors

Complex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors Complex Sensors: Cameras, Visual Sensing The Robotics Primer (Ch. 9) Bring your laptop and robot everyday DO NOT unplug the network cables from the desktop computers or the walls Tuesday s Quiz is on Visual

More information

Reflection & Mirrors

Reflection & Mirrors Reflection & Mirrors Geometric Optics Using a Ray Approximation Light travels in a straight-line path in a homogeneous medium until it encounters a boundary between two different media A ray of light is

More information

Fundamental Technologies Driving the Evolution of Autonomous Driving

Fundamental Technologies Driving the Evolution of Autonomous Driving 426 Hitachi Review Vol. 65 (2016), No. 9 Featured Articles Fundamental Technologies Driving the Evolution of Autonomous Driving Takeshi Shima Takeshi Nagasaki Akira Kuriyama Kentaro Yoshimura, Ph.D. Tsuneo

More information

Experiment 8 Wave Optics

Experiment 8 Wave Optics Physics 263 Experiment 8 Wave Optics In this laboratory, we will perform two experiments on wave optics. 1 Double Slit Interference In two-slit interference, light falls on an opaque screen with two closely

More information

Applications of Piezo Actuators for Space Instrument Optical Alignment

Applications of Piezo Actuators for Space Instrument Optical Alignment Year 4 University of Birmingham Presentation Applications of Piezo Actuators for Space Instrument Optical Alignment Michelle Louise Antonik 520689 Supervisor: Prof. B. Swinyard Outline of Presentation

More information

Spatio temporal Segmentation using Laserscanner and Video Sequences

Spatio temporal Segmentation using Laserscanner and Video Sequences Spatio temporal Segmentation using Laserscanner and Video Sequences Nico Kaempchen, Markus Zocholl and Klaus C.J. Dietmayer Department of Measurement, Control and Microtechnology University of Ulm, D 89081

More information

EN1610 Image Understanding Lab # 3: Edges

EN1610 Image Understanding Lab # 3: Edges EN1610 Image Understanding Lab # 3: Edges The goal of this fourth lab is to ˆ Understanding what are edges, and different ways to detect them ˆ Understand different types of edge detectors - intensity,

More information

CS223b Midterm Exam, Computer Vision. Monday February 25th, Winter 2008, Prof. Jana Kosecka

CS223b Midterm Exam, Computer Vision. Monday February 25th, Winter 2008, Prof. Jana Kosecka CS223b Midterm Exam, Computer Vision Monday February 25th, Winter 2008, Prof. Jana Kosecka Your name email This exam is 8 pages long including cover page. Make sure your exam is not missing any pages.

More information

Absolute Scale Structure from Motion Using a Refractive Plate

Absolute Scale Structure from Motion Using a Refractive Plate Absolute Scale Structure from Motion Using a Refractive Plate Akira Shibata, Hiromitsu Fujii, Atsushi Yamashita and Hajime Asama Abstract Three-dimensional (3D) measurement methods are becoming more and

More information

Computer Vision. Coordinates. Prof. Flávio Cardeal DECOM / CEFET- MG.

Computer Vision. Coordinates. Prof. Flávio Cardeal DECOM / CEFET- MG. Computer Vision Coordinates Prof. Flávio Cardeal DECOM / CEFET- MG cardeal@decom.cefetmg.br Abstract This lecture discusses world coordinates and homogeneous coordinates, as well as provides an overview

More information

Ground Plane Motion Parameter Estimation For Non Circular Paths

Ground Plane Motion Parameter Estimation For Non Circular Paths Ground Plane Motion Parameter Estimation For Non Circular Paths G.J.Ellwood Y.Zheng S.A.Billings Department of Automatic Control and Systems Engineering University of Sheffield, Sheffield, UK J.E.W.Mayhew

More information

A Symmetry Operator and Its Application to the RoboCup

A Symmetry Operator and Its Application to the RoboCup A Symmetry Operator and Its Application to the RoboCup Kai Huebner Bremen Institute of Safe Systems, TZI, FB3 Universität Bremen, Postfach 330440, 28334 Bremen, Germany khuebner@tzi.de Abstract. At present,

More information

What is Frequency Domain Analysis?

What is Frequency Domain Analysis? R&D Technical Bulletin P. de Groot 9/3/93 What is Frequency Domain Analysis? Abstract: The Zygo NewView is a scanning white-light interferometer that uses frequency domain analysis (FDA) to generate quantitative

More information

Hypothesis Generation of Instances of Road Signs in Color Imagery Captured by Mobile Mapping Systems

Hypothesis Generation of Instances of Road Signs in Color Imagery Captured by Mobile Mapping Systems Hypothesis Generation of Instances of Road Signs in Color Imagery Captured by Mobile Mapping Systems A.F. Habib*, M.N.Jha Department of Geomatics Engineering, University of Calgary, Calgary, AB, Canada

More information

Epipolar geometry contd.

Epipolar geometry contd. Epipolar geometry contd. Estimating F 8-point algorithm The fundamental matrix F is defined by x' T Fx = 0 for any pair of matches x and x in two images. Let x=(u,v,1) T and x =(u,v,1) T, each match gives

More information