IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, VOL. 9, NO. 3, SEPTEMBER 2008, p. 406

Scanning Laser Radar-Based Target Position Designation for Parking Aid System

Ho Gi Jung, Member, IEEE, Young Ha Cho, Pal Joo Yoon, and Jaihie Kim

Abstract: Recently, customers have shown a growing interest in parking aid products. A parking aid system consists of target position designation, path planning, and parking guidance by user interface or path tracking by active steering. For target position designation, various sensors and signal processing technologies have been tried. In the case of parallel parking, an ultrasonic sensor-based method plays a dominant role. In the case of perpendicular parking, a graphical user interface (GUI)-based method and a parking slot marking-based method have been commercialized. However, the methods developed for the recognition of free parking space between vehicles have their respective drawbacks. This paper proposes a method for the recognition of free parking space between vehicles using scanning laser radar. The proposed method consists of range data preprocessing, corner detection, and target parking position designation. The authors developed a novel corner-detection method consisting of rectangular corner detection and round corner detection. The newly developed corner detection is unaffected by cluster orientation and the range data interval and is robust to noise. Experimental results showed that, even in situations where other methods failed, the proposed scanning laser radar-based method could designate the target parking position in viable free parking space. The recognition rate was 98.21%, and the average processing time was about 600 ms. Finally, it is argued that the proposed method will eventually be a practical solution because of the decreasing price of scanning laser radar and multiple-function integration strategies.
Index Terms: Driver-assist system (DAS), parking aid system, scanning laser radar, target position designation.

I. INTRODUCTION

RECENTLY, customers have shown a growing interest in parking aid products. According to J. D. Power's 2001 Emerging Technologies Study, 66% of customers indicated that they were likely to purchase parking aid products [1]. The "Top 10 High-Tech Car Safety Technologies," published by T. Tellem, included the rearview camera [2]. Such customer interest has been connected to actual purchases. In 2003, Toyota launched the Prius with an optional intelligent parking assist that automates the steering maneuver of backup parking; the option was chosen by about 80% of Prius buyers [3]. Based on customers' increasing interest and the successful introduction of early products, many car and component manufacturers are competitively preparing the release of more parking aid products [4], [5].

Manuscript received August 15, 2007; revised January 10, 2008 and March 15, 2008. The Associate Editor for this paper was L. Vlacic. H. G. Jung is with Global R&D H.Q., MANDO Corporation, Yongin, Korea, and also with the School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea (e-mail: hgjung@mando.com; hgjung@yonsei.ac.kr). Y. H. Cho and P. J. Yoon are with Global R&D H.Q., MANDO Corporation, Yongin, Korea (e-mail: haya0709@mando.com; pjyoon@mando.com). J. Kim is with the School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea (e-mail: jhkim@yonsei.ac.kr). Color versions of one or more of the figures in this paper are available online. Digital Object Identifier /TITS
Various types of parking aid products are being developed:
1) displaying the predicted path based on the steering angle, with distance markings graphically overlaid on the rearview image [6], [7];
2) displaying an overhead view of the area around the subjective vehicle, either by mosaicking images captured from multiple cameras pointed in various directions or by mosaicking consecutive images captured from a rearward camera together with obstacle information detected by ultrasonic sensors [8], [9];
3) informing the driver of the required steering maneuver by a visual or audio interface [10]–[13];
4) automating the steering maneuver to drive the subjective vehicle to an initial position required for a safe and convenient parking operation [14];
5) automating the steering maneuver from an initial position to the target parking position [3]–[6], [13], [15]–[19];
6) fully automatic parking [20], [21].

The preferred type of parking aid product varies according to the customer's regional characteristics. In Europe, parallel parking is dominant, whereas in Asia, customers are more interested in perpendicular parking. On the other hand, in the U.S., there is great demand for backward monitoring and private garage solutions.

A parking aid system consists of target position designation, path planning, and parking guidance by user interface or path tracking by active steering. The methods of target position designation can be divided into four categories.
1) User-interface-based method: An interactive method with steering maneuver and augmented display [7], locating the target position with arrow buttons on a graphical user interface [6], or moving and rotating the target position by a drag-and-drop operation like a cursor [22].
2) Parking slot marking-based method: This automatically recognizes a parking slot by image understanding [23]–[26] and can utilize hints provided by the driver through a touch screen [25].
3) Method based on free space between parked vehicles: This category includes the method proposed in this paper and is explained in detail in the following paragraphs.
4) Infrastructure-based method: This designates the target position utilizing a local Global Positioning System, a digital map, and communication with a parking management system [11], [12].

Again, the method based on the free space between parked vehicles can be divided into seven categories according to the type of sensor and the signal processing technology used.
1) Ultrasonic sensor-based method: This is the most common target position designation method for parallel parking. The system collects range data as the vehicle passes by a free parking space and then registers the range data using odometry to construct a depth map of the region beside the subjective vehicle. To precisely measure the edges of the free parking space, Siemens developed a new sensor with a modified sensing area, that is, horizontally narrow and vertically wide [16]. Linköping University and Toyota both utilized the correlation between multiple range data sets, using multilateration [15], [28] and rotational correction [27], respectively.
2) Short-range radar (SRR) sensor-based method: For parallel parking, the SRR sensor was tested instead of the ultrasonic sensor [10]. A method that improves the angular accuracy with a synthetic aperture radar algorithm was proposed [30].
3) Single-image understanding-based method: This uses pattern-recognition-based free space detection; horizontal-edge-based vehicle position detection has also been tried experimentally [31].
4) Motion stereo-based method: IMRA Europe developed a system that provides the driver with a virtual image from the optimal viewpoint by intermediate view reconstruction. The system reconstructs 3-D information via odometry and features tracked through consecutive images captured while the subjective vehicle is moving [32]–[35]. Shur et al. proposed a system that can recognize 3-D information from consecutive images, even without the help of odometry, and then designate the target position [36].
5) Binocular stereo vision-based method: The system in [37] reconstructs 3-D information by feature-based stereo matching and an iterative closest-point algorithm with respect to a vehicle model; it then designates the target position. The system in [38] separates the feature-based stereo matching results into ground points and obstacle points according to the precalibrated homography of the ground surface and searches for the optimal target position nearest the parking slot markings and farthest from obstacles.
6) Light stripe projection-based method: The system recognizes 3-D information by analyzing the light stripe made by a light plane projector and reflected back from objects. It was proposed as a solution for dark underground parking lots, where passive vision-based methods usually fail [39].
7) Scanning laser radar-based method: Schanz et al. vertically installed the scanning laser radar on the side of the subjective vehicle; the radar then collected range data while passing by the free parking space. They proposed a system that constructed a depth map by registering range data with odometry and then recognized free parking space. In this case, the scanning laser radar was used as a precise ultrasonic sensor with a narrow field of view (FOV) [20], [40], [41]. The CyCab project horizontally installed the scanning laser radar on the front end of the subjective vehicle, which recognized the locations of parked vehicles; these locations were utilized for path planning and simultaneous localization and mapping [42]. The authors have already proposed a novel driver-assist system, called the integrated side/rear safety system, which horizontally installs one scanning laser radar on the left side of the subjective vehicle's rear end to incorporate four functions: blind spot detection, rear collision warning, target position designation for parallel parking, and target position designation for perpendicular parking [43].
Although the methods developed earlier achieved partial success, each has its drawbacks. Vision-based methods, including manual designation, cannot be used in dark illumination conditions. In particular, specular reflection and the smooth surface of the typical vehicle degrade the performance of binocular stereo and motion stereo. Moreover, if the color of a vehicle is dark, such as black, no feature extractor can detect useful feature points on the vehicle surface. Active stripe projection-based methods cannot be used outdoors during the day because sunlight overpowers the entire spectral range of the imaging device [39]. It is reported that the ultrasonic sensor acquires practically useful range data in parallel parking situations because the incident angle between the sensor and the side facet of the objective vehicle is approximately zero. However, in perpendicular parking situations, the incident angle between the ultrasonic sensor and the side facets of adjacent vehicles is so large that range recognition usually fails [29]. Although SRR is expected to robustly detect the existence of vehicles, it is not suitable for detecting the vehicle boundary on the side near the subjective vehicle. In general, SRR has low angular accuracy and outputs only a limited number of range data. Furthermore, its outputs are not deterministic because the response strengths are considerably sensitive to the object's shape, orientation, and reflectance [30]. Schanz et al. and the CyCab project can be considered good examples of using the advantages of scanning laser radar, as it can precisely recognize the boundary of a parked vehicle both in the daytime and at night. Although Schanz et al. used scanning laser radar as a precise side range sensor, their method still had drawbacks related to odometry-based registration, such as potential inaccuracy, a restricted driving path during free space sensing, and memory- and computing-intensive grid operations [20], [40], [41].

Furthermore, although the CyCab project used almost the same configuration as the authors' proposal, its vehicle boundary recognition relied on the impractical assumption that all parked vehicles belong to a single vehicle class [42]. The earlier work of the authors showed that "L"-shaped template matching could robustly recognize the boundary information of parked vehicles [43]. Such an approach differs from adaptive-cruise-control-oriented scanning laser radar applications, which focus on simple invariant description and the detection and tracking of vehicles ahead at far distances [44].

This paper proposes a scanning laser radar-based target position designation method for perpendicular parking situations. The proposed method uses only one range data set without odometry and assumes that a vehicle appears as an "L"-shaped cluster in the range data. The target parking position is expected to be the nearest free parking space between parked vehicles. The proposed algorithm consists of three phases: preprocessing of range data, corner detection, and target parking position designation. The preprocessing of range data consists of noise elimination, occlusion detection, and cluster recognition. Corner detection consists of two phases: rectangular corner detection and round corner detection. Target parking position designation consists of main reference vehicle recognition, subreference vehicle recognition, and target position establishment. Compared with the earlier algorithm [43], round corner detection is added to precisely recognize the round front ends of vehicles. Furthermore, because the newly developed corner detection methods use the fewest assumptions and a normalized fitness measure, they prove robust to various noises and situations. Target parking position designation is also improved to consider the surrounding environment when detecting a free parking slot. The algorithm, tested in various situations, overcame almost all conditions that were critical to other methods. In Section VI, the authors discuss the expected drawback of the proposed method, that is, the expensive sensor price, which can be compensated by multifunction integration and is expected to be relieved in the near future by decreasing prices.

II. PREPROCESSING OF RANGE DATA

Preprocessing eliminates the noise of range data from the scanning laser radar and outputs clusters, which are the inputs of corner detection.

A. Removal of Invalid and Isolated Data

Fig. 1 shows a typical situation wherein a driver is trying to park the subjective vehicle between parked vehicles, which is the main situation considered in this paper. Fig. 1(a) shows an image captured by the rearview camera of the subjective vehicle, and Fig. 1(b) depicts the range data captured by the scanning laser radar installed on the left side of the subjective vehicle's rear end. In Fig. 1(b), the x-axis represents the scanning angle, and the y-axis represents the range value. Because the scanning laser radar outputs only one range value for each scanning (or pan) angle, the range data can be stored and processed in a 1-D array with a fixed length. Preprocessing eliminates invalid data having a zero range value and isolated data caused by random noise. An isolated data point is defined as one whose minimum distance to its two neighboring data points, that is, the left and right neighbors, is larger than a threshold, for example, 0.5 m. Fig. 2 shows the result of invalid and isolated data removal.

Fig. 1. Situation wherein the free parking space is between parked vehicles. (a) Rearview image. (b) Range data.

Fig. 2. Range data after the removal of invalid data and isolated data.

B. Occlusion Detection

An occlusion is defined as a point where consecutive range values are discontinuous, and a sequence of continuous valid range data between two occlusions is defined as a cluster. While investigating the range data arranged by scanning angle, a transition from invalid to valid data is recognized as the left end of a cluster and a vice-versa transition as

the right end of a cluster. In other words, if the difference between two consecutive range values [the (n−1)th and nth range data] is larger than a threshold, for example, 0.5 m, then the (n−1)th data point is recognized as the right end of a cluster, and the nth data point is recognized as the left end of the next cluster. Fig. 3 shows the result of occlusion detection. The left and right ends of the clusters are represented by circles and rectangles, respectively. Fig. 3(b) shows the valid range data and recognized occlusions in the Cartesian coordinate system. The subjective vehicle is depicted as a filled rectangle, and its left rear end is located at the coordinates (0, 0).

Fig. 3. Occlusion detection result. (a) Polar coordinate system. (b) Cartesian coordinate system.

C. Fragmentary Cluster Removal

Clusters with a very small size are assumed to be caused by noise. If either the number of range data points from the left end to the right end of a cluster is less than a threshold (e.g., 5) or the geometrical length is less than a threshold (e.g., 0.25 m), then the cluster is regarded as a fragment and is eliminated. If the neighboring clusters of the eliminated cluster are near and codirectional, linear interpolation connects the two neighboring clusters. Fig. 4 shows the result of fragmentary cluster removal.

Fig. 4. Result of fragmentary cluster removal. (a) Polar coordinate system. (b) Cartesian coordinate system.

III. CORNER DETECTION

It is assumed that the range data from the scanning laser radar are acquired in a parking lot. Because the major recognition targets, such as vehicles and pillars, appear as "L"-shaped clusters in the range data, simple corner detection can recognize vehicles and pillars. On the other hand, objects other than vehicles

and pillars do not produce "L"-shaped clusters, and these can be ignored during the following procedures. Corner detection consists of rectangular corner detection and round corner detection, as depicted in Fig. 5. A large number of vehicles have rectangular front and rear ends, which can be fitted by a rectangular corner; the rectangular corner is represented by two orthogonally meeting lines. Some vehicles have front ends with such large curvature that they should be fitted by a round corner; the round corner is represented by a line and an ellipse meeting at a point. If the fitting error of rectangular corner detection is less than a threshold, for example, 0.2, the cluster is recognized as a rectangular corner. If the fitting error is larger than this first threshold but less than a marginal threshold, for example, 0.6, the cluster is retested by round corner detection. If the fitting error of round corner detection is less than the threshold, for example, 0.2, the cluster is recognized as a round corner. Otherwise, it is determined to be unrelated to a vehicle or pillar and can be ignored.

Fig. 5. Range data after the removal of invalid data and isolated data.

A. Rectangular Corner Detection

For each point of a cluster, assuming that the point is the vertex of the "L" shape, one optimal corner is detected in the least-squares (LS) sense. Among the detected corners, the one with the smallest fitting error is recognized as the corner candidate of the cluster. An "L"-shaped cluster is supposed to consist of two orthogonally meeting lines: points before the vertex of the corner should be close to the first line, and points after the vertex should be close to the second line.
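The "L"-shaped template matching idea can be sketched in a few lines of NumPy: for every candidate vertex index, the points before the vertex are constrained to one line and the points after it to the orthogonal line, and the smallest singular value of the stacked constraint matrix serves as the fitting error. This is a minimal sketch under our own naming, not the authors' implementation; the synthetic test clusters and the small guard on the denominator are ours as well.

```python
import numpy as np

def rectangular_corner_error(cluster, n):
    """Fitting error when point n (0-based) is assumed to be the vertex.

    Points 0..n are constrained to l1: ax + by + c = 0 and points
    n..N-1 to the orthogonal line l2: bx - ay + d = 0.  The smallest
    singular value of the stacked system A_n [a b c d]^T = 0 measures
    the residual; it is zero for a perfect "L"-shaped cluster.
    """
    x, y = cluster[:, 0], cluster[:, 1]
    m = len(x) - n
    rows1 = np.column_stack([x[:n + 1], y[:n + 1], np.ones(n + 1), np.zeros(n + 1)])
    rows2 = np.column_stack([-y[n:], x[n:], np.zeros(m), np.ones(m)])
    return np.linalg.svd(np.vstack([rows1, rows2]), compute_uv=False)[-1]

def line_error(cluster):
    """Smallest singular value of [x y 1][a b c]^T = 0 (pure line fit)."""
    B = np.column_stack([cluster, np.ones(len(cluster))])
    return np.linalg.svd(B, compute_uv=False)[-1]

def detect_rectangular_corner(cluster):
    """Return (vertex index, normalized error) for a cluster."""
    N = len(cluster)
    errors = [rectangular_corner_error(cluster, n) for n in range(1, N - 1)]
    n_best = int(np.argmin(errors)) + 1
    # Normalize by the line-fit error so line-shaped clusters score badly.
    # The epsilon guard is our addition; the paper divides directly.
    return n_best, errors[n_best - 1] / max(line_error(cluster), 1e-12)
```

For a noise-free "L"-shaped cluster, the error at the true vertex is numerically zero, whereas a straight-line cluster yields a large normalized error because its line-fit denominator is itself nearly zero.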
Therefore, "L"-shaped template matching is equivalent to an optimization problem minimizing the sum of two fitting errors, i.e., the fitting error of the points before the vertex with respect to the first line and the fitting error of the points after the vertex with respect to the second line, given that the two lines are orthogonal.

Fig. 6. Range data after the removal of invalid data and isolated data.

Because the two lines are orthogonal, the first line l_1 and the second line l_2 can be represented as

    l_1: ax + by + c = 0,    l_2: bx − ay + d = 0.    (1)

Given that the number of points of a cluster is N and the index of the investigated point P_n is n, as shown in Fig. 6, the points with indexes from 1 to n should satisfy l_1, and the points with indexes from n to N should satisfy l_2. This fact is expressed in linear algebraic form as

    A_n X_n = 0,
    A_n = [ x_1 y_1 1 0; ... ; x_n y_n 1 0; −y_n x_n 0 1; ... ; −y_N x_N 0 1 ],
    X_n = [ a b c d ]^T    (2)

where A_n denotes the measurement matrix, and X_n denotes the parameter matrix when the investigated point index is n. The nontrivial parameter matrix X_n satisfying (2) is the null vector of the measurement matrix A_n. Therefore, the parameters of l_1 and l_2 can be estimated by finding the null vector of A_n using singular value decomposition (SVD) as

    [U_n, S_n, V_n] = SVD(A_n)    (3)

where U_n, S_n, and V_n denote, respectively, the output basis matrix, the singular value matrix, and the input basis matrix of A_n when the investigated point index is n. Because the singular value corresponding to the null vector is ideally zero, a nonzero smallest singular value can be considered an error measure. When the investigated point index is n, the rectangular corner fitting error of cluster C is S_n(4, 4), which is denoted

by E_rectangular-corner(C, n). Correspondingly, the fourth column of V_n is the estimated parameter set. After measuring E_rectangular-corner(C, n) for the points with indexes from 2 to (N − 1) in cluster C, the case with the smallest value is recognized as the corner candidate of the cluster. Fig. 7(a) shows the graph of the measured E_rectangular-corner(C, n) for the points with indexes from 2 to (N − 1), and Fig. 7(b) shows the recognized corner candidate. With the recognized parameters, the two lines are drawn, and the point with the smallest fitting error, i.e., n = 6, is marked by "o".

Fig. 7. Rectangular corner fitting error and corner candidate. (a) E_rectangular-corner(C, n). (b) Corner candidate.

Fig. 8 shows the cross point (x_c, y_c) of the two recognized sides, which is taken as the refined vertex of the corner. The two unit vectors parallel to the two sides, d_1 and d_2, are then calculated; d_1 is set to the direction of the longer side, and d_2 is set to the direction of the shorter side.

Fig. 8. The vertex of the corner is refined to the cross point of the two recognized lines.

However, rectangular corner detection has one problem: when some endpoints of a line-shaped cluster deviate from the line center, as shown in Fig. 9(a), the deviated points force E_rectangular-corner(C, n) to have a small value, as shown in Fig. 9(b), although the whole structure of the cluster is definitely a line. To avoid this pitfall, the line fitting error, i.e., the LS fitting error when the cluster is fitted to a line, is employed. If all points in a cluster C belong to a line, they should satisfy a line equation l in

    l: ax + by + c = 0.    (4)

This fact can be expressed in linear algebraic form as

    B X = 0,    B = [ x_1 y_1 1; ... ; x_N y_N 1 ],    X = [ a b c ]^T.    (5)

The nontrivial parameter matrix X is the null vector of the measurement matrix B, and the singular value corresponding to the null vector is the line fitting error of cluster C, which is denoted by E_line(C). The error of the corner candidate of a cluster C, which is denoted by E_corner(C), is given by

    E_corner(C) = min_n E_rectangular-corner(C, n) / E_line(C)    (6)

i.e., the minimum of E_rectangular-corner(C, n) divided by E_line(C). Because the E_line(C) value of a line-shaped cluster is small, its E_corner(C) has a relatively large value despite a small E_rectangular-corner(C, n). Consequently, normalization by E_line(C) prevents a line-shaped cluster from being recognized as a rectangular corner. Fig. 10 shows the

effect of normalization by E_line(C): the "L"-shaped cluster has a small error value, whereas the line-shaped cluster definitely has a large error value. The number beside each recognized corner denotes E_corner(C).

Fig. 9. A line-shaped cluster can also have a small E_rectangular-corner(C, n) value. (a) A cluster having a line shape. (b) E_rectangular-corner(C, n).

Fig. 10. Comparison of E_corner(C) between the corner and line cases. (a) Corner cluster. (b) Line cluster.

The newly developed rectangular corner detection proves to be unaffected by orientation and robust to noise, as it is based on the LS fitting of implicit functions. Because it makes no assumptions about line length or the distance between range data points, it is expected to be superior to conventional rectangular corner detection, which divides the points into two groups at a potential vertex, fits the groups to two lines, and then finds the optimal vertex by evaluating the rectangularity constraint. Fig. 11(a) shows that the newly developed rectangular corner detection is unaffected by orientation because it is based on an implicit expression. Fig. 11(b) shows that the developed "L"-shaped template matching can detect the correct rectangular corner in spite of heavy noise and an uneven point interval.

B. Round Corner Detection

Rectangular corner detection cannot avoid severe error when the front or rear end of a vehicle is considerably round, as shown in Fig. 12(a), because such range data do not follow the assumption of rectangular corner detection that the vehicle corner can be modeled by two orthogonally meeting lines. Therefore, clusters that marginally fail rectangular corner detection are retested by round corner detection, which models the vehicle corner as the connection of a line and a curve.

Without loss of generality, the curve is formulated as an ellipse arc. Fig. 12(b) shows the correct corner recognized by round corner detection. Round corner detection performs both line-curve corner detection and curve-line corner detection and then selects the one with the smaller fitting error as the corner candidate of the cluster. From this viewpoint, a vehicle with a round front or rear end appears as either a line-curve combination or a curve-line combination in the range data. Line-curve corner detection performs LS fitting assuming that the corner is composed of a line followed by an ellipse arc; conversely, curve-line corner detection performs LS fitting assuming that the corner is composed of an ellipse arc followed by a line.

In the case of line-curve corner detection, given that the number of points of a cluster is N and the index of the investigated point P_n is n, the points with indexes from 1 to n should satisfy l_1, and the points with indexes from n to N should satisfy e_2, as given by

    l_1: px + qy + r = 0,    e_2: ax^2 + bxy + cy^2 + dx + ey + f = 0.    (7)

By applying SVD to the points with indexes from 1 to n of cluster C, the parameters of the line equation l_1 can be estimated. The fitting error of the line portion

    E_line-portion(C, n) = Σ_{i=1}^{n} (p x_i + q y_i + r)^2    (8)

is defined as the sum of squared algebraic errors. By applying stable direct LS (SDLS) ellipse fitting [45] to the points with indexes from n to N of cluster C, the parameters of the ellipse equation e_2 can be estimated. The fitting error of the ellipse portion

    E_ellipse-portion(C, n) = Σ_{i=n}^{N} (a x_i^2 + b x_i y_i + c y_i^2 + d x_i + e y_i + f)^2    (9)

is defined as the sum of squared algebraic errors. When the investigated point is the nth point of cluster C, the line-curve fitting error

    E_line-curve-corner(C, n) = E_line-portion(C, n) + E_ellipse-portion(C, n)    (10)

is defined as the sum of E_line-portion(C, n) and E_ellipse-portion(C, n). After evaluating E_line-curve-corner(C, n) for the points of cluster C with indexes from 5 to (N − 5), the corner with the minimum fitting error is recognized as the line-curve corner candidate of the cluster. Similarly to the case of rectangular corner detection, the error of the line-curve corner candidate of cluster C

    E_line-curve-corner(C) = min_n E_line-curve-corner(C, n) / E_line(C)    (11)

is defined as the minimum of E_line-curve-corner(C, n) divided by E_line(C). The term E_line(C) denotes the line fitting error defined in (4) and (5) and gives (11) its normalization effect. It is noteworthy that the index n has a different meaning in (8) and (9), i.e., it is the final index in (8) and the initial index in (9).

Fig. 11. The newly developed rectangular corner detection is unaffected by orientation and robust to noise. (a) Case of different orientations. (b) Case with heavy noise and uneven sampling.

In the case of curve-line corner detection, given that the number of points of a cluster is N and the index of the investigated point P_n is n, the points with indexes from 1 to n should satisfy e_1, and the points with indexes from n to N should satisfy l_2, which are given by

    e_1: ax^2 + bxy + cy^2 + dx + ey + f = 0,    l_2: px + qy + r = 0.    (12)

This is the reverse case of (7).
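Line-curve corner detection can be sketched in the same spirit as the rectangular case. For brevity, this sketch fits the curve portion with a plain algebraic least-squares conic (smallest singular vector of the design matrix) rather than the stable direct LS ellipse fitting of [45] used in the paper, so the fitted conic is not guaranteed to be an ellipse; the function names and the synthetic test cluster are ours.

```python
import numpy as np

def _svd_residual(M):
    """Smallest singular value of M, i.e. min ||M v|| over unit vectors v."""
    return np.linalg.svd(M, compute_uv=False)[-1]

def line_curve_error(cluster, n):
    """Combined error when point n (0-based) splits the cluster.

    Points 0..n are fitted to the line px + qy + r = 0, and points
    n..N-1 to the conic ax^2 + bxy + cy^2 + dx + ey + f = 0.
    NOTE: a plain algebraic LS conic fit, not the SDLS fit of [45].
    """
    x, y = cluster[:, 0], cluster[:, 1]
    L = np.column_stack([x[:n + 1], y[:n + 1], np.ones(n + 1)])
    xe, ye = x[n:], y[n:]
    E = np.column_stack([xe**2, xe * ye, ye**2, xe, ye, np.ones(len(xe))])
    return _svd_residual(L) + _svd_residual(E)

def detect_line_curve_corner(cluster):
    """Split index minimizing the line + conic fitting error.

    As in the paper, only indexes from 5 to N-5 are evaluated, which
    also guarantees at least six points for the conic fit.
    """
    N = len(cluster)
    errs = [line_curve_error(cluster, n) for n in range(5, N - 5)]
    return 5 + int(np.argmin(errs)), float(min(errs))
```

Curve-line detection is the mirror image: swap which portion gets the line fit and which gets the conic fit.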

Fig. 12. Effect of round corner detection. (a) Rectangular corner detection result. (b) Round corner detection result.

By applying SDLS ellipse fitting to the points with indexes from 1 to n of cluster C, the parameters of the ellipse equation e_1 can be estimated. The fitting error of the ellipse portion

    E_ellipse-portion(C, n) = Σ_{i=1}^{n} (a x_i^2 + b x_i y_i + c y_i^2 + d x_i + e y_i + f)^2    (13)

is defined as the sum of squared algebraic errors. By applying SVD to the points with indexes from n to N of cluster C, the parameters of the line equation l_2 can be estimated. The fitting error of the line portion

    E_line-portion(C, n) = Σ_{i=n}^{N} (p x_i + q y_i + r)^2    (14)

is defined as the sum of squared algebraic errors. When the investigated point is the nth point of cluster C, the curve-line fitting error

    E_curve-line-corner(C, n) = E_ellipse-portion(C, n) + E_line-portion(C, n)    (15)

is defined as the sum of E_ellipse-portion(C, n) and E_line-portion(C, n). After evaluating E_curve-line-corner(C, n) for the points of cluster C with indexes from 5 to (N − 5), the corner with the minimum fitting error is recognized as the curve-line corner candidate of the cluster. Similarly to the case of line-curve corner detection, the error of the curve-line corner candidate of cluster C

    E_curve-line-corner(C) = min_n E_curve-line-corner(C, n) / E_line(C)    (16)

is defined as the minimum of E_curve-line-corner(C, n) divided by E_line(C). It is worth noting that the index n has different meanings in (13) and (14), i.e., it is the final index in (13) and the initial index in (14), which distinguishes (8) from (14) and (9) from (13). Round corner detection of cluster C performs both line-curve detection and curve-line detection. As a result, two fitting errors are evaluated, i.e., E_line-curve-corner(C), defined by (11), and E_curve-line-corner(C), defined by (16).
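The decision logic that combines the rectangular and round fitting errors with the thresholds quoted earlier (0.2 to accept a corner, 0.6 as the marginal bound for retesting) can be summarized as a small helper. The function and argument names are ours; the thresholds are the example values given in the text.

```python
def classify_cluster(e_rect, e_line_curve, e_curve_line,
                     accept=0.2, marginal=0.6):
    """Corner-type decision described in the text.

    e_rect is the normalized rectangular-corner error (6); the two
    round-corner errors (11) and (16) are consulted only when the
    rectangular fit fails marginally.  Returns 'rectangular', 'round',
    or None when the cluster is ignored.
    """
    if e_rect < accept:
        return "rectangular"
    if e_rect < marginal:
        # Eq. (17): the smaller of the two round-corner fitting errors.
        if min(e_line_curve, e_curve_line) < accept:
            return "round"
    return None
```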
The result of round corner detection, i.e., the new corner candidate of the cluster, is set to the case with the smaller fitting error. The error of the corner candidate of cluster C

    E_corner(C) = min(E_line-curve-corner(C), E_curve-line-corner(C))    (17)

is defined as the minimum of the two fitting errors. Consequently, only if E_corner(C) is smaller than a threshold, for example, 0.2, is the cluster recognized as a valid corner; otherwise, the cluster is ignored.

If a cluster is recognized as a round corner, d_1 is set to the direction from the connecting point to the line-segment endpoint, and d_2 is set parallel to the long axis of the ellipse, in the direction from the connecting point to the ellipse center. The vertex of a round corner is set by finding the cross point of the line and the ellipse long axis and then shifting it along the ellipse short axis by the ellipse short radius. Fig. 13 shows the result of round corner detection when a cluster has the curve-line type. Fig. 13(a)–(c) depicts the graphical representations of E_line-portion(C, n), E_ellipse-portion(C, n), and E_line-curve-corner(C, n) of line-curve corner detection, respectively. Fig. 13(d) shows the result of line-curve corner detection. Fig. 13(e)–(g) depicts the graphical representations of E_line-portion(C, n), E_ellipse-portion(C, n),

JUNG et al.: SCANNING LASER RADAR-BASED TARGET POSITION DESIGNATION FOR PARKING AID SYSTEM 415

Fig. 13. Results of round corner detection when the cluster is of the curve-line type. (a) E_line portion(C, n) of E_line-curve corner(C, n). (b) E_ellipse portion(C, n) of E_line-curve corner(C, n). (c) E_line-curve corner(C, n). (d) Result of line-curve corner detection. (e) E_line portion(C, n) of E_curve-line corner(C, n). (f) E_ellipse portion(C, n) of E_curve-line corner(C, n). (g) E_curve-line corner(C, n). (h) Result of curve-line corner detection. (i) Final result of round corner detection.

and E_curve-line corner(C, n) of curve-line corner detection, respectively. Fig. 13(h) shows the result of curve-line corner detection. Comparing Fig. 13(c) and (g), it can be observed that E_curve-line corner(C) is smaller than E_line-curve corner(C). Therefore, E_corner(C) is set to E_curve-line corner(C), and because its value is smaller than the threshold, the cluster is recognized as a round corner of the curve-line type, as shown in Fig. 13(i). The number beside the corner point denotes E_corner(C).

IV. DESIGNATION OF TARGET PARKING POSITION

Based on the information of the detected corners, target parking position designation establishes the geometrical location and orientation of a viable free parking space as the target position.

A. Main Reference Corner Recognition

Among the corners satisfying the free parking space conditions within the perpendicular parking region of interest (ROI), the one nearest to the subjective vehicle is recognized as the main reference corner. The main reference corner is the corner that belongs to one of the two vehicles in contact with the free parking space and appears as an L shape in the range data. It is assumed that the range data are acquired from a location where the driver manually starts perpendicular parking. Therefore, the target parking position is supposed to be within the ROI, which is established by the FOV and the maximum distance.
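The selection rule of (16) and (17) above can be sketched as follows; this is a minimal sketch with illustrative argument names, not the paper's code.

```python
def round_corner_decision(e_line_curve, e_curve_line, e_line, threshold=0.2):
    """Decision rule of (16)-(17): e_line_curve and e_curve_line hold
    the per-breakpoint fitting errors of the two sweep directions;
    e_line is E_line(C) of the whole cluster, used for normalization.
    The corner candidate is the sweep with the smaller normalized
    error, accepted only if that error is below the threshold
    (0.2 in the paper)."""
    e_lc = min(e_line_curve) / e_line          # line-curve error, as in (11)
    e_cl = min(e_curve_line) / e_line          # curve-line error, as in (16)
    e_corner = min(e_lc, e_cl)                 # (17)
    kind = "line-curve" if e_lc <= e_cl else "curve-line"
    return e_corner < threshold, kind, e_corner
```

For example, sweep errors of [0.8, 0.1, 0.5] and [0.6, 0.4] with E_line(C) = 2.0 give a line-curve corner with normalized error 0.05, which passes the 0.2 threshold.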
In the experiments, the FOV is set to rearward 160°, and the maximum distance is set to 25 m. Corners outside the ROI are regarded as irrelevant to the parking operation and are thus ignored. Fig. 14(a) and (b) shows the results of corner detection

and ROI application, respectively. Only corners with E_corner(C) below the threshold are examined in the next phases.

Fig. 14. Corners remaining only within the ROI. (a) Initially detected corners. (b) Remaining corners after ROI application.

Fig. 15. Free parking space condition: does the corner contact a viable free parking space?

Fig. 16. Corners satisfying the free parking space conditions.

By checking whether there is any object in the reverse directions of d_1 and d_2 of each corner, it is investigated whether each corner contacts the free parking space (Fig. 15). Further, by finding clusters within a certain FOV (e.g., 90°) in the reverse directions of d_1 and d_2, respectively, two conditions are tested for each direction: whether there is any cluster within the distance of the vehicle width and whether there is any cluster within the distance of the vehicle length. If the investigated corner contacts a viable free parking space, a cluster should exist between the vehicle width distance and the vehicle length distance in one direction, but no cluster should exist within the vehicle length distance in the other direction. If there is any cluster within the vehicle width distance in either direction, or if there is no cluster within the vehicle length distance in both directions, the corner is determined not to be adjacent to a viable free parking space. It is noteworthy that the proposed system regards the situation where there is no cluster within the vehicle length distance in both directions as erroneous. In other words, such a situation is caused either by a sensing error or by a very wide parking space. In the latter case, the feasibility of automatic parking is expected to be so low that there is little need to establish

target parking position, in spite of the possibility of a sensing error.

Fig. 17. Recognized main reference corner.

Fig. 18. Subreference corner projection and outward border.

Fig. 19. Established final target parking position.

Fig. 20. Case when the outward border is set by the projection of the subreference corner.

Fig. 21. Sensor installation.

During corner detection, in the case of a rectangular corner, d_1 is set to the direction of the longer side and d_2 to the direction of the shorter side. In the case of a round corner, d_1 is set to the direction of the line portion and d_2 to the direction of the longer axis of the ellipse. After the investigation of the free parking space conditions, d_1 is set to the direction where no cluster is found, and d_2 to the direction where a cluster is found. Fig. 16 shows the corners remaining after the investigation of the free parking space conditions. It can be observed that although there are initially many detected corners, few corners satisfy the free parking space conditions. Fig. 17 shows the main reference corner recognized by selecting the corner nearest the subjective vehicle.

B. Subreference Corner Detection and Target Parking Position Establishment

Among the occlusions and corners located within a certain FOV (e.g., 90°) in the reverse direction of d_2 of the main

reference corner, the nearest one is recognized as the subreference corner, which is used to establish the target parking position.

Fig. 22. Cloudy day at an outdoor parking lot.

Vehicles or pillars adjacent to the free parking space can have severely different coordinates in the depth direction. In such cases, the line connecting the main reference corner and the subreference corner cannot be aligned with the orientation of the desired target parking position. To solve this problem, two coordinates along the d_1 axis are compared: the projection of the subreference corner onto a line passing through the vertex of the main reference corner in the d_1 direction, and the vertex of the main reference corner itself, as shown in Fig. 18. The line located farther in the depth direction is used as the outward border of the target parking position. Fig. 18 shows the outward border established using the subreference corner and the main reference corner. Once the main reference corner, the subreference corner, and the outward border are established, the target parking position can be established by locating the center of the width side of the target parking position at the center point between the main reference corner and the subreference corner along the outward border. The target parking position is a rectangle with the same width and length as the subjective vehicle. Fig. 19 shows the final established target parking position. Fig. 20 shows a case where the free parking space is located between a vehicle and a pillar, and the outward border is determined by the subreference corner projection.

V. EXPERIMENTAL RESULTS

The authors installed a scanning laser radar (SICK LD-OEM) on the left side of the experimental vehicle's rear end, as shown in Fig. 21.
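Before turning to the experiments, the border-selection geometry of Section IV-B can be sketched as follows. This is a minimal sketch under assumed conventions: the depth axis is taken as the normal of d_1, its sign and all helper names are assumptions, and the subjective vehicle's length gives the rectangle depth.

```python
import numpy as np

def establish_target_position(main_vertex, d1, sub_corner, length):
    """Sketch of Section IV-B: the outward border is the d1-direction
    line through whichever of the main vertex and the subreference
    corner lies farther along the depth axis; the width-side center
    of the target rectangle is placed midway between the two
    reference points along that border, and the rectangle center sits
    half a vehicle length inward (sign convention assumed)."""
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    depth = np.array([-d1[1], d1[0]])              # assumed depth axis
    main = np.asarray(main_vertex, float)
    sub = np.asarray(sub_corner, float)
    border_depth = max(main @ depth, sub @ depth)  # farther line wins
    mid_along_d1 = 0.5 * (main @ d1 + sub @ d1)
    width_side_center = mid_along_d1 * d1 + border_depth * depth
    rect_center = width_side_center + 0.5 * length * depth
    return width_side_center, rect_center
```

For instance, with the main vertex at the origin, d_1 along the x-axis, and a subreference corner at (4, 1), the border passes through depth 1 and the width-side center lands at (2, 1).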
A brief specification of this sensor is as follows: the FOV is 360°, the angular resolution is 0.125°, the range resolution is 3.9 mm, the maximum range is 250 m, the data interface is a controller area network (CAN), and the laser class is 1 (eye safe). The authors also installed two cameras for the rear and side views, respectively, as shown in Fig. 21, to record the experimental situation. These cameras were used only for analysis. The experiments focused on situations that the earlier-mentioned systems could not solve, including daytime/nighttime, outdoors/indoors, and conditions affected by the sun.

Fig. 23. Case when range data are disconnected and noisy.

We tested our system in 112 situations and confirmed that it was able to designate the target parking position at the desired location in 110 of them. The recognition rate is therefore 98.2%. The average processing time on a PC with a 1-GHz operating frequency is about 600 ms. To show the feasibility of the system, typical situations are presented in the subsequent paragraphs, illustrated by the rear-view image. The designated target parking position is depicted by a rectangle on the range data.

Fig. 22 shows an outdoor situation in cloudy weather. The acquired image was dark overall, and the cloud image reflected on the vehicle surface would make vision-based feature matching difficult. However, the recognition results show that the proposed system successfully designated the target parking position and was not affected by the bad weather conditions. It is noteworthy that although the range data of the vehicle located on the right side of the free parking space were severely distorted near the vertex because of a headlamp, the developed round corner detection could precisely recognize the round corner, irrespective of this condition.

Fig. 24. Case when the adjacent vehicles have different depth positions and there are various objects around the free parking space.

Fig. 25. Outdoor parking lot in cloudy weather, surrounded by various objects.

Fig. 23 shows a case where the range data on the left side of the free parking space were noisy and disconnected near the tires. As the proposed round corner detection uses only the front cluster of the vehicle's range data, it could properly recognize the round corner. Furthermore, although the two vehicles adjacent to the free parking space had considerably different locations in depth, the proposed outward border method established the target parking position at a reasonable depth.

Figs. 24 and 25 show cases in which there are various objects around the free parking space. Objects such as a building, stairs, a tree, and a flower garden complicate image understanding and add extra complexity to vision-based object recognition. In particular, the repetitive patterns of the building, fans, windows, and trees confuse vision-based feature matching. Furthermore, 3-D information belonging to objects unrelated to the free parking space causes model-based vehicle recognition to fail. The proposed method was able to focus on the vehicle and the pillar by ignoring range data clusters with high fitting errors in the corner detection phase.

Fig. 26 shows a situation against the sun, which has been one of the most difficult challenges for vision-based methods. The image captured by the camera was saturated to white in some areas and black in others, owing to the sun's glare. Furthermore, the sun made strong reflections on the vehicle's surface, which would change shape and location with the viewing position.
Even in such a situation, the proposed method had no problem because the scanning laser radar was not affected by the visual conditions, and the resultant range data were not degraded. It is noteworthy that although the vehicle located on the right side of the free parking space had a round front end, the proposed round corner detection successfully recognized it.

Figs. 27 and 28 show the results of target parking position designation when a viable free parking space was located between a vehicle and a pillar in an underground parking lot: Figs. 27 and 28 correspond to the free parking space between the vehicle and the pillar and between the pillar and the vehicle, respectively. Even in such cases, the proposed system used the same algorithm and successfully designated the target parking positions.

Fig. 29 shows a case where the adjacent vehicle in a dark underground parking lot was black. In such a case, vision-based feature detection of the black vehicle would be very hard, and the light stripe detection of the light projection method would be error prone. In particular, because the scanning laser radar was installed on the left side of the subjective vehicle's rear end, the incident angle between the side surface of the right vehicle and

the scanning laser radar was very large. However, the scanning laser radar could still detect range data sufficient for the recognition of the free parking space.

Fig. 26. Case against the sun, with a vehicle with a round front end.

Fig. 27. Free parking space between a vehicle and a pillar in an underground parking lot.

Fig. 28. Free parking space between a pillar and a vehicle in an underground parking lot.

Figs. 30 and 31 show cases in which one of the two parked vehicles is a light truck and a large truck, respectively. As the vehicles adjacent to the free parking space can belong to

various vehicle types, such as sedan, sport utility vehicle, van, light truck, and large truck, vision-based model matching is inevitably complicated. The proposed system could designate the target parking position because it does not consider the appearance of the vehicle.

Fig. 29. Case with a black vehicle in an underground parking lot.

Fig. 30. Free parking space between a light truck and a van.

Fig. 31. Free parking space between a sedan and a large truck. The background includes a flower garden and an apartment building with a repetitive pattern.

Furthermore, as every large truck must have a guard rail installed on its lower part for safety reasons,

the proposed system can recognize the contour of a large truck.

Fig. 32. One of two failed situations. As the laser beam direction was almost perpendicular to the normal direction of the parked vehicle's front, the corresponding range data could not be acquired.

Fig. 32 shows one of the two failed cases. The major reason why the proposed system failed to designate the target parking space in this case was that the basic assumption that the main reference corner appears as an L shape in the range data was not satisfied. The arrow from the coordinate system origin depicts the ray direction from the scanning laser radar to the main reference corner, and the dotted ellipse depicts the region without range data corresponding to the front of the parked vehicle. As the incident angle of the laser beam upon the front was almost 90°, the scanning laser radar could acquire no range data of that surface. However, such a situation is very rare; only two cases occurred in the 112 test situations.

VI. CONCLUSION

This paper has proposed a novel target parking position designation method that recognizes the free parking space between parked vehicles in perpendicular parking situations using range data from a scanning laser radar. Rectangular corner detection and round corner detection could efficiently recognize the locations of vehicles. Moreover, the free parking space conditions, which are checked by searching for objects in the reverse directions of the two corner sides, proved to be simple and robust. Because the proposed method uses only 1-D range data, it requires less memory and computing power compared with methods using 2-D or higher dimensional range data, such as grid maps.
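The free parking space check recapped above can be sketched as follows; the argument names and the distance encoding (None for "nothing within vehicle length") are illustrative, not the paper's interface.

```python
def contacts_free_space(dist_rev_d1, dist_rev_d2, veh_width, veh_length):
    """Sketch of the free parking space conditions: dist_rev_d1 and
    dist_rev_d2 are the nearest cluster distances found within the
    search FOV in the reverse directions of the corner's d1 and d2,
    or None when nothing lies within the vehicle length.  A corner
    touches a viable space when one side is occupied between the
    width and length distances and the other side is empty within
    the length distance."""
    dists = (dist_rev_d1, dist_rev_d2)
    # reject: an obstacle closer than the vehicle width on either side
    if any(d is not None and d < veh_width for d in dists):
        return False
    occupied = [d is not None for d in dists]   # cluster within length
    # reject both-empty (sensing error or overly wide space) and
    # both-occupied; accept exactly one occupied side
    return occupied[0] != occupied[1]
```

For example, a cluster 3.0 m away on one side and nothing on the other, with a 1.8 m width and 4.5 m length, passes; an obstacle at 1.0 m, or empty clusters on both sides, fails.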
Through experiments, it was confirmed that the proposed method is able to successfully designate target parking positions even in situations where other methods may fail, such as daytime/nighttime, indoors/outdoors, against the sun, black vehicles, and slanted vehicle positions. The proposed method is also able to handle parallel parking situations. In such situations, a parking aid system can use the same corner detection as in perpendicular parking; it only needs to change the distance thresholds of the free parking space conditions and the orientation of the target parking position at the final stage. As ultrasonic sensors already show acceptable performance in parallel parking situations, the authors focused in this paper on the perpendicular parking situation, which is hard for ultrasonic sensors to handle.

A major disadvantage of the proposed system might be the high price of the sensor. However, a solution can be expected in the near future. Recently, a scanning laser radar company announced that the cost of scanning laser radar will decrease rapidly, reaching €380 in 2010 [46]. Furthermore, if the scanning laser radar can integrate multiple system functions, it can replace multiple sensors and multiple electronic control units (ECUs) with a single sensor and a single ECU. Consequently, the total price of the whole system will be lower, and the scanning laser radar-based system will become a practical solution even from an economic viewpoint [43], [47], [48]. Scanning laser radars for the integration of multiple system functions have already been developed and are now on the market [49]. Furthermore, because range data from scanning laser radar are more reliable and only 1-D, the processing unit managing them will be simpler and cheaper than processing units managing complex vision data or millimeter-wave radar signals.
Therefore, low-cost processing units and the convenience of algorithm development are expected to compensate, to some extent, for the high cost of scanning laser radar.

Future work includes the following: 1) mosaicking of range data acquired while the subjective vehicle approaches the initial position, which could solve the failed situations because range data from various perspectives are expected to contain the range data of unseen surfaces, and 2) using range data during the parking maneuver to enhance the accuracy of self-localization and to detect potential hazards.

REFERENCES

[1] R. Frank, Sensing in the Ultimately Safe Vehicle. Warrendale, PA: SAE. SAE Paper.
[2] T. Tellem, "Top 10 High-Tech Car Safety Technologies," Jul. 23. [Online]. Available: article.html
[3] Y. Kageyama, "Look, No Hand! New Toyota Parks Itself," Jan. 14. [Online].

[4] T. Moran, "Self-parking technology hits the market," Automot. News, vol. 8, no. 6227, p. 22, Oct.
[5] T. Moran and J. Meiners, "The arrival of problem-free parking," Automot. News Eur., vol. 11, no. 24, p. 8, Nov.
[6] M. Furutani, Obstacle Detection Systems for Vehicle Safety. Warrendale, PA: SAE. SAE Paper.
[7] S. Hiramatsu, A. Hibi, Y. Tanaka, T. Kakinami, Y. Iwata, and M. Nakamura, Rearview Camera Based Parking Assist System With Voice Guidance. Warrendale, PA: SAE. SAE Paper.
[8] N. Ozaki, "Overhead-view surveillance system for driving assistance," in Proc. 12th World Congr. Intell. Transp. Syst., Nov.
[9] H. Shimizu and H. Yanagawa, "Overhead view parking support system," DENSO Tech. Rev., vol. 11, no. 1.
[10] C. Deacon, "New Mercedes CL Class: Parking Assistance Technology," Aug. 29. [Online].
[11] M. Wada, X. Mao, H. Hashimoto, M. Mizutani, and M. Saito, "iCAN: Pursuing technology for near-future ITS," IEEE Intell. Syst., vol. 19, no. 1, Jan./Feb.
[12] M. Wada, K. S. Yoon, and H. Hashimoto, "Development of advanced parking assistance system," IEEE Trans. Ind. Electron., vol. 50, no. 1, pp. 4-17, Feb.
[13] W. C. Lee and T. Bertram, "Driver centered design of an advanced parking assistance," in Proc. 5th Eur. Congr. Exhib. Intell. Transp. Syst. Services, Jun. 1-3.
[14] Y. Tsuruhara, "Honda develops park assist system," Nikkei Automot. Technol., Sep. 29.
[15] J. Pohl, M. Sethsson, P. Degerman, and J. Larsson, "A semi-automated parallel parking system for passenger cars," Proc. Inst. Mech. Eng. D, J. Automobile Eng., vol. 220, no. 1, Jan.
[16] C. Heilenkötter, N. Höver, P. Magyar, T. Ottenhues, T. Seubert, and J. Wassmuth, "The consistent use of engineering methods and tools," Auto Technol., vol. 6.
[17] H. G. Jung, C. G. Choi, P. J. Yoon, and J. Kim, "Semi-automatic parking system recognizing parking lot markings," in Proc. 8th Int. Symp. AVEC, Aug. 2006.
[18] C. G. Choi, D. S. Kim, H. G. Jung, and P. J. Yoon, Stereo Vision Based Parking Assist System. Warrendale, PA: SAE. SAE Paper.
[19] H. G. Jung, C. G. Choi, D. S. Kim, and P. J. Yoon, "System configuration of intelligent parking assistant system," in Proc. 13th World Congr. Intell. Transp. Syst. Services, Oct. 8-12.
[20] A. Schanz, "Fahrerassistenz zum automatischen Parken" (Driver Assistance for Automatic Parking), Jul. 23. [Online]. Available: php?idcat=74&idart=381
[21] T. Dawson, "Safe Parking Using the BMW Remote Park Assist," Aug. 18. [Online].
[22] H. G. Jung, C. G. Choi, P. J. Yoon, and J. Kim, "Novel user interface for semi-automatic parking assistance system," in Proc. 31st FISITA World Automotive Congr., Oct.
[23] J. Xu, G. Chen, and M. Xie, "Vision-guided automatic parking for smart car," in Proc. IEEE Intell. Vehicle Symp., Oct. 3-5, 2000.
[24] Y. Tanaka, M. Saiki, M. Katoh, and T. Endo, "Development of image recognition for a parking assist system," in Proc. 13th World Congr. Intell. Transp. Syst. Services, Oct. 8-12.
[25] H. G. Jung, D. S. Kim, P. J. Yoon, and J. Kim, "Structure analysis based parking slot marking recognition for semi-automatic parking system," in Structural, Syntactic, and Statistical Pattern Recognition. New York: Springer-Verlag, Aug. 2006.
[26] H. G. Jung, D. S. Kim, P. J. Yoon, and J. Kim, "Parking slot markings recognition for automatic parking assist system," in Proc. IEEE Intell. Vehicles Symp., Jun. 2006.
[27] H. Satonaka, M. Okuda, S. Hayasaka, T. Endo, Y. Tanaka, and T. Yoshida, "Development of parking space detection using an ultrasonic sensor," in Proc. 13th World Congr. Intell. Transp. Syst. Services, Oct. 8-12.
[28] P. Degerman, J. Pohl, and M. Sethson, Hough Transform for Parking Space Estimation Using Long Range Ultrasonic Sensors. Warrendale, PA: SAE. SAE Paper.
[29] P. Degerman, J. Pohl, and M. Sethson, Ultrasonic Sensor Modeling for Automatic Parallel Parking Systems in Passenger Cars. Warrendale, PA: SAE. SAE Paper.
[30] S. Görner and H.
Rohling, "Parking lot detection with 24 GHz radar sensor," in Proc. 3rd Int. Workshop Intell. Transp., Mar.
[31] A. Hashizume, S. Ozawa, and H. Yanagawa, "An approach to detect vacant parking space in a parallel parking area," in Proc. 5th Eur. Congr. Exhib. Intell. Transp. Syst. Services, Jun. 1-3, 2005.
[32] K. Fintzel, R. Bendahan, C. Vestri, and S. Bougnoux, "3-D vision system for vehicle," in Proc. IEEE Intell. Vehicle Symp., Jun. 9-11, 2003.
[33] K. Fintzel, R. Bendahan, C. Vestri, and S. Bougnoux, "3-D parking assistant system," in Proc. IEEE Intell. Vehicles Symp., Jun. 2004.
[34] C. Vestri, S. Bougnoux, R. Bendahan, K. Fintzel, S. Wybo, F. Abad, and T. Kakinami, "Evaluation of a vision-based parking assistance system," in Proc. 8th Int. IEEE Conf. Intell. Transp. Syst., Sep. 2005.
[35] C. Vestri, S. Bougnoux, R. Bendahan, K. Fintzel, S. Wybo, F. Abad, and T. Kakinami, "Evaluation of a point tracking vision system for parking assistance," in Proc. 12th World Congr. ITS, Nov. 6-10.
[36] J. K. Suhr, K. Bae, J. Kim, and H. G. Jung, "Free parking space detection using optical flow-based Euclidean 3-D reconstruction," in Proc. IAPR Conf. Mach. Vis. Appl., May 16-18, 2007.
[37] N. Kaempchen, U. Franke, and R. Ott, "Stereo vision based pose estimation of parking lots using 3-D vehicle models," in Proc. IEEE Intell. Vehicle Symp., Jun. 2002, vol. 2.
[38] H. G. Jung, D. S. Kim, P. J. Yoon, and J. Kim, "3-D vision system for the recognition of free parking site location," Int. J. Automot. Technol., vol. 7, no. 3, May.
[39] H. G. Jung, D. S. Kim, P. J. Yoon, and J. Kim, "Light stripe projection based parking space detection for intelligent parking assist system," in Proc. IEEE Intell. Vehicle Symp., Jun. 13-15, 2007.
[40] A. Schanz, A. Spieker, and K.-D. Kuhnert, "Autonomous parking in subterranean garages: A look at the position estimation," in Proc. IEEE Intell. Vehicle Symp., Jun. 9-11, 2003.
[41] U. Regensburger, A. Schanz, and T. Stahs, "Three-Dimensional Perception of Environment," U.S. Patent, Jun. 12.
[42] C. T. M. Keat, C. Pradalier, and C. Laugier, "Vehicle detection and car park mapping using laser scanner," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Aug. 2-6, 2005.
[43] H. G. Jung, Y. H. Cho, P. J. Yoon, and J. Kim, "Integrated side/rear safety system," in Proc. 11th Eur. Automotive Congr., May 30-Jun. 1.
[44] K. C. Fuerstenberg, D. T. Linzmeier, and K. C. J. Dietmayer, "Pedestrian recognition and tracking of vehicles using a vehicle based multilayer laser scanner," in Proc. 10th World Congr. Intell. Transp. Syst., Nov.
[45] R. Halíř and J. Flusser, Numerically Stable Direct Least Squares Fitting of Ellipses. Prague, Czech Republic: Dept. Softw. Eng., Charles Univ.
[46] Ibeo, "Serial Price," Jul. 23. [Online].
[47] "Laser or Radar? Battle for Supremacy in Vision Safety Systems Heats Up," Automot. News, Jun. 25.
[48] R. Schulz and K. Fürstenberg, "Laser scanner for multiple applications in passenger cars and trucks," in Proc. 10th Int. Conf. Microsyst. Automotive Appl., Apr. 2006.
[49] Ibeo, "Passenger Car Integration," Jul. 23. [Online].

Ho Gi Jung (M'05) received the B.S. and M.S. degrees in electronic engineering from Yonsei University, Seoul, Korea, in 1995 and 1997, respectively. He has been working toward the Ph.D. degree in computer vision, under the supervision of Prof. J.
Kim, with the School of Electrical and Electronic Engineering, Yonsei University, since He has been with the MANDO Corporation Global R&D Headquarters, Yongin, Korea, since He developed environment recognition algorithms for the lane departure warning system (LDWS) and adaptive cruise control (ACC) from 1997 to He developed the electronic control unit (ECU) and the embedded software for the electrohydraulic braking (EHB) system from 2000 to Since 2004, he has developed environment recognition algorithms for the intelligent parking assist system (IPAS), collision warning and avoidance, and the active pedestrian protection system (APPS). His interests are automotive vision, embedded software development, the driver assistant system (DAS), and the active safety vehicle (ASV). Mr. Jung is a member of the International Society of Automotive Engineering (SAE) and the International Society for Optical Engineers (SPIE), which is an international society advancing an interdisciplinary approach to the science and application of light. He is a member of the Institute of Electronic Engineers of Korea (IEEK) and the Korean Society of Automotive Engineering (KSAE).

Young Ha Cho received the B.S. degree in electrical and electronic engineering from Kyungpook National University, Daegu, Korea, and the M.S. degree in electrical and electronic engineering from the Pohang University of Science and Technology (POSTECH), Pohang, Korea. He is a Research Engineer with the MANDO Corporation Global R&D Headquarters, Yongin, Korea. His research includes the driver assistant system (DAS), particularly blind spot detection and the parking assist system.

Pal Joo Yoon received the B.S., M.S., and Ph.D. degrees in mechanical engineering from Hanyang University, Seoul, Korea, in 1987, 1989, and 2000, respectively. Since 1988, he has been with the MANDO Corporation, Yongin, Korea. His main job has been automotive control system design. Currently, he is the Head of the System R&D Center. His research interests are the driver assistant system (DAS), autonomous driving, and automotive control system design. Dr. Yoon is a member of the International Society of Automotive Engineering (SAE) and the Korean Society of Automotive Engineering (KSAE).

Jaihie Kim received the B.S. degree in electronic engineering from Yonsei University, Seoul, Korea, in 1979 and the M.S. degree in data structures and the Ph.D. degree in artificial intelligence from Case Western Reserve University, Cleveland, OH, in 1982 and 1984, respectively. Since 1984, he has been a Professor with the School of Electrical and Electronic Engineering, Yonsei University, where he is currently the Director of the Biometric Engineering Research Center. His research interests include biometrics, computer vision, and pattern recognition. Prof. Kim is currently the Chairman of the Korean Biometric Association and the President of the Institute of Electronic Engineers of Korea (IEEK).


More information

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level

More information

An Automated Image-based Method for Multi-Leaf Collimator Positioning Verification in Intensity Modulated Radiation Therapy

An Automated Image-based Method for Multi-Leaf Collimator Positioning Verification in Intensity Modulated Radiation Therapy An Automated Image-based Method for Multi-Leaf Collimator Positioning Verification in Intensity Modulated Radiation Therapy Chenyang Xu 1, Siemens Corporate Research, Inc., Princeton, NJ, USA Xiaolei Huang,

More information

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation ÖGAI Journal 24/1 11 Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation Michael Bleyer, Margrit Gelautz, Christoph Rhemann Vienna University of Technology

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

DEVELOPMENT OF POSITION MEASUREMENT SYSTEM FOR CONSTRUCTION PILE USING LASER RANGE FINDER

DEVELOPMENT OF POSITION MEASUREMENT SYSTEM FOR CONSTRUCTION PILE USING LASER RANGE FINDER S17- DEVELOPMENT OF POSITION MEASUREMENT SYSTEM FOR CONSTRUCTION PILE USING LASER RANGE FINDER Fumihiro Inoue 1 *, Takeshi Sasaki, Xiangqi Huang 3, and Hideki Hashimoto 4 1 Technica Research Institute,

More information

OBJECT detection in general has many applications

OBJECT detection in general has many applications 1 Implementing Rectangle Detection using Windowed Hough Transform Akhil Singh, Music Engineering, University of Miami Abstract This paper implements Jung and Schramm s method to use Hough Transform for

More information

Chapter 9 Object Tracking an Overview

Chapter 9 Object Tracking an Overview Chapter 9 Object Tracking an Overview The output of the background subtraction algorithm, described in the previous chapter, is a classification (segmentation) of pixels into foreground pixels (those belonging

More information

3D-OBJECT DETECTION METHOD BASED ON THE STEREO IMAGE TRANSFORMATION TO THE COMMON OBSERVATION POINT

3D-OBJECT DETECTION METHOD BASED ON THE STEREO IMAGE TRANSFORMATION TO THE COMMON OBSERVATION POINT 3D-OBJECT DETECTION METHOD BASED ON THE STEREO IMAGE TRANSFORMATION TO THE COMMON OBSERVATION POINT V. M. Lisitsyn *, S. V. Tikhonova ** State Research Institute of Aviation Systems, Moscow, Russia * lvm@gosniias.msk.ru

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 14 130307 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Stereo Dense Motion Estimation Translational

More information

form are graphed in Cartesian coordinates, and are graphed in Cartesian coordinates.

form are graphed in Cartesian coordinates, and are graphed in Cartesian coordinates. Plot 3D Introduction Plot 3D graphs objects in three dimensions. It has five basic modes: 1. Cartesian mode, where surfaces defined by equations of the form are graphed in Cartesian coordinates, 2. cylindrical

More information

3D Modeling of Objects Using Laser Scanning

3D Modeling of Objects Using Laser Scanning 1 3D Modeling of Objects Using Laser Scanning D. Jaya Deepu, LPU University, Punjab, India Email: Jaideepudadi@gmail.com Abstract: In the last few decades, constructing accurate three-dimensional models

More information

This blog addresses the question: how do we determine the intersection of two circles in the Cartesian plane?

This blog addresses the question: how do we determine the intersection of two circles in the Cartesian plane? Intersecting Circles This blog addresses the question: how do we determine the intersection of two circles in the Cartesian plane? This is a problem that a programmer might have to solve, for example,

More information

Occlusion Detection of Real Objects using Contour Based Stereo Matching

Occlusion Detection of Real Objects using Contour Based Stereo Matching Occlusion Detection of Real Objects using Contour Based Stereo Matching Kenichi Hayashi, Hirokazu Kato, Shogo Nishida Graduate School of Engineering Science, Osaka University,1-3 Machikaneyama-cho, Toyonaka,

More information

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight Three-Dimensional Object Reconstruction from Layered Spatial Data Michael Dangl and Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image

More information

Accurate 3D Face and Body Modeling from a Single Fixed Kinect

Accurate 3D Face and Body Modeling from a Single Fixed Kinect Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this

More information

Flexible Calibration of a Portable Structured Light System through Surface Plane

Flexible Calibration of a Portable Structured Light System through Surface Plane Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

We start by looking at a double cone. Think of this as two pointy ice cream cones that are connected at the small tips:

We start by looking at a double cone. Think of this as two pointy ice cream cones that are connected at the small tips: Math 1330 Conic Sections In this chapter, we will study conic sections (or conics). It is helpful to know exactly what a conic section is. This topic is covered in Chapter 8 of the online text. We start

More information

Change detection using joint intensity histogram

Change detection using joint intensity histogram Change detection using joint intensity histogram Yasuyo Kita National Institute of Advanced Industrial Science and Technology (AIST) Information Technology Research Institute AIST Tsukuba Central 2, 1-1-1

More information

cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry

cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry Steven Scher December 2, 2004 Steven Scher SteveScher@alumni.princeton.edu Abstract Three-dimensional

More information

AUTOMATIC parking systems consist of three components:

AUTOMATIC parking systems consist of three components: 616 IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL. 59, NO. 2, FEBRUARY 2010 Uniform User Interface for Semiautomatic Parking Slot Marking Recognition Ho Gi Jung, Member, IEEE, Yun Hee Lee, Member, IEEE,

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Digital Image Processing Fundamentals

Digital Image Processing Fundamentals Ioannis Pitas Digital Image Processing Fundamentals Chapter 7 Shape Description Answers to the Chapter Questions Thessaloniki 1998 Chapter 7: Shape description 7.1 Introduction 1. Why is invariance to

More information

MATHEMATICAL IMAGE PROCESSING FOR AUTOMATIC NUMBER PLATE RECOGNITION SYSTEM

MATHEMATICAL IMAGE PROCESSING FOR AUTOMATIC NUMBER PLATE RECOGNITION SYSTEM J. KSIAM Vol.14, No.1, 57 66, 2010 MATHEMATICAL IMAGE PROCESSING FOR AUTOMATIC NUMBER PLATE RECOGNITION SYSTEM SUNHEE KIM, SEUNGMI OH, AND MYUNGJOO KANG DEPARTMENT OF MATHEMATICAL SCIENCES, SEOUL NATIONAL

More information

Scanning Real World Objects without Worries 3D Reconstruction

Scanning Real World Objects without Worries 3D Reconstruction Scanning Real World Objects without Worries 3D Reconstruction 1. Overview Feng Li 308262 Kuan Tian 308263 This document is written for the 3D reconstruction part in the course Scanning real world objects

More information

SUPPLEMENTARY FILE S1: 3D AIRWAY TUBE RECONSTRUCTION AND CELL-BASED MECHANICAL MODEL. RELATED TO FIGURE 1, FIGURE 7, AND STAR METHODS.

SUPPLEMENTARY FILE S1: 3D AIRWAY TUBE RECONSTRUCTION AND CELL-BASED MECHANICAL MODEL. RELATED TO FIGURE 1, FIGURE 7, AND STAR METHODS. SUPPLEMENTARY FILE S1: 3D AIRWAY TUBE RECONSTRUCTION AND CELL-BASED MECHANICAL MODEL. RELATED TO FIGURE 1, FIGURE 7, AND STAR METHODS. 1. 3D AIRWAY TUBE RECONSTRUCTION. RELATED TO FIGURE 1 AND STAR METHODS

More information

ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW

ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW ROBUST LINE-BASED CALIBRATION OF LENS DISTORTION FROM A SINGLE VIEW Thorsten Thormählen, Hellward Broszio, Ingolf Wassermann thormae@tnt.uni-hannover.de University of Hannover, Information Technology Laboratory,

More information

Robot localization method based on visual features and their geometric relationship

Robot localization method based on visual features and their geometric relationship , pp.46-50 http://dx.doi.org/10.14257/astl.2015.85.11 Robot localization method based on visual features and their geometric relationship Sangyun Lee 1, Changkyung Eem 2, and Hyunki Hong 3 1 Department

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

Geometrical Feature Extraction Using 2D Range Scanner

Geometrical Feature Extraction Using 2D Range Scanner Geometrical Feature Extraction Using 2D Range Scanner Sen Zhang Lihua Xie Martin Adams Fan Tang BLK S2, School of Electrical and Electronic Engineering Nanyang Technological University, Singapore 639798

More information

Image Features: Local Descriptors. Sanja Fidler CSC420: Intro to Image Understanding 1/ 58

Image Features: Local Descriptors. Sanja Fidler CSC420: Intro to Image Understanding 1/ 58 Image Features: Local Descriptors Sanja Fidler CSC420: Intro to Image Understanding 1/ 58 [Source: K. Grauman] Sanja Fidler CSC420: Intro to Image Understanding 2/ 58 Local Features Detection: Identify

More information

Color Characterization and Calibration of an External Display

Color Characterization and Calibration of an External Display Color Characterization and Calibration of an External Display Andrew Crocker, Austin Martin, Jon Sandness Department of Math, Statistics, and Computer Science St. Olaf College 1500 St. Olaf Avenue, Northfield,

More information

Intensity Augmented ICP for Registration of Laser Scanner Point Clouds

Intensity Augmented ICP for Registration of Laser Scanner Point Clouds Intensity Augmented ICP for Registration of Laser Scanner Point Clouds Bharat Lohani* and Sandeep Sashidharan *Department of Civil Engineering, IIT Kanpur Email: blohani@iitk.ac.in. Abstract While using

More information

Construction Progress Management and Interior Work Analysis Using Kinect 3D Image Sensors

Construction Progress Management and Interior Work Analysis Using Kinect 3D Image Sensors 33 rd International Symposium on Automation and Robotics in Construction (ISARC 2016) Construction Progress Management and Interior Work Analysis Using Kinect 3D Image Sensors Kosei Ishida 1 1 School of

More information

[10] Industrial DataMatrix barcodes recognition with a random tilt and rotating the camera

[10] Industrial DataMatrix barcodes recognition with a random tilt and rotating the camera [10] Industrial DataMatrix barcodes recognition with a random tilt and rotating the camera Image processing, pattern recognition 865 Kruchinin A.Yu. Orenburg State University IntBuSoft Ltd Abstract The

More information

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University. 3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction

More information

CHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION

CHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION CHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION In this chapter we will discuss the process of disparity computation. It plays an important role in our caricature system because all 3D coordinates of nodes

More information

Automatic Feature Extraction of Pose-measuring System Based on Geometric Invariants

Automatic Feature Extraction of Pose-measuring System Based on Geometric Invariants Automatic Feature Extraction of Pose-measuring System Based on Geometric Invariants Yan Lin 1,2 Bin Kong 2 Fei Zheng 2 1 Center for Biomimetic Sensing and Control Research, Institute of Intelligent Machines,

More information

Reconstructing Images of Bar Codes for Construction Site Object Recognition 1

Reconstructing Images of Bar Codes for Construction Site Object Recognition 1 Reconstructing Images of Bar Codes for Construction Site Object Recognition 1 by David E. Gilsinn 2, Geraldine S. Cheok 3, Dianne P. O Leary 4 ABSTRACT: This paper discusses a general approach to reconstructing

More information

Exploring Curve Fitting for Fingers in Egocentric Images

Exploring Curve Fitting for Fingers in Egocentric Images Exploring Curve Fitting for Fingers in Egocentric Images Akanksha Saran Robotics Institute, Carnegie Mellon University 16-811: Math Fundamentals for Robotics Final Project Report Email: asaran@andrew.cmu.edu

More information

AR Cultural Heritage Reconstruction Based on Feature Landmark Database Constructed by Using Omnidirectional Range Sensor

AR Cultural Heritage Reconstruction Based on Feature Landmark Database Constructed by Using Omnidirectional Range Sensor AR Cultural Heritage Reconstruction Based on Feature Landmark Database Constructed by Using Omnidirectional Range Sensor Takafumi Taketomi, Tomokazu Sato, and Naokazu Yokoya Graduate School of Information

More information

Research on the Measurement Method of the Detection Range of Vehicle Reversing Assisting System

Research on the Measurement Method of the Detection Range of Vehicle Reversing Assisting System Research on the Measurement Method of the Detection Range of Vehicle Reversing Assisting System Bowei Zou and Xiaochuan Cui Abstract This paper introduces the measurement method on detection range of reversing

More information

An Algorithm for 3D Vibration Measurement Using One Laser Scanning Vibrometer

An Algorithm for 3D Vibration Measurement Using One Laser Scanning Vibrometer 6th European Worshop on Structural Health Monitoring - Poster 3 An Algorithm for 3D Vibration Measurement Using One Laser Scanning Vibrometer D. KIM, J. KIM and K. PARK ABSTRACT 3D vibration measurement

More information

LANE DEPARTURE WARNING SYSTEM FOR VEHICLE SAFETY

LANE DEPARTURE WARNING SYSTEM FOR VEHICLE SAFETY LANE DEPARTURE WARNING SYSTEM FOR VEHICLE SAFETY 1 K. Sravanthi, 2 Mrs. Ch. Padmashree 1 P.G. Scholar, 2 Assistant Professor AL Ameer College of Engineering ABSTRACT In Malaysia, the rate of fatality due

More information

Accurately measuring 2D position using a composed moiré grid pattern and DTFT

Accurately measuring 2D position using a composed moiré grid pattern and DTFT 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 Accurately measuring 2D position using a composed moiré grid pattern and DTFT S. Van

More information

A Fast and Accurate Eyelids and Eyelashes Detection Approach for Iris Segmentation

A Fast and Accurate Eyelids and Eyelashes Detection Approach for Iris Segmentation A Fast and Accurate Eyelids and Eyelashes Detection Approach for Iris Segmentation Walid Aydi, Lotfi Kamoun, Nouri Masmoudi Department of Electrical National Engineering School of Sfax Sfax University

More information

Using Edge Detection in Machine Vision Gauging Applications

Using Edge Detection in Machine Vision Gauging Applications Application Note 125 Using Edge Detection in Machine Vision Gauging Applications John Hanks Introduction This application note introduces common edge-detection software strategies for applications such

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

HOUGH TRANSFORM CS 6350 C V

HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM The problem: Given a set of points in 2-D, find if a sub-set of these points, fall on a LINE. Hough Transform One powerful global method for detecting edges

More information

Fast Natural Feature Tracking for Mobile Augmented Reality Applications

Fast Natural Feature Tracking for Mobile Augmented Reality Applications Fast Natural Feature Tracking for Mobile Augmented Reality Applications Jong-Seung Park 1, Byeong-Jo Bae 2, and Ramesh Jain 3 1 Dept. of Computer Science & Eng., University of Incheon, Korea 2 Hyundai

More information

Pre-Calculus Guided Notes: Chapter 10 Conics. A circle is

Pre-Calculus Guided Notes: Chapter 10 Conics. A circle is Name: Pre-Calculus Guided Notes: Chapter 10 Conics Section Circles A circle is _ Example 1 Write an equation for the circle with center (3, ) and radius 5. To do this, we ll need the x1 y y1 distance formula:

More information

(a) (b) (c) Fig. 1. Omnidirectional camera: (a) principle; (b) physical construction; (c) captured. of a local vision system is more challenging than

(a) (b) (c) Fig. 1. Omnidirectional camera: (a) principle; (b) physical construction; (c) captured. of a local vision system is more challenging than An Omnidirectional Vision System that finds and tracks color edges and blobs Felix v. Hundelshausen, Sven Behnke, and Raul Rojas Freie Universität Berlin, Institut für Informatik Takustr. 9, 14195 Berlin,

More information

Vehicle Localization. Hannah Rae Kerner 21 April 2015

Vehicle Localization. Hannah Rae Kerner 21 April 2015 Vehicle Localization Hannah Rae Kerner 21 April 2015 Spotted in Mtn View: Google Car Why precision localization? in order for a robot to follow a road, it needs to know where the road is to stay in a particular

More information

Pattern Feature Detection for Camera Calibration Using Circular Sample

Pattern Feature Detection for Camera Calibration Using Circular Sample Pattern Feature Detection for Camera Calibration Using Circular Sample Dong-Won Shin and Yo-Sung Ho (&) Gwangju Institute of Science and Technology (GIST), 13 Cheomdan-gwagiro, Buk-gu, Gwangju 500-71,

More information

Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC

Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC Shuji Sakai, Koichi Ito, Takafumi Aoki Graduate School of Information Sciences, Tohoku University, Sendai, 980 8579, Japan Email: sakai@aoki.ecei.tohoku.ac.jp

More information

IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, VOL. 6, NO. 5, SEPTEMBER

IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, VOL. 6, NO. 5, SEPTEMBER IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, VOL. 6, NO. 5, SEPTEMBER 2012 411 Consistent Stereo-Assisted Absolute Phase Unwrapping Methods for Structured Light Systems Ricardo R. Garcia, Student

More information

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth Common Classification Tasks Recognition of individual objects/faces Analyze object-specific features (e.g., key points) Train with images from different viewing angles Recognition of object classes Analyze

More information

Image Processing Fundamentals. Nicolas Vazquez Principal Software Engineer National Instruments

Image Processing Fundamentals. Nicolas Vazquez Principal Software Engineer National Instruments Image Processing Fundamentals Nicolas Vazquez Principal Software Engineer National Instruments Agenda Objectives and Motivations Enhancing Images Checking for Presence Locating Parts Measuring Features

More information

Designing Applications that See Lecture 7: Object Recognition

Designing Applications that See Lecture 7: Object Recognition stanford hci group / cs377s Designing Applications that See Lecture 7: Object Recognition Dan Maynes-Aminzade 29 January 2008 Designing Applications that See http://cs377s.stanford.edu Reminders Pick up

More information

Vehicle Detection Method using Haar-like Feature on Real Time System

Vehicle Detection Method using Haar-like Feature on Real Time System Vehicle Detection Method using Haar-like Feature on Real Time System Sungji Han, Youngjoon Han and Hernsoo Hahn Abstract This paper presents a robust vehicle detection approach using Haar-like feature.

More information

Triangular Mesh Segmentation Based On Surface Normal

Triangular Mesh Segmentation Based On Surface Normal ACCV2002: The 5th Asian Conference on Computer Vision, 23--25 January 2002, Melbourne, Australia. Triangular Mesh Segmentation Based On Surface Normal Dong Hwan Kim School of Electrical Eng. Seoul Nat

More information

Charting new territory: Formulating the Dalivian coordinate system

Charting new territory: Formulating the Dalivian coordinate system Parabola Volume 53, Issue 2 (2017) Charting new territory: Formulating the Dalivian coordinate system Olivia Burton and Emma Davis 1 Numerous coordinate systems have been invented. The very first and most

More information

A threshold decision of the object image by using the smart tag

A threshold decision of the object image by using the smart tag A threshold decision of the object image by using the smart tag Chang-Jun Im, Jin-Young Kim, Kwan Young Joung, Ho-Gil Lee Sensing & Perception Research Group Korea Institute of Industrial Technology (

More information

UAV Position and Attitude Sensoring in Indoor Environment Using Cameras

UAV Position and Attitude Sensoring in Indoor Environment Using Cameras UAV Position and Attitude Sensoring in Indoor Environment Using Cameras 1 Peng Xu Abstract There are great advantages of indoor experiment for UAVs. Test flights of UAV in laboratory is more convenient,

More information

Index. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 263

Index. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 263 Index 3D reconstruction, 125 5+1-point algorithm, 284 5-point algorithm, 270 7-point algorithm, 265 8-point algorithm, 263 affine point, 45 affine transformation, 57 affine transformation group, 57 affine

More information

Robotics Programming Laboratory

Robotics Programming Laboratory Chair of Software Engineering Robotics Programming Laboratory Bertrand Meyer Jiwon Shin Lecture 8: Robot Perception Perception http://pascallin.ecs.soton.ac.uk/challenges/voc/databases.html#caltech car

More information

Automatic free parking space detection by using motion stereo-based 3D reconstruction

Automatic free parking space detection by using motion stereo-based 3D reconstruction Automatic free parking space detection by using motion stereo-based 3D reconstruction Computer Vision Lab. Jae Kyu Suhr Contents Introduction Fisheye lens calibration Corresponding point extraction 3D

More information

Lecture Outlines Chapter 26

Lecture Outlines Chapter 26 Lecture Outlines Chapter 26 11/18/2013 2 Chapter 26 Geometrical Optics Objectives: After completing this module, you should be able to: Explain and discuss with diagrams, reflection and refraction of light

More information

Name. Center axis. Introduction to Conic Sections

Name. Center axis. Introduction to Conic Sections Name Introduction to Conic Sections Center axis This introduction to conic sections is going to focus on what they some of the skills needed to work with their equations and graphs. year, we will only

More information

Chapter 11 Arc Extraction and Segmentation

Chapter 11 Arc Extraction and Segmentation Chapter 11 Arc Extraction and Segmentation 11.1 Introduction edge detection: labels each pixel as edge or no edge additional properties of edge: direction, gradient magnitude, contrast edge grouping: edge

More information

Robotics Project. Final Report. Computer Science University of Minnesota. December 17, 2007

Robotics Project. Final Report. Computer Science University of Minnesota. December 17, 2007 Robotics Project Final Report Computer Science 5551 University of Minnesota December 17, 2007 Peter Bailey, Matt Beckler, Thomas Bishop, and John Saxton Abstract: A solution of the parallel-parking problem

More information

Chapter - 2: Geometry and Line Generations

Chapter - 2: Geometry and Line Generations Chapter - 2: Geometry and Line Generations In Computer graphics, various application ranges in different areas like entertainment to scientific image processing. In defining this all application mathematics

More information

Capturing, Modeling, Rendering 3D Structures

Capturing, Modeling, Rendering 3D Structures Computer Vision Approach Capturing, Modeling, Rendering 3D Structures Calculate pixel correspondences and extract geometry Not robust Difficult to acquire illumination effects, e.g. specular highlights

More information

Data Term. Michael Bleyer LVA Stereo Vision

Data Term. Michael Bleyer LVA Stereo Vision Data Term Michael Bleyer LVA Stereo Vision What happened last time? We have looked at our energy function: E ( D) = m( p, dp) + p I < p, q > N s( p, q) We have learned about an optimization algorithm that

More information

Building Reliable 2D Maps from 3D Features

Building Reliable 2D Maps from 3D Features Building Reliable 2D Maps from 3D Features Dipl. Technoinform. Jens Wettach, Prof. Dr. rer. nat. Karsten Berns TU Kaiserslautern; Robotics Research Lab 1, Geb. 48; Gottlieb-Daimler- Str.1; 67663 Kaiserslautern;

More information

Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera

Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Tomokazu Sato, Masayuki Kanbara and Naokazu Yokoya Graduate School of Information Science, Nara Institute

More information

Calibration of a rotating multi-beam Lidar

Calibration of a rotating multi-beam Lidar The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems October 18-22, 2010, Taipei, Taiwan Calibration of a rotating multi-beam Lidar Naveed Muhammad 1,2 and Simon Lacroix 1,2 Abstract

More information

Sensory Augmentation for Increased Awareness of Driving Environment

Sensory Augmentation for Increased Awareness of Driving Environment Sensory Augmentation for Increased Awareness of Driving Environment Pranay Agrawal John M. Dolan Dec. 12, 2014 Technologies for Safe and Efficient Transportation (T-SET) UTC The Robotics Institute Carnegie

More information

Computational Foundations of Cognitive Science

Computational Foundations of Cognitive Science Computational Foundations of Cognitive Science Lecture 16: Models of Object Recognition Frank Keller School of Informatics University of Edinburgh keller@inf.ed.ac.uk February 23, 2010 Frank Keller Computational

More information

Real-Time Document Image Retrieval for a 10 Million Pages Database with a Memory Efficient and Stability Improved LLAH

Real-Time Document Image Retrieval for a 10 Million Pages Database with a Memory Efficient and Stability Improved LLAH 2011 International Conference on Document Analysis and Recognition Real-Time Document Image Retrieval for a 10 Million Pages Database with a Memory Efficient and Stability Improved LLAH Kazutaka Takeda,

More information

3D Terrain Sensing System using Laser Range Finder with Arm-Type Movable Unit

3D Terrain Sensing System using Laser Range Finder with Arm-Type Movable Unit 3D Terrain Sensing System using Laser Range Finder with Arm-Type Movable Unit 9 Toyomi Fujita and Yuya Kondo Tohoku Institute of Technology Japan 1. Introduction A 3D configuration and terrain sensing

More information

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching

More information

Monocular Vision-based Displacement Measurement System Robust to Angle and Distance Using Homography

Monocular Vision-based Displacement Measurement System Robust to Angle and Distance Using Homography 6 th International Conference on Advances in Experimental Structural Engineering 11 th International Workshop on Advanced Smart Materials and Smart Structures Technology August 1-2, 2015, University of

More information

Tutorial: Using Tina Vision s Quantitative Pattern Recognition Tool.

Tutorial: Using Tina Vision s Quantitative Pattern Recognition Tool. Tina Memo No. 2014-004 Internal Report Tutorial: Using Tina Vision s Quantitative Pattern Recognition Tool. P.D.Tar. Last updated 07 / 06 / 2014 ISBE, Medical School, University of Manchester, Stopford

More information