Automated Correction and Updating of Road Databases from High-Resolution Imagery
Automated Correction and Updating of Road Databases from High-Resolution Imagery

M.-F. Auclair Fortier (1), D. Ziou (1), C. Armenakis (2) and S. Wang (1)

(1) Département de mathématiques et informatique, Université de Sherbrooke, 2500 Blv Université, Sherbrooke, Québec, Canada, J1K 2R1
(2) Center for Topographic Information, Geomatics Canada, 615 Booth Str, Ottawa, Ontario, Canada, K1A 0E9

Technical Report No. 241

Abstract

Our work addresses the correction and updating of road map data from georeferenced aerial images. This task requires the solution of two underlying problems: a) the weak positional accuracy of the existing road locations, and b) the detection of new roads. To correct the position of the existing road network from the imagery, we use an active contour ("snakes") optimization approach, with a line enhancement function. The initialization of the snakes is based on a priori knowledge derived from the existing vector road data coming from the National Topographic Database of Geomatics Canada, and from line junctions computed from the image by a new detector developed for this application. To generate hypotheses for new roads, a road-following algorithm is applied, starting from line intersections that are already in the existing road network. Experimental results are presented to validate the approach and to demonstrate the usefulness of line junctions in this kind of application.
1 Introduction

Road detection is a time-consuming operation when performed manually. Fortunately, aerial imagery tends to facilitate this process, but complete automation is still far off with the methods used to date. Like edge detection [16, 1], road detection is a type of feature extraction which is almost always context-dependent. In our case, we use high-resolution georeferenced images, that is, images in which the geographic position of each pixel is known, and the existing road network from the National Topographic Database of Geomatics Canada. There are two problems with this database: 1) it is not accurate enough (Figure 1) and 2) new roads are not included. The aim of our work is to automatically correct and update this road network.

Figure 1: Example of imprecision in the road database. White lines represent road locations from the database.

Much work has been done on automating road detection [2, 3, 6, 8, 9, 10, 11, 14]. Among the methods developed, we can particularly mention the work of Klang [13]. He developed a method for detecting changes between an existing road database and a satellite image. First, he used the road database to initialize an optimization process, using a snake approach to correct road locations. Then, he ran a line-following process using a statistical approach to detect new roads, starting from the existing network. We have noticed that none of the reviewed methods make use of junctions in the image. These junctions are generally reliable information and, since roads form networks, they are very relevant in this context. There are significant differences between our approach and Klang's. First, we add line junctions to his correction scheme to improve the matching between the road database and the image. Second, we generate hypotheses for new roads by following lines starting from line junctions near the known road network. We describe our approach in Section 2. In Sections 3 and 4, we present line and line-junction detection.
In Section 5, we describe the correction of road location. In Section 6, we present a hypothesis-generation scheme for new roads. In Section 7, we present results obtained from a georeferenced image and a road database, both supplied by Geomatics Canada. Finally, conclusions are discussed in Section 8.
2 Approach

Geometric correction of the database is the first step in updating road maps. Because the road database has been obtained from various sources, including scanned road maps, the location of roads is not precise. However, this information allows a good distinction between image features which are roads and those which are not. In aerial images, roads are generally long, thin features. Thus we want to find lines which are near the initialization location provided by the road database. This prompts us to use active contour models (snakes), which are optimization curves requiring an initialization close to the solution. To bring this initialization closer to the solution, we match each intersection in the road database with at most one junction found in the image. Thus, we re-localize road segments according to the difference between database intersections and image junctions. After re-localization, we apply a line optimization process, using the snake approach. The second step in updating road maps is the addition of new roads. We know that roads are built in networks and that new roads are linked with old ones. We thus generate hypotheses for new roads by following lines from line junctions near the known network. The algorithm can be summarized by the following six steps:

1. Reduction of the image resolution, when it is not low enough to have roads appear as lines, by resampling the image.
2. Extraction of lines using Steger's algorithm [15], which is presented in Section 3.
3. Using the results of the line detection step, location of the line junctions (Section 4).
4. Matching of the extremities of road segments with line junctions and displacement of road segments according to the difference between their extremities and the matched image junctions (Section 5.1).
5. Re-localization of each road segment individually, using the snake approach (Section 5.2).
6. Line following, starting from junctions located near the existing road network, to generate hypotheses for new roads (Section 6).
3 Detection of Lines from the Image

In aerial and satellite images, roads are long, thin elements. Usually, they are bright lines against a dark background. To detect these, we use Steger's line detector, which can be summarized as follows [15]: a bright line is a maximum in the gray-level image in a particular direction. Many line detection algorithms have been proposed [16]. The basic idea behind the one we are using is that the second directional derivative of a bright-line profile on a dark background has a negative minimum while its first directional derivative is 0 (Figure 2). In 2D, the direction that yields maximum curvature of the function has to be found. Let f(x, y) be a discrete image. This direction can be determined by computing the eigenvalues and eigenvectors of the Hessian matrix:

    H(x, y) = [ f_xx  f_xy ]
              [ f_xy  f_yy ]
Figure 2: (a) Line profile. (b) First derivative. (c) Second derivative.

where f_xx, f_xy and f_yy are the second-order derivatives of f(x, y). The direction giving maximum curvature is the direction perpendicular to the line. It is defined as the direction of the eigenvector corresponding to the eigenvalue with maximum absolute value. If this eigenvalue is negative, we have a bright line on a dark background; if it is positive, we have a dark line on a light background. The eigenvalues of H(x, y) can easily be calculated by:

    λ = ( (f_xx + f_yy) ± √( (f_xx − f_yy)² + 4 f_xy² ) ) / 2

The normalized direction perpendicular to the line is

    n = (n_x, n_y) = { (λ_max − f_yy, f_xy) / √( (λ_max − f_yy)² + f_xy² )    if f_xy = λ_max − f_xx = 0
                     { (f_xy, λ_max − f_xx) / √( f_xy² + (λ_max − f_xx)² )    elsewhere        (1)

where λ_max is the eigenvalue with maximum absolute value at point (x, y). Using this eigenvalue, we can define a measure of line plausibility: when we search for dark lines, the line plausibility is λ_max, and when we search for bright lines, the line plausibility is −λ_max. Working now in 1D, to find the line location along the profile direction, we prefer a polynomial approach to simple non-maxima suppression. Such an approach finds the sub-pixel position of the line. Thus we find the positions where the first derivative of the Taylor polynomial in direction n is 0 and its second derivative has a negative minimum. Let p = t·n, with

    t = −(f_x n_x + f_y n_y) / (f_xx n_x² + 2 f_xy n_x n_y + f_yy n_y²)

where f_x and f_y are the first-order partial derivatives of f(x, y). A pixel (x, y) is a line point if p ∈ [−1/2, 1/2] × [−1/2, 1/2], and λ_max is used to select salient bright lines. When the line's width is greater than one pixel, we can use suppression of non-minima (we are looking for bright lines) on λ_max in the direction n. For the computation of derivatives, we use convolutions with derivatives of a Gaussian of parameter σ.
Figure 3 presents an image, its line plausibility and the lines obtained at σ = 1.
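As an illustration, the Hessian-based bright-line plausibility of Section 3 can be sketched in a few lines. This is a minimal sketch, not the authors' code: the helper names `gauss_kernel`, `sep_conv` and `bright_line_plausibility` are ours, and the sub-pixel localization step is omitted.

```python
import numpy as np

def gauss_kernel(sigma, order):
    """1-D Gaussian (order 0) or its first/second derivative."""
    r = int(3 * sigma + 0.5)
    x = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    if order == 0:
        return g
    if order == 1:
        return -x / sigma ** 2 * g
    return (x ** 2 - sigma ** 2) / sigma ** 4 * g  # second derivative

def sep_conv(f, k_row, k_col):
    """Separable 'same'-size convolution: k_col along rows, then k_row down columns."""
    out = np.apply_along_axis(lambda m: np.convolve(m, k_col, 'same'), 1, f)
    return np.apply_along_axis(lambda m: np.convolve(m, k_row, 'same'), 0, out)

def bright_line_plausibility(image, sigma=1.0):
    """-lambda_max of the Hessian (positive on bright lines, as in Section 3)."""
    f = image.astype(float)
    g0, g1, g2 = (gauss_kernel(sigma, o) for o in (0, 1, 2))
    fxx = sep_conv(f, g0, g2)   # second derivative along x (columns)
    fyy = sep_conv(f, g2, g0)   # second derivative along y (rows)
    fxy = sep_conv(f, g1, g1)
    half_tr = (fxx + fyy) / 2
    root = np.sqrt(((fxx - fyy) / 2) ** 2 + fxy ** 2)
    lam1, lam2 = half_tr + root, half_tr - root
    lam_max = np.where(np.abs(lam1) >= np.abs(lam2), lam1, lam2)
    return np.maximum(-lam_max, 0.0)
```

On a synthetic one-pixel-wide bright row, the plausibility peaks on the line, which is the property the detector thresholds.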
Figure 3: a) Original image. b) Bright-line plausibility (−λ_max) (σ = 1). c) Detected lines.

4 Detection of Line Junctions

In the context of road detection in aerial images, we define junctions as line intersections. Image junctions are very useful features, particularly in solving matching problems, because they are generally reliable, stable elements. Unfortunately, we have noticed that they are not used for road detection. One reason is that, to the best of our knowledge, no line-junction detector existed before the beginning of this project. We use a line-junction detector developed in our Computer Vision Laboratory, based on a measure of curvature between the direction vectors of the line pixels within a given neighborhood [5]. This detector also finds line terminations. The measure is estimated by computing the mean dissimilarity between the orientation vectors of the line pixels:

    (1/N) Σ_{Δx=−M}^{M} Σ_{Δy=−M}^{M} β(Δx, Δy) [ 1 − | n(x, y) · n(x+Δx, y+Δy) | ]

where n(x, y) is the normalized line direction at point (x, y) from Equation 1, · is the dot product,

    β(Δx, Δy) = { e^{−(Δx² + Δy²)}   if a line exists between (x, y) and (x+Δx, y+Δy)
                { 0                  otherwise

and N is the number of points (x+Δx, y+Δy) such that β ≠ 0. The junction is localized in a high-curvature region using non-maxima suppression. For more details, see the report on this detector [5]. Figure 4 presents the results of the line-junction detector.

5 Correction of Road Location

As we will see in more detail in Section 5.2, the correction of road location involves two steps. The first is the re-localization of the known roads to bring them closer to the final snake solution. The second step is the snake process itself. These two steps are presented in the next two sections.
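The curvature measure of Section 4 can be sketched as follows. This is only an illustrative sketch of the measure in [5]: the function name `junction_measure` and the array layout are our assumptions, and the "a line exists between the two pixels" test inside β is simplified to a membership test on the end pixel.

```python
import numpy as np

def junction_measure(n_field, line_mask, x, y, M=3):
    """Mean orientation dissimilarity around (x, y).
    n_field: (H, W, 2) unit line directions; line_mask: bool (H, W)."""
    if not line_mask[y, x]:
        return 0.0
    acc, N = 0.0, 0
    for dy in range(-M, M + 1):
        for dx in range(-M, M + 1):
            u, v = x + dx, y + dy
            if (dx, dy) == (0, 0):
                continue
            if not (0 <= v < line_mask.shape[0] and 0 <= u < line_mask.shape[1]):
                continue
            if not line_mask[v, u]:  # beta = 0: no line pixel there
                continue
            beta = np.exp(-(dx ** 2 + dy ** 2))
            acc += beta * (1.0 - abs(np.dot(n_field[y, x], n_field[v, u])))
            N += 1
    return acc / N if N else 0.0
```

On a straight line all neighboring directions agree and the measure is near zero; at a corner of an L-shaped line the perpendicular directions make it clearly positive, which is what the non-maxima suppression then localizes.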
Figure 4: Line junctions.

5.1 Matching the Road Database with the Image

The active contour model is an iterative process which needs an initialization close to the final solution. This initialization must fall within the attraction zone of the snake. We propose to use line junctions to re-localize road locations from the road database, because they indicate the positions of road intersections in the image. For this purpose, we use a correlation measure between certain points in the road database and line junctions. The road database provided by the Center for Topographic Information of Geomatics Canada is made up of lists of georeferenced points defining line segments. We call these line segments vectors. Since these points are in georeferenced coordinates, we have to convert them into coordinates relative to the origin of the image. Then, we group the vectors into road segments in order to reduce the number of snakes. These road segments must be as straight as possible, because this allows us to add a constraint of high rigidity to the snake. This high rigidity helps avoid noise and shadowing problems, because the optimized contour does not follow small intensity variations caused by these phenomena. We link two vectors if they are adjacent, if the cosine of the angle between them is below a threshold (in our experiments, we always set this threshold to cos 138°) and if there is no other vector starting at the extremity common to the two vectors. Thus, we find road-segment extremities: first, where more than two vectors end at a single point; second, where the angle between two vectors is lower than a given value; and third, at terminations. A road segment is described by all the vectors between its two extremities. Figure 5(a) shows an example of road segments from the database. There are five road segments in this image. Having the list of all road segments, we match each extremity with a junction, if possible.
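The grouping of vectors into road segments can be sketched as a split of a polyline at sharp turns. This is a sketch under our reading of the cos 138° rule (the chain continues only where the head-to-tail angle between consecutive vectors exceeds 138°); the function name and the angle convention are assumptions.

```python
import numpy as np

def split_at_sharp_turns(points, cos_thresh=np.cos(np.deg2rad(138))):
    """Split a polyline into nearly straight road segments (Section 5.1).
    A break is inserted wherever the angle between the two vectors meeting
    at an interior point is not obtuse enough (cosine >= cos 138 deg)."""
    pts = np.asarray(points, dtype=float)
    segments, start = [], 0
    for i in range(1, len(pts) - 1):
        a = pts[i] - pts[i - 1]          # incoming vector
        b = pts[i + 1] - pts[i]          # outgoing vector
        # cosine of the angle between the two vectors taken tail-to-tail
        cos_ab = np.dot(-a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        if cos_ab >= cos_thresh:         # turn too sharp: new segment
            segments.append(pts[start:i + 1])
            start = i
    segments.append(pts[start:])
    return segments
```

A straight run gives cos ≈ −1 (well below cos 138° ≈ −0.74) and stays in one segment; a right-angle corner gives cos = 0 and starts a new segment there.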
For this purpose, we compute a correlation measure between the extremity and each junction localized in an N_m × N_m neighborhood around this extremity. Let (x, y) be the extremity and (u, v) the junction; then, the correlation is:

    corr(x, y, u, v) = Σ_{i=−M_m/2}^{M_m/2} Σ_{j=−M_m/2}^{M_m/2} RoadSeg(x+i, y+j) line(u+i, v+j)        (2)

where

    RoadSeg(x, y) = { 1   if (x, y) belongs to a road segment
                    { 0   if not
Figure 5: a) Road segments. Black squares are extremities. b) Matching between image junctions and road-segment extremities. White squares are junctions matched with extremities, while black squares are either extremities or other junctions. c) Road segments after re-localizing the two extremities matched with junctions.

and line(x, y) is the line plausibility (−λ_max) as defined in Section 3. We match (x, y) with the junction (u, v) having the highest correlation. Figure 5(b) presents the results of matching between the extremities of Figure 5(a) and the junctions of Figure 4. We used M_m = 25 in all of our trials. After matching, we re-localize each segment according to the difference between its extremities and the matched junctions. As the differences at the two extremities may not be the same, we move each vector extremity in the road segment according to the difference between the nearest road-segment extremity and its matched junction. This means that every vector keeps the same orientation, except the one containing the midpoint of the road segment, because its two endpoints move according to the two different extremities of the road segment (Figure 6). Figure 5(c) presents the road segments after re-localization.

5.2 Active Contour Models - Snakes

As we mentioned earlier, we want to find lines near the initialization provided by the road database. Active contour models, or snakes, are an optimization process which needs an initialization near the solution [4, 7, 12]. Aerial images can be affected by noise, and roads can be partially hidden by the shadows of trees, houses or clouds. Snakes manage these problems with constraints on rigidity and elasticity.
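The extremity-to-junction matching of Section 5.1 (Equation 2) can be sketched as follows. Hypothetical conventions: images are indexed [row, column], points are given as (x, y) = (column, row), and N_m is taken as the side of the search window; none of this detail is stated in the report.

```python
import numpy as np

def match_extremity(extremity, junctions, road_mask, plaus, Mm=25, Nm=13):
    """Match a road-segment extremity with at most one image junction.
    road_mask: binary RoadSeg image; plaus: line-plausibility image."""
    x, y = extremity
    h = Mm // 2
    best, best_corr = None, 0.0
    for (u, v) in junctions:
        if max(abs(u - x), abs(v - y)) > Nm // 2:   # outside the Nm x Nm window
            continue
        corr = 0.0
        for i in range(-h, h + 1):                   # Equation 2
            for j in range(-h, h + 1):
                if (0 <= y + j < road_mask.shape[0]
                        and 0 <= x + i < road_mask.shape[1]
                        and 0 <= v + j < plaus.shape[0]
                        and 0 <= u + i < plaus.shape[1]):
                    corr += road_mask[y + j, x + i] * plaus[v + j, u + i]
        if corr > best_corr:
            best, best_corr = (u, v), corr
    return best                                      # None if nothing matched
```

The correlation is high when the road-segment pixels, translated onto the junction, fall on high line plausibility, so the chosen junction is the one whose surroundings best overlap the segment.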
In the continuous domain, snakes (or active contours) are parametric curves (v(s, t) = (x(s, t), y(s, t)), s ∈ [0, 1]) that minimize an energy function based on internal and external constraints [4] at time t:

    E_tot = E_int + λ E_ext = ∫_0^1 [ α_1 |v′(s, t)|² + β_1 |v″(s, t)|² ] ds + λ ∫_0^1 λ_max(v(s, t)) ds        (3)

where E_ext is the external energy, E_int the internal energy and λ a regularization parameter. The internal energy represents the snake's capacity to distort; α_1, β_1 ∈ R⁺, and v′ and v″ are the first and second partial derivatives in s. The term |v′(s, t)|² influences the snake's length (or tension) and |v″(s, t)|² influences the curvature (or elasticity) of the snake. The parameter λ can be absorbed by α and β (α = α_1/λ and β = β_1/λ).
Figure 6: Example of displacement of a road segment after matching.

The external energy represents the image force (f(x, y)) attracting the snake. In our case, this force derives from the bright-line plausibility (f(x, y) = λ_max(x, y)). Equation 3 defines an evolution problem (the snake position evolves over time). The solution to this minimization problem is the simplified Euler evolution equation:

    γ ∂v/∂t (s, t) − α v″(s, t) + β v⁗(s, t) = −∇E_ext(v(s, t))        (4)

where v⁗ is the fourth derivative in s. The parameter γ represents the viscosity of the curve: the higher the viscosity, the greater the inertia and the slower the evolution of the curve. The discrete version of Equation 4 is obtained by setting a discrete version of the parametric curve at moment t: V_t = {v_{i,t} = (x_{i,t}, y_{i,t}), i = 1..n}, where n is the number of points on the curve. We approximate the first derivative by:

    v′_{i,t} = v_{i,t} − v_{i−1,t}

the second derivative by:

    v″_{i,t} = v_{i+1,t} − 2 v_{i,t} + v_{i−1,t}

and the temporal derivative by:

    ∂v_{i}/∂t ≈ v_{i,t} − v_{i,t−1}

The solution can be written in the following vectorial form:

    γ(V_t − V_{t−1}) − α V″_t + β V⁗_t = −∇E_ext(V_{t−1})        (5)
At point v_{i,t}, we have:

    γ(v_{i,t} − v_{i,t−1}) − α(v_{i+1,t} − 2v_{i,t} + v_{i−1,t}) + β(v_{i+2,t} − 4v_{i+1,t} + 6v_{i,t} − 4v_{i−1,t} + v_{i−2,t}) = −∇E_ext(v_{i,t−1})        (6)

We can rewrite Equation 5 in matrix form:

    (K + γI) V_t = γ V_{t−1} − ∇E_ext(V_{t−1})        (7)

where I is the n × n identity matrix and K is an n × n matrix. The elements of K are determined by Equation 6 and the initial conditions; the snake has free extremities, i.e. v″(0, t) = v‴(0, t) = v″(1, t) = v‴(1, t) = 0. Thus K is defined by:

        [  β      −2β      β       0      ...                            0 ]
        [ −α−2β   2α+5β   −α−4β    β       0     ...                     0 ]
        [  β     −α−4β    2α+6β   −α−4β    β      0     ...              0 ]
    K = [  0       β     −α−4β    2α+6β   −α−4β   β      0    ...        0 ]
        [ ...            ...             ...            ...            ... ]
        [  0      ...      0       β     −α−4β   2α+6β  −α−4β    β       0 ]
        [  0      ...              0       β    −α−4β   2α+5β  −α−2β       ]
        [  0      ...                      0       β     −2β      β        ]

γ is given by [4]:

    γ = ||∇E_ext(V_{t−1})|| / (Δ √n)

where Δ is the mean progression of the curve. The initial Δ must be determined by the user. We keep the same Δ until the total energy of the curve increases, and then we decrease Δ by a factor of 0.75. We can approximate the total energy of the curve at time t by:

    E_tot,t = Σ_{i=1}^{n} [ E_ext(v_{i,t}) + α|v′_{i,t}|² + β|v″_{i,t}|² ]

To summarize the correction step, for each road segment we do the following:

1. Initialize a snake curve (V_0) with the re-localized road segment found in the matching step. Each pixel belonging to the curve is a v_{i,0}. Set t to 1.
2. Compute V_t from Equation 7.
3. If the total energy of the curve decreases, then increment t and go to step 2.
4. If Δ is not too low, decrease Δ, increment t and go to step 2.

Figure 7 shows an example of a snake used to optimize a line plausibility function (α = 100 and β = 0). For all trials, we used an initial Δ = 1. Clearly, the initialization is beside the line. After 18 iterations, the snake is at the center of the line.
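One semi-implicit update of Equation 7 can be sketched as follows. This is a minimal sketch: K is filled row by row from the free-extremity matrix of Section 5.2, and `snake_step` assumes the external-energy gradient is supplied precomputed (the viscosity schedule on Δ is not reproduced here).

```python
import numpy as np

def snake_matrix(n, alpha, beta):
    """Stiffness matrix K of Section 5.2 with free extremities."""
    K = np.zeros((n, n))
    for i in range(2, n - 2):                      # interior rows
        K[i, i - 2:i + 3] = [beta, -alpha - 4 * beta, 2 * alpha + 6 * beta,
                             -alpha - 4 * beta, beta]
    K[0, :3] = [beta, -2 * beta, beta]             # free-end boundary rows
    K[1, :4] = [-alpha - 2 * beta, 2 * alpha + 5 * beta,
                -alpha - 4 * beta, beta]
    K[-2, -4:] = [beta, -alpha - 4 * beta,
                  2 * alpha + 5 * beta, -alpha - 2 * beta]
    K[-1, -3:] = [beta, -2 * beta, beta]
    return K

def snake_step(V, grad_E_ext, gamma, K):
    """One update: solve (K + gamma I) V_t = gamma V_{t-1} - grad E_ext.
    V and grad_E_ext are (n, 2) arrays of curve points / gradients."""
    return np.linalg.solve(K + gamma * np.eye(len(V)), gamma * V - grad_E_ext)
```

With a zero external gradient the update acts as pure regularization: a zig-zag curve is smoothed toward a straighter one, which is the rigidity behavior discussed in Section 7.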
Figure 7: Snake. a) Initialization. Every pixel is a point in V_0. b) After 18 iterations.

6 Generation of Hypotheses for New Roads

As mentioned earlier, the second problem in updating the road database is the detection of new roads. These new roads are also bright lines on a dark background, so the result of the line detector can be used. Our work is based on the supposition that new roads are connected with the existing road network. This hypothesis is realistic because roads are built to allow people to travel from one place to another. For this reason, we consider only lines which are connected to the road network as given in the existing database. The line-following algorithm starts from junctions in an N × N neighborhood of the roads corrected using the snake approach (Figure 8).

Figure 8: Image junctions near the corrected roads (N = 7).

We follow all lines originating from these junctions. We consider each line pixel in a 5 × 5 window around a junction as a starting pixel for the line-following process (Figure 9). For each line, we proceed as follows. Initially, we take one of the starting pixels (any one). Let p be the last chosen pixel (we call it the parent). First, we consider three neighbors of this pixel in
Figure 9: A marked line junction (j), its starting pixels (s) and line pixels (l).

the line direction. For example, if the line direction at point p = (p_x, p_y) is in (π/8, 3π/8], the three neighbors to consider are (p_x+1, p_y+1), (p_x, p_y+1) and (p_x+1, p_y) (see Table 1). The line direction at p is computed from n (Equation 1). Each of these three neighbors which is marked as a line pixel is considered as a child. The next pixel selected is the child that minimizes:

    | θ_c − θ̄_p |        (8)

where θ_c is the line direction of the child and θ̄_p is the mean line direction of the parent. The mean line direction is computed over the last L pixels visited, as follows:

    θ̄_a = ( Σ_{i=0}^{L−1} θ_{a_i} ) / L

where θ_{a_i} is the line direction of the i-th pixel visited before pixel a. In the case where none of the three neighbors is marked as a line pixel, we look in a neighborhood further away in the line direction; that is, when the direction of the vector between the parent and a pixel falls in the interval [θ̄_p − π/8, θ̄_p + π/8], we consider this pixel as a child. In this case, the next pixel is the one minimizing:

    (1 − ω) || pt_c − pt_p ||² + ω | θ_c − θ̄_p |        (9)

where pt_c and pt_p are the locations of the child and parent, respectively, and ω ∈ [0, 1] is a weighting parameter. We iterate from the beginning with the retained child. If a parent has no child, the line-following process is terminated. If the new road is shorter than a given threshold (S), it is dropped. Given a line junction near the existing network, for each line pixel in a 5 × 5 neighborhood of this junction, we can summarize the line-following algorithm as follows:

1. Set the pixel as parent.
2. Compute the three neighbors in the line direction of the parent (see Table 1). If they are line pixels, set them as children.
3. If there is at least one child:
(a) Set the child that minimizes Equation 8 as parent.
(b) Return to step 2.
Table 1: Ranges of line direction and the neighbors considered, where p = (p_x, p_y) is the parent pixel. Each pair of opposite ranges (directions are taken modulo π) selects the neighbor of p lying in the line direction and its two adjacent 8-neighbors:

    (−π/8, π/8]  and (7π/8, −7π/8]   :  (p_x+1, p_y), (p_x+1, p_y−1), (p_x+1, p_y+1)
    (π/8, 3π/8]  and (−7π/8, −5π/8]  :  (p_x+1, p_y+1), (p_x, p_y+1), (p_x+1, p_y)
    (3π/8, 5π/8] and (−5π/8, −3π/8]  :  (p_x, p_y+1), (p_x−1, p_y+1), (p_x+1, p_y+1)
    (5π/8, 7π/8] and (−3π/8, −π/8]   :  (p_x−1, p_y+1), (p_x−1, p_y), (p_x, p_y+1)

4. If there is no child:
(a) Compute further children in the line direction.
(b) If there is now at least one child:
i. Set the child that minimizes Equation 9 as parent.
ii. Return to step 2.
5. If the length of the new line is less than S, then drop it.

Figure 10 shows the results of the line-following process with L = 10, S = 7 and ω = 0.9.

Figure 10: New roads found by the line-following process with L = 10, S = 7 and ω = 0.9.
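The core of steps 1-3 of the line-following algorithm can be sketched as follows. This is a sketch, not the authors' implementation: directions are taken modulo π so only four neighbor sectors are needed, step 4's remote search and step 5's length test are omitted, and a visited set stands in for the original's bookkeeping.

```python
import numpy as np

# three forward neighbors (dx, dy) per direction sector, as in Table 1
NEIGHBORS = {0: [(1, 0), (1, -1), (1, 1)],    # theta near 0 (mod pi)
             1: [(1, 1), (0, 1), (1, 0)],     # theta in (pi/8, 3pi/8]
             2: [(0, 1), (-1, 1), (1, 1)],    # theta near pi/2
             3: [(-1, 1), (-1, 0), (0, 1)]}   # theta in (5pi/8, 7pi/8]

def sector(theta):
    """Map a line direction (mod pi) to one of the 4 neighbor sectors."""
    return int(round((theta % np.pi) / (np.pi / 4))) % 4

def follow_line(start, theta, mask, L=10, max_steps=500):
    """Follow line pixels from `start` = (x, y), choosing at each step the
    child whose direction deviates least from the running mean (Equation 8)."""
    path = [start]
    recent = [theta[start[1], start[0]]]
    visited = {start}
    for _ in range(max_steps):
        x, y = path[-1]
        mean_dir = float(np.mean(recent[-L:]))       # mean over last L pixels
        children = []
        for dx, dy in NEIGHBORS[sector(mean_dir)]:
            u, v = x + dx, y + dy
            if (0 <= v < mask.shape[0] and 0 <= u < mask.shape[1]
                    and mask[v, u] and (u, v) not in visited):
                children.append((u, v))
        if not children:                             # no child: stop following
            break
        nxt = min(children, key=lambda c: abs(theta[c[1], c[0]] - mean_dir))
        path.append(nxt)
        visited.add(nxt)
        recent.append(theta[nxt[1], nxt[0]])
    return path
```

On a horizontal line of constant direction the tracker walks pixel by pixel to the line's end and stops when no forward neighbor remains.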
7 Experimental Results

Our algorithm has been implemented in C++ on a Sun platform and in Visual C++ on a Windows platform. It was tested on various sections of a georeferenced ortho image ( pixels) that we re-sampled in order to reduce the initial spatial resolution by a factor of two. The digital georeferenced ortho image mosaic used has an initial ground spatial resolution of 1.5 m and is from the Edmonton, Alberta area. It was generated by scanning 1: scale 1995 aerial photography at 1000 ppi (0.025 mm) with 8-bit quantization. The Helava/Leica digital photogrammetric workstation was used for the automatic DEM generation, while the exterior orientation parameters for each image were determined through automatic aerotriangulation using control points from the Aerial Survey Database (ASDB). The road vector data were obtained from the National Topographic Database (NTDB). These data were revised and enhanced in 1996 using GPS methods and have an estimated accuracy of about 10 m. Figure 11 presents the results obtained at each step of our correction algorithm for an image section. Figure 11(a) is the original image, on which the vectors from the road database are superimposed. Visually, we can see that in many places the vectors do not correspond to roads. Figure 11(b) shows the line plausibility. Figure 11(c) shows the result of line detection, where the scale (σ) is 2 and the line threshold is 20. Figure 11(d) shows junctions (threshold 0.1) and road-segment extremities, represented by black squares. The white squares are junctions matched with extremities (N_m = 13). We note that, except for dead ends, only one extremity is not matched (upper part of the central curve), because there is no junction in its neighborhood. If the nearest junction had been matched, the central segment would have shifted to the right, away from the road. Figure 11(e) shows the initialization after re-localizing the road segments.
Figure 11(f) presents the final position of the snakes (α = 100 and β = 4). We note that the final snake positions are closer to the centers of the lines than the original vectors. The mean displacement of road-segment points is 2.76 pixels and the variance of this displacement is 4.09. We had a mean of 31 iterations per pixel in the snake step, for a total of 641 pixels treated. The results without junction detection (Klang's approach with the same parameters as in Figure 11) are presented in Figure 12. The mean displacement of road-segment points is 3.63 and the variance of this displacement is 4.95. The mean number of iterations was 34 per pixel. Comparing Figures 11(f) and 12(b), we can see why the mean displacement in Klang's approach is higher than ours: some snake sections are displaced further from the road. Roads are better localized with our approach. Figure 13 presents the results for a second image section. Figure 13(a) shows the original image, with road segments from the database and line junctions (the threshold for line detection is 20 and the threshold for line-junction detection is 0.15) matched with road-segment extremities (N_m = 9). Figure 13(b) shows the line plausibility (σ = 2.5), and Figure 13(c) the final result of the correction step (α = 100 and β = 0). As we can see, the lower right-hand section has been well corrected. In the middle of the image, there is a street in the shape of a U. The lower right part of the U section is badly positioned compared to the original vectors. The explanation can be found in the line plausibility image: as we can see in Figure 13(b), the line plausibility is low there and the snake is attracted outside the line. We also observe, from the left-hand part of the horizontal segment (lower left, the one which crosses the river), that the rigidity of the snake is very useful. Even though the line detector does not give a straight line everywhere in this segment, the result obtained is almost straight.
Further along
this same road, we can see that it lies just beside a dark line on a lighter background. This may be a shadow or something else. The snake is thus attracted by the side of the road instead of the middle. This is a frequent problem in urban regions. The mean displacement of road-segment points is 2.26 pixels and the variance of this displacement is 1.47. We had a mean of 47 iterations per pixel in the snake step, for a total of 1957 pixels treated. For purposes of comparison, we worked without line junctions, as did Klang [13]. We took the point in a neighborhood (the same as in Figure 13(a)) where Equation 2 gives maximum correlation. The other parameters are the same. Figure 14 shows the results. The mean displacement of road-segment points is 2.35 pixels and the variance of this displacement is 2.27. The mean number of iterations was 39 per pixel. The main difference between the two approaches is that the variance of the displacement is higher when line junctions are not used. Figure 15 presents the results for a third section of the image. Figure 15(a) shows road segments from the road database and line junctions, superimposed on the original image. The scale parameter σ is 2.5 and the thresholds for line detection and junction detection are 20 and 0.09, respectively. The vertical road is shifted slightly to the left. Figure 15(b) shows the bright-line plausibility, and Figure 15(c) the final result of the correction step (α = 100 and β = 0). We can see that the vertical road is better localized. However, one intersection is slightly moved toward the bottom and the other slightly toward the top of the image (Figure 17c). From Figure 15(b), the line plausibility is lower at the intersection's location than in its neighborhood along the line direction. The snake is then attracted by the highest plausibility. One possible solution to this problem could be to find an application condition for the snake.
It could avoid displacing the road segment when the line plausibility is not good enough. We had a mean displacement of 1.68 pixels with a variance of 1.32. It took 22 iterations per pixel, for a total of 908 pixels. We now present the results concerning the generation of hypotheses for new roads. Figure 15(d) shows the final result of the new-road detection step carried out on Figure 15(a). Figure 16 presents the new-road hypothesis-generation step for the image in Figure 11(a). Figure 16(a) shows the line junctions near the known network (N = 7) and Figure 16(b) shows the final result (L = 10 and S = 7). We will now briefly discuss the influence of the different parameters. We begin with the scale parameter of the line detector (Figure 17). The scale parameter must be chosen carefully, depending on the width of the road to be detected [15]. If it is too low, the line is seen as two separate step edges; in addition, noise has a greater influence on the results. If the scale parameter is too high, smoothing may flatten the line and the plausibility will be too low. For a discussion of the line-junction parameters, see [5]. In the matching step, we have a parameter determining the maximal distance allowed between a line junction and a road extremity (N_m). This neighborhood must be large enough, in case of high imprecision, to allow re-localization of the road segment closer to the line. On the other hand, it should not be too large, because the road-segment extremity might then be matched with the wrong junction, moving the road segment away from the line (Figure 18). The snake process has three parameters: α, β and Δ. First, when the rigidity parameter (α) is high with respect to the elasticity parameter (β), the final road segment is straight (Figure 19(a)). Conversely, when α is small compared with β, the final segment tends to oscillate (Figures 19(b) and 19(c)).
The main difference between Figure 19(b) and Figure 19(c) is the number of iterations, which is higher for 19(c) (53.6 iterations per pixel) than for 19(b) (35.2 iterations per pixel). In
general, we have observed that when α = 0 and β increases, the number of iterations also increases. For roads, it is suitable that rigidity be more important than elasticity, but the snake must still be able to fit a low-curvature road. The initial mean progression (Δ) of the snake influences the number of iterations required to optimize the energy. If the initial Δ increases, then the number of iterations decreases. For example, to obtain the result shown in Figure 19(d), it took 60.3 iterations per pixel, and to obtain the result shown in Figure 8, it took 54.6 iterations per pixel. However, beyond a certain value, the snake may fall outside the attraction zone and then evolve in a wrong direction (Figure 19(e)). In the new-road hypothesis-generation step, there are two influential parameters: L and S. L is the number of pixels over which the mean direction is computed. It mainly influences the search direction in the case where there is no direct neighbor and a remote search is required (step 4 of the line-following algorithm). For example, in Figure 20(a), we used L = 5, and we can see that some lines are missing in comparison with Figure 16(b), where L = 10. S is the minimum length of line to be kept (Figures 20(b) and 20(c)).

8 Conclusion

We have presented our approach for automatically updating road maps from high-resolution aerial images. The updating of a road database involves addressing two problems: the correction of existing road locations and the addition of new roads. Since roads are long, thin elements in high-resolution images, the method is based on the detection of lines. Existing road locations are not precise, but they provide a good indication of where roads are located, as opposed to other elements such as rivers or railways. For correction, we use a snake approach, with the existing location as the line initialization.
To bring the initialization closer to the road, we re-localize road segments on the basis of line junctions extracted from the image. We use a line junction detector developed in our Vision Laboratory at the Département de mathématiques et d'informatique, Université de Sherbrooke [5]. We have presented results obtained with an image and a database provided by Geomatics Canada. For the addition of new roads, we apply a line-following algorithm starting from image junctions that belong to the existing network. We have shown the advantage of using line junctions by comparing with Klang's approach.

References

[1] M.-F. Auclair-Fortier, D. Ziou, C. Armenakis, and S. Wang. Nouvelles perspectives en détection de contour : textures et images multi-spectrales. In Vision Interface, Trois-Rivières, Canada, May.

[2] M.-F. Auclair-Fortier, D. Ziou, C. Armenakis, and S. Wang. Survey of Work on Road Extraction in Aerial and Satellite Images. Technical Report 247, Département de mathématiques et d'informatique, Université de Sherbrooke.

[3] A. Baumgartner, C.T. Steger, H. Mayer, and W. Eckstein. Semantic Objects and Context for Finding Roads. In David M. McKeown, Jr., J. Chris McGlone, and Olivier Jamet, editors,
Integrating Photogrammetric Techniques with Scene Analysis and Machine Vision III, Proc. SPIE 3072, April.

[4] M.-O. Berger. Les contours actifs : modélisation, comportement et convergence. Thèse de doctorat, Institut National Polytechnique de Lorraine.

[5] F. Deschênes and D. Ziou. Detection of Line Junctions and Line Terminations Using Curvilinear Features. Technical Report 235, Département de mathématiques et d'informatique, Université de Sherbrooke.

[6] R. Fiset, F. Cavayas, B. Solaiman, M.-C. Mouchot, and R. Desjardins. Map-Image Matching Using Multi-Layer Perceptron: the Case of the Road Network. ISPRS Journal of Photogrammetry and Remote Sensing, 53(2):76–84.

[7] P. Fua and Y. G. Leclerc. Model Driven Edge Detection. Machine Vision and Applications, 3:45–56.

[8] A. Gruen, P. Agouris, and H. Li. Linear Feature Extraction with Dynamic Programming and Globally Enforced Least Squares Matching. In A. Gruen, O. Kuebler, and P. Agouris, editors, Automatic Extraction of Man-Made Objects from Aerial and Space Images, pages 83–94, Basel, Switzerland. Birkhäuser Verlag.

[9] B. Guindon. Application of Spatial Reasoning Methods to the Extraction of Roads from High Resolution Satellite Imagery. In IGARSS.

[10] B. Guindon. The Extraction of Planimetric Features from High Resolution Satellite Imagery Using Image Segmentation and Spatial-Based Reasoning. In International Symposium, Geomatics in the Era of RADARSAT (GER 97).

[11] C. Heipke, H. Mayer, C. Wiedemann, and O. Jamet. Evaluation of Automatic Road Extraction. International Archives of Photogrammetry and Remote Sensing, XXXII:47–56.

[12] M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active Contour Models. The International Journal of Computer Vision, 1(4).

[13] D. Klang. Automatic Detection of Changes in Road Databases Using Satellite Imagery. In International Archives of Photogrammetry and Remote Sensing, volume 32.

[14] D.M. McKeown and J.L. Denlinger.
Cooperative Methods for Road Tracking in Aerial Imagery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.

[15] C. Steger. An Unbiased Detector of Curvilinear Structures. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(2), February.

[16] D. Ziou and S. Tabbone. Edge Detection Techniques - An Overview. Pattern Recognition and Image Analysis, 8(4).
Figure 11: a) Original image with vectors from the road database. b) Line plausibility (α = 2). c) Detected lines (threshold is 20). d) Junctions (threshold is 0.1) and extremities (black squares). White squares are matched junctions (N_m = 13).
Figure 11: e) Initialization of snakes after re-localization. f) Final result (α = 100 and β = 4).

Figure 12: Results without junction detection. a) Matching of road segments with the image. White squares are matched points. b) Final result (α = 100 and β = 4).
Figure 13: a) Superimposition of vectors from the road database, junctions (white squares are matched junctions), and road-segment extremities on the original image. Thresholds for line detection and junction detection are 20 and 0.15 respectively; the neighborhood for matching is N_m × N_m.
Figure 13: b) Line plausibility (α = 2.5).
Figure 13: c) Final result (α = 100 and β = 0).
Figure 14: Klang's approach. a) Superimposition of vectors from the road database, maximum correlation points (white squares) and road-segment extremities on the original image. The threshold for line detection is 20; the neighborhood for matching is N_m × N_m.
Figure 14: b) Final result (α = 100 and β = 0).
Figure 15: a) Original image with vectors from the road database (white lines) and line junctions (black squares). b) Line plausibility (α = 2.5). c) Final result of the correction step (α = 100 and β = 0). d) Final result of the new-road detection step.
Figure 16: Detection of new roads. a) Line junctions near the corrected network. b) Result (L = 10 and S = 7).

Figure 17: Line detection for small α. a) Line profile. b) First derivative. c) Second derivative.
Figure 18: a) Matching with N_m = 15. b) Final result of the correction step with α = 100, β = 4, initial progression 1.

Figure 19: Results of road correction with different parameter values. a) α = 500, β = 4, progression 1. b) α = 0, β = 4, progression 1. c) α = 0, β = 100, progression 1. d) α = 100, β = 4, progression 0.1. e) α = 100, β = 4, progression 3.
Figure 20: New roads for different values of L and S. a) L = 5, S = 7. b) L = 10, S = 3. c) L = 10, S = 15.
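The roles of L and S shown in Figure 20 can be sketched as follows. The helper names and the list-of-pixels line representation are our own illustrative assumptions, not the report's implementation.

```python
import math

def mean_direction(path, L):
    """Mean direction (radians) over the last L pixels of a followed line;
    it aims the remote search when no direct neighbour pixel exists."""
    tail = path[-L:]
    (x0, y0), (x1, y1) = tail[0], tail[-1]
    return math.atan2(y1 - y0, x1 - x0)

def keep_long_lines(lines, S):
    """Discard hypothesised lines shorter than the minimum length S."""
    return [line for line in lines if len(line) >= S]

# A straight horizontal track has mean direction 0; a diagonal one, pi/4.
horizontal = [(x, 0) for x in range(12)]
diagonal = [(x, x) for x in range(12)]
```

A small L makes the search direction sensitive to local noise (lines missed in Figure 20(a)), while a large S prunes short spurious hypotheses at the risk of dropping short real roads (Figure 20(c)).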
CoE4TN4 Image Processing Chapter 5 Image Restoration and Reconstruction Image Restoration Similar to image enhancement, the ultimate goal of restoration techniques is to improve an image Restoration: a
More informationCOSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor
COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality
More informationSIMULATIVE ANALYSIS OF EDGE DETECTION OPERATORS AS APPLIED FOR ROAD IMAGES
SIMULATIVE ANALYSIS OF EDGE DETECTION OPERATORS AS APPLIED FOR ROAD IMAGES Sukhpreet Kaur¹, Jyoti Saxena² and Sukhjinder Singh³ ¹Research scholar, ²Professsor and ³Assistant Professor ¹ ² ³ Department
More information