A Supervised and Fuzzy-based Approach to Determine Optimal Multi-resolution Image Segmentation Parameters


Hengjian Tong, Travis Maxwell, Yun Zhang, and Vivek Dey

Hengjian Tong is with the School of Computer Science, China University of Geosciences, Wuhan, China, and formerly with the Department of Geodesy and Geomatics Engineering, University of New Brunswick, Canada (thj26@cug.edu.cn). Travis Maxwell is with the Department of National Defence, Ottawa, Canada, and formerly with the Department of Geodesy and Geomatics Engineering, University of New Brunswick, Canada. Yun Zhang is with the Department of Geodesy and Geomatics Engineering, University of New Brunswick, Canada. Vivek Dey is with the Department of Geomatics, University of Calgary, Canada, and formerly with the Department of Geodesy and Geomatics Engineering, University of New Brunswick, Canada.

Abstract

Image segmentation is important for object-based classification. One of the most advanced image segmentation techniques is the multi-resolution segmentation implemented by eCognition. Multi-resolution segmentation requires users to determine a set of proper segmentation parameters through a trial-and-error process. To achieve accurate segmentations of objects of different sizes, several sets of segmentation parameters are required: one for each level. However, the trial-and-error process is time consuming and operator dependent. To overcome these problems, this paper introduces a supervised and fuzzy-based approach to determine optimal segmentation parameters for eCognition. This approach is referred to as the Fuzzy-based Segmentation Parameter optimizer (FBSP optimizer) in this paper. It is based on the idea of discrepancy evaluation to control the merging of sub-segments to reach a target segment. Experiments demonstrate that the approach improves the segmentation accuracy by more than 16 percent, reduces the operation time from two hours to one-half hour, and is operator independent.

Introduction

Since the successful launch of the first very high resolution (1 m) satellite in 1999 (Ikonos, with multispectral (MS) 4 m and panchromatic (Pan) 1 m), object-based classification has become increasingly important in remote sensing (Blaschke, 2010). This is because:

1. traditional pixel-based classification, which classifies an image based on the digital values of individual pixels, cannot effectively handle the spectral variance within a land-cover object or class, and
2. object-based classification integrates geometric and contextual information of individual objects into the classification, which significantly increases the classification accuracy of very high resolution imagery.

In object-based classification, the image must be properly segmented first; the segments can then be classified into corresponding classes based on image objects (segments). The quality of the land-cover classification significantly depends on the quality of the segmentation (Hay et al., 2003; Burnett and Blaschke, 2003; Benz et al., 2004; Maxwell, 2005; Liu et al., 2006; Tian and Chen, 2007; Lang, 2008; Hay and Castilla, 2008; Smith and Morton, 2010; Blaschke, 2010). In general, the quality of segmentation includes over, optimal, and under segmentation (Blaschke et al., 2006; Kim et al., 2010; Kim et al., 2011). Optimal segmentation can result in accurate classification.
Over segmentation will result in over classification, but it may be corrected in an additional process; under segmentation, however, needs to be avoided because it mixes different classes into one class. To date, a number of very high resolution satellites, such as QuickBird, GeoEye-1, and WorldView-2, have been successfully launched, and many digital airborne cameras, such as the ADS-40, DMC, UltraCam, and DIMAC, have been introduced into the market. This rapid sensor advancement has quickly increased the availability of very high resolution remote sensing images in a broad range of geo-related application areas. Hence, object-based classification has quickly become a mainstream technique in classification research and applications (Smith and Morton, 2010; Blaschke, 2010). The first commercially available object-based classification software package (eCognition) was introduced in 2000 by Definiens (now owned by Trimble). Following its success, other software packages have also been introduced into the market, such as Feature Analyst (from Overwatch Geospatial) and FeatureObjeX (from PCI Geomatics). According to the literature review by Blaschke (2010), 50 to 55 percent of the more than 800 reviewed papers used eCognition, whereas other articles used individual algorithms or software developed by researchers in individual academic institutions, such as Wuest and Zhang (2009). There were very few articles which used other commercial software for object-based classification.

In a special issue on object-based classification (Photogrammetric Engineering & Remote Sensing, Vol. 76, No. 2, 2010), five of seven peer-reviewed papers used eCognition. In a software evaluation conducted at the University of New Brunswick, it was found that eCognition produced the best overall classification results for a broad range of land-cover objects or classes, whereas Feature Analyst produced better results for some selected objects, such as airplanes (Lavigne et al., 2006). In the software comparison by Neubert et al. (2008), it was also found that eCognition is among the best. The popularity of eCognition and its impact on remote sensing research and application are evidently significant. However, one of the major problems of using the popular multi-resolution algorithm of eCognition for image segmentation is its complexity and user dependency (Hay et al., 2003; Marpu et al., 2010). The segmentation process in eCognition can also be considered a black box, due to its image dependence and the limited control available to users (Smith and Morton, 2010). Users have to repeatedly select a set of segmentation parameters (mainly scale, shape, and compactness) and test them through a trial-and-error process, until a segmentation result reasonable to the operator is achieved, or until the operator no longer wants to continue the trial and error. This is a tedious and time-consuming process. In addition, it requires experienced operators (Flanders et al., 2003; Hay et al., 2003; Maxwell, 2005; Li et al., 2009; Smith and Morton, 2010). To overcome the time-consuming and user-dependency problems, many researchers have carried out different experiments to identify the relationship between segmentation parameters and segmentation results and to find an effective way to select segmentation parameters. Costa et al. (2008) proposed a genetic-algorithm-based solution to deduce the optimal scale, shape, and compactness parameters. Tian and Chen (2007) experimented with a trial-and-error approach and suggested optimal scale parameters for a few land-cover objects. Other researchers (e.g., Huiping et al., 2003; Möller et al., 2007; Kim et al., 2008; Chen et al., 2009) proposed methods to identify optimal scales, which can be used for the identification of optimal parameters. However, these solutions may only suit specific conditions, and users still must rely on their experience when selecting parameters. Some other researchers suggested using auxiliary data, such as existing GIS datasets, digital maps, and/or lidar data, to guide the segmentation (Smith and Morton, 2010; Kim et al., 2010; Kim et al., 2011; Anders et al., 2011). However, detailed and up-to-date GIS datasets, maps, or lidar data are not always available, and mis-registration between the auxiliary data and very high resolution images is still an unsolved technical problem, especially in areas where objects have differing heights (most very high resolution satellite images are off-nadir images, and objects with different heights have different relief distortions which cause mis-registrations between the auxiliary data and the images around the objects). This paper presents a supervised and fuzzy-based approach to determine optimal image segmentation parameters (smoothness weight, shape weight, and scale value) for the multi-resolution segmentation algorithm of eCognition.
The novelty of this approach is that the optimal segmentation parameters for the objects of interest are determined through a training process and a fuzzy logic analysis, which differs from the publications mentioned above. This approach is a further development of our initial research, in which Maxwell (2005) proposed the idea in a master's thesis and Zhang and Maxwell (2006) introduced some initial concepts and results. This paper introduces the technical details of the supervised and fuzzy-based approach based on our further developments and experiments. The approach uses an over-segmented result from eCognition (segments smaller than the land-cover objects of interest, referred to as sub-segments or sub-objects) (Benz et al., 2004) to train the Fuzzy-based Segmentation Parameter optimizer (FBSP optimizer) developed in this study. The algorithm of the FBSP optimizer is based on the idea of discrepancy evaluation, which can control the merging process of sub-objects. The FBSP optimizer can determine the optimal segmentation parameters for a land-cover object of interest (also called the target object) through a fuzzy logic analysis. The segmentation parameters can then be used in eCognition to segment the entire image, achieving the segmentation of the objects of interest in the entire image. This combination of the FBSP optimizer and eCognition forms a supervised and fuzzy-based approach to image segmentation. To achieve an accurate segmentation for land-cover objects/classes of different sizes, multi-scale segmentation is necessary, i.e., one set of segmentation parameters for each scale. The proposed approach can significantly speed up multi-scale segmentation by quickly finding different sets of optimal segmentation parameters for objects of different sizes. Other fuzzy-based approaches to image segmentation can be found in recent publications, such as the methods introduced by Wuest and Zhang (2009) and Lizarazo and Barros (2010). However, their segmentation results from QuickBird images appear more like land-use segmentation (i.e., a group of closely located objects such as roofs, roads, and parking lots is segmented as one large segment, e.g., a residential area), rather than land-cover segmentation (i.e., each object, such as a roof, is identified as one segment). This paper focuses on the segmentation of individual land-cover objects. To give readers contextual information on the proposed approach, this paper first introduces the concept of multi-resolution segmentation, which was introduced by Baatz and Schäpe (2000) and is implemented in eCognition. The proposed FBSP optimizer is introduced in the third section. The process of performing image segmentation using the FBSP optimizer and its segmentation results are introduced in the fourth section. This is followed by a result assessment section and conclusions.

Multi-resolution Segmentation Algorithm

The multi-resolution segmentation algorithm of eCognition is widely used for segmenting very high resolution optical images (Blaschke, 2010). It is used to produce image object primitives (segments) as the first step of an object-based classification (Baatz and Schäpe, 2000; Benz et al., 2004). The algorithm is based on a region merging technique. It starts with each pixel forming one image region or object. The regions (objects) are merged gradually in an iterative process. At each step, a pair of image objects is merged into one larger object.
The merging criterion for two adjacent image objects is based on the measurement of two heterogeneity changes (Baatz and Schäpe, 2000; Benz et al., 2004; Definiens Imaging GmbH, 2004; Definiens AG, 2007; Definiens AG, 2009; Trimble Germany GmbH, 2010):

1. the spectral heterogeneity change, h_spectral, and
2. the shape heterogeneity change, h_shape.

The overall spectral heterogeneity change, h_spectral, is a measure of the object heterogeneity difference (i.e., similarity in feature space) resulting from the potential merging of two adjacent objects (Obj1 and Obj2). It is given by:

$$h_{spectral} = \sum_{c} w_c \left[ n_{Obj1 \cup Obj2} \cdot \sigma_c^{Obj1 \cup Obj2} - \left( n_{Obj1} \cdot \sigma_c^{Obj1} + n_{Obj2} \cdot \sigma_c^{Obj2} \right) \right] \quad (1)$$

where c represents the different raster layers (or spectral bands of a multispectral image), w_c are the weights associated with each layer, to be determined by the user, n is the number of pixels comprising the objects, and σ_c is the standard deviation of the pixel values within each layer.

The overall shape heterogeneity change, h_shape, is the weighted average of the compactness heterogeneity change, h_compact, and the smoothness heterogeneity change, h_smooth, as given by:

$$h_{shape} = w_{compact} \cdot h_{compact} + (1 - w_{compact}) \cdot h_{smooth} \quad (2)$$

where w_compact is the weight associated with the compactness heterogeneity change, to be determined by the user. Conceptually, the most compact form describes a circle, whereas the smoothest form describes a rectangle. In Equation 2, the compactness heterogeneity change, h_compact, is defined as:

$$h_{compact} = n_{Obj1 \cup Obj2} \cdot \frac{l_{Obj1 \cup Obj2}}{\sqrt{n_{Obj1 \cup Obj2}}} - \left( n_{Obj1} \cdot \frac{l_{Obj1}}{\sqrt{n_{Obj1}}} + n_{Obj2} \cdot \frac{l_{Obj2}}{\sqrt{n_{Obj2}}} \right) \quad (3)$$

where n is the number of pixels comprising the objects and l is the perimeter of the objects. The smoothness heterogeneity change, h_smooth, is defined as:

$$h_{smooth} = n_{Obj1 \cup Obj2} \cdot \frac{l_{Obj1 \cup Obj2}}{b_{Obj1 \cup Obj2}} - \left( n_{Obj1} \cdot \frac{l_{Obj1}}{b_{Obj1}} + n_{Obj2} \cdot \frac{l_{Obj2}}{b_{Obj2}} \right) \quad (4)$$

where n is the number of pixels comprising the objects, l is the perimeter of the objects, and b is the perimeter of the object's bounding box.

Together, the weighted sum of h_spectral and h_shape indicates the overall heterogeneity change for the potential merging of two adjacent objects. This overall value is the so-called fusion value (f), which is given by:

$$f = w \cdot h_{spectral} + (1 - w) \cdot h_{shape} \quad (5)$$

where w is the user-assigned weight associated with the spectral heterogeneity change. The merging of two adjacent objects will be considered only if f < s^2, where s is a user-specified threshold, referred to as the scale parameter.

In summary, the relationships between Equations 1 through 5 and the segmentation parameters to be determined by users (the layer weights w_c, the weight w_compact (or 1 - w_compact), the weight w (or 1 - w), and the scale parameter s) are graphically represented in Figure 1. The final fusion value (f) in Figure 1 is compared with the user-defined threshold (the scale parameter s) to determine whether or not the two adjacent objects should be merged. For more information about problems in fusing adjacent objects, refer to Tian and Chen (2007) and Marpu et al. (2010).

Figure 1. Relationship between the segmentation parameters and the fusion value in eCognition. The weights for the individual spectral layers (bands) (w_1, w_2, ..., w_c) are set by users according to the application; in this paper, they are set to 1. In addition, users need to give the value for the smoothness weight (1 - w_compact) (or the compactness weight, w_compact) and the shape weight (1 - w). The weights (1 - w_compact) and (1 - w) are used to calculate the fusion value (f). The value f is then compared with a user-specified scale value (s) to decide whether or not the two adjacent objects should be merged (if f < s^2, merge the two objects; if f >= s^2, stop the merging process).
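To make Equations 1 through 5 concrete, the following Python sketch computes the fusion value for a hypothetical pair of adjacent objects and applies the merge criterion. It is a minimal illustration, not eCognition code: the per-object statistics (pixel counts, per-band standard deviations, perimeters, and bounding-box perimeters) are assumed to be supplied by the caller.

```python
import numpy as np

def fusion_value(obj1, obj2, merged, layer_weights, w_spectral, w_compact):
    """Overall heterogeneity change (fusion value f) for merging obj1 and obj2.

    Each object is a dict with:
      'n'     -- number of pixels,
      'sigma' -- per-layer standard deviations (array of length C),
      'l'     -- perimeter (border length),
      'b'     -- perimeter of the bounding box.
    `merged` holds the same statistics for the candidate merged object.
    """
    # Equation 1: spectral heterogeneity change
    h_spectral = np.sum(layer_weights * (
        merged['n'] * merged['sigma']
        - (obj1['n'] * obj1['sigma'] + obj2['n'] * obj2['sigma'])))

    # Equation 3: compactness heterogeneity change (compactness = l / sqrt(n))
    h_compact = (merged['n'] * merged['l'] / np.sqrt(merged['n'])
                 - (obj1['n'] * obj1['l'] / np.sqrt(obj1['n'])
                    + obj2['n'] * obj2['l'] / np.sqrt(obj2['n'])))

    # Equation 4: smoothness heterogeneity change (smoothness = l / b)
    h_smooth = (merged['n'] * merged['l'] / merged['b']
                - (obj1['n'] * obj1['l'] / obj1['b']
                   + obj2['n'] * obj2['l'] / obj2['b']))

    # Equation 2: shape heterogeneity change
    h_shape = w_compact * h_compact + (1.0 - w_compact) * h_smooth

    # Equation 5: fusion value
    return w_spectral * h_spectral + (1.0 - w_spectral) * h_shape

def should_merge(f, scale):
    """Merge criterion: the merge is considered only if f < scale**2."""
    return f < scale ** 2
```

For example, with a scale parameter of 25, a candidate merge is accepted only while the fusion value stays below 625.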

Determination of Optimal Segmentation Parameters

In eCognition, as mentioned before, determining a proper set of segmentation parameters is not an easy process. The process is highly subjective and is largely based on the analyst's experience and interpretation. To overcome the difficulty of segmentation parameter selection, the FBSP optimizer was developed, which can calculate the optimal segmentation parameters (1 - w_compact, 1 - w, and s) for a land-cover object of interest (target object) through a training process and fuzzy logic analysis in an iterative fashion. The FBSP optimizer works on the idea of discrepancy evaluation, which can control the merging process of sub-objects (sub-segments). It uses the sub-objects' feature information to evaluate the current segmentation with reference to the desired segmentation (target object). If the sub-segments do not merge to form the desired target object (see Plate 1f and 1g), the FBSP optimizer goes through another iteration and gives a new, more appropriate set of segmentation parameters. The new segmentation parameters are then used to segment the image; the FBSP optimizer then checks whether or not the new segment matches the target object. This process iterates until the parameters achieve the desired segmentation result. In other words, the FBSP optimizer does not stop iterating until all sub-objects merge into the ideal target object.

Workflow of FBSP Optimizer

Figure 2 illustrates the general workflow of the proposed FBSP optimizer. Figure 3 shows the software interface of the FBSP optimizer. This interface allows the user to input the required information, and the optimal segmentation parameters are displayed in it.

Figure 2. Workflow of the proposed FBSP optimizer. The values of the current segmentation parameters (smoothness (1 - w_compact), shape (1 - w), and scale (s)), the sub-object information (Texture, Stability, Brightness, and Area), and the target object information (Texture, Stability, Brightness, Area, Rectangular Fit, and Compactness) are input into the FBSP optimizer. These values are then used to train the FISs to estimate the optimal segmentation parameters (1 - w_compact, 1 - w, and s) for the target object in an iterative process. (See the sections below for how the variables Texture, Stability, Brightness, Area, Rectangular Fit, and Compactness are calculated.)

The steps of the FBSP optimizer (Figure 2) are as follows:

1. Perform a preliminary segmentation using eCognition. This segmentation is conducted using a small scale parameter (s) with little or no weight given to the shape parameter (1 - w). This results in an over-segmented image with the emphasis on spectrally homogeneous objects. In this manner, small details in the image, and, more importantly, the features of the object of interest, are retained (see Plate 1d).

Plate 1. (a) QuickBird Pan, 0.7 m, (b) QuickBird MS natural color, 2.8 m, and (c) pan-sharpened MS, 0.7 m; training of the proposed FBSP optimizer: (d) initial segmentation and sub-objects of a target object (red), (e) merging of sub-objects into the target object (red) for supervised training, (f) second segmentation, (g) third segmentation, and (h) fourth segmentation; result of the proposed FBSP segmentation: (i) final (FBSP) segmentation, and (k) FBSP segmentation; and comparison with the state-of-the-art segmentation: (j) trial-and-error segmentation and (l) trial-and-error segmentation of buildings (see the yellow circled areas for the quality difference of the different segmentation approaches).

2. Select sub-objects of a target object (see Plate 1d and 1e) to establish the relationship between the sub-objects and the target object, and obtain the target object information.
3. Input the segmentation parameters and the sub-object information for supervised training. The segmentation parameters include the smoothness weight (1 - w_compact), the shape weight (1 - w), and the scale value (s). The information for each sub-object includes Texture, Stability, Brightness, and Area (see Figure 3 for details of each feature).
4. Input the target object information if it is the first iteration of supervised training. The target object information includes Texture, Stability, Brightness, Area, Rectangular Fit, and Compactness (see Figure 3 for details of each feature).
5. Evaluate and estimate the smoothness weight of the target object through a Fuzzy Inference System (FIS) according to the target object information (details of the FIS are discussed below).
6. Evaluate and estimate the scale value/parameter for the target object through a FIS according to the information of the sub-objects and the target object.
7. Evaluate and estimate the shape weight for the target object through a FIS according to the information of the sub-objects and the target object.
8. Modify the scale value based on the shape weight of the target object.
9. Obtain the estimated smoothness weight, shape weight, and scale value for the target object.
10. Perform a new segmentation using eCognition with the estimated smoothness, shape, and scale values for the target object (see Plate 1f, 1g, and 1h for the result after each iteration).
11. Test for convergence between the segmented object from step 10 and the target object. If the segmented object does not share the same boundary with the target object, repeat steps 3, 6, 7, 8, 9, and 10. If they have the same boundary, go to step 12.
12. End the process and accept the smoothness weight, shape weight, and scale value from step 9 as the optimal segmentation parameters for the target object.

Figure 3. Interface of the FBSP optimizer. The user inputs the target object information, the sub-object information, and initial estimates of the segmentation parameters. The FBSP optimizer uses these values and a fuzzy logic analysis to determine the optimal segmentation parameters for the target object.

In Figure 3, the segmentation parameters (scale, shape, and smoothness weight) for performing the preliminary segmentation are selected and input by the user. They are then updated by the FBSP optimizer in the next iteration according to the feature information of the sub-objects and the target object. The Texture and Stability of the sub-objects and the target object are calculated using Equation 7 and Equation 9, respectively. The Brightness and Area values are generated by eCognition when a sub-object or target object is created. The Compactness is calculated using Equation 6, and Rectangular Fit is a built-in feature of eCognition. The details of these features are discussed in the sections below.
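The following Python sketch outlines steps 1 through 12 above as a training loop. It is a schematic only, not the paper's implementation: the `segment`, `extract_features`, `matches_target`, and the three FIS callables are hypothetical placeholders standing in for the eCognition segmentation runs and the fuzzy inference systems described in the next section, and they must be supplied by the caller.

```python
from dataclasses import dataclass

@dataclass
class SegParams:
    scale: float        # scale parameter s
    shape: float        # shape weight (1 - w)
    smoothness: float   # smoothness weight (1 - w_compact)

def train_fbsp(initial, target_info, segment, extract_features,
               smoothness_fis, scale_fis, shape_fis, matches_target,
               max_iter=10):
    """Iterative FBSP-style training loop (steps 1 through 12 of the workflow).

    segment(params) runs the multi-resolution segmentation and returns the
    sub-objects covering the target object; extract_features(objs) returns
    their Texture/Stability/Brightness/Area; the three *_fis callables are the
    fuzzy inference systems; matches_target(objs) tests boundary convergence.
    None of these are part of eCognition's API; they are placeholders.
    """
    params = initial
    # Step 5: the smoothness weight is estimated once, from the target object only.
    smoothness = smoothness_fis(target_info)
    for _ in range(max_iter):
        sub_objects = segment(params)                             # steps 1 / 10
        if matches_target(sub_objects):                           # step 11
            return params                                         # step 12
        sub_info = extract_features(sub_objects)                  # step 3
        scale = scale_fis(sub_info, target_info, params.scale)    # step 6
        shape = shape_fis(sub_info, target_info)                  # step 7
        scale = shape * scale                                     # step 8 (Equation 15)
        params = SegParams(scale=scale, shape=shape, smoothness=smoothness)  # step 9
    return params
```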
Design and Development of FBSP Optimizer

The core components of the FBSP optimizer are the three Fuzzy Inference Systems (FISs): the Scale FIS, the Shape FIS, and the Smoothness FIS (see Figure 2). Normally, the design of a FIS includes five steps (Klir et al., 1997; Kaehler, 1998; MathWorks, 2006): (a) definition of the input variables; (b) design of the membership functions (also called the fuzzifier, i.e., converting a crisp input variable to a linguistic variable); (c) design of the fuzzy rule base (using If-Then fuzzy rules to convert the fuzzy inputs to fuzzy outputs); (d) aggregation (combining the fuzzy outputs of all rules into a single fuzzy output); and (e) defuzzification (converting the fuzzy output to a crisp value). These five steps are followed in the design and development of the three FISs of the proposed FBSP optimizer.
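The five steps can be illustrated with a minimal zero-order (Sugeno-style) FIS using constant output singletons and weighted-average defuzzification, which is the style of FIS described in this paper. The sketch below is illustrative only: the two rules and the triangular membership parameters are made up for the example and are not the membership functions of Tables 1 through 3.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function (same parameterization as MATLAB's trimf)."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def evaluate_fis(inputs, rules, singletons):
    """Zero-order FIS: fuzzify, fire rules, aggregate, and defuzzify.

    inputs     -- dict of crisp input values, e.g. {"Rect_Fit": 0.8}
    rules      -- list of (antecedent, consequent) pairs; each antecedent maps an
                  input name to a membership function, each consequent names a
                  constant output singleton
    singletons -- dict of output singleton values, e.g. {"Increase": 1.0}
    """
    strengths, outputs = [], []
    for antecedent, consequent in rules:
        # Steps (b)-(c): fuzzify each crisp input and combine with AND (minimum).
        firing = min(mf(inputs[name]) for name, mf in antecedent.items())
        strengths.append(firing)
        outputs.append(singletons[consequent])
    strengths, outputs = np.array(strengths), np.array(outputs)
    # Steps (d)-(e): aggregate and defuzzify by a firing-strength-weighted average.
    return float(np.sum(strengths * outputs) / (np.sum(strengths) + 1e-12))

# Illustrative two-rule system (not the paper's exact membership parameters):
rules = [
    ({"Rect_Fit": lambda x: trimf(x, 0.5, 1.0, 1.5)}, "Increase"),
    ({"Rect_Fit": lambda x: trimf(x, -0.5, 0.0, 0.5)}, "Reduce"),
]
print(evaluate_fis({"Rect_Fit": 0.8}, rules, {"Increase": 1.0, "Reduce": 0.0}))
```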

The three FISs are shown in Table 1, Table 2, and Table 3, respectively. Each FIS consists of input variables and their membership functions, output variables and their membership functions, and a rule base. The membership functions used are (MathWorks, 2006): the Gaussian combination membership function, gauss2mf; the triangular-shaped built-in membership function, trimf; the trapezoidal-shaped built-in membership function, trapmf; the Gaussian curve built-in membership function, gaussmf; and the generalized bell-shaped built-in membership function, gbellmf.

Smoothness FIS

According to the workflow in Figure 2, the Smoothness FIS is evaluated only once, and the smoothness weight (1 - w_compact) for the target object is left unchanged throughout the remaining iterations. The reason is that the shape properties of the target object constitute the only important factor in determining the smoothness weight. The target object does not change, and so the weight associated with smoothness does not change either. The parameter that does change is the overall shape weight, and this affects the smoothness of the resulting objects (see Equation 2). Therefore, the smoothness weight is calculated only once, and its importance is modified in each iteration through the shape weight. Smoothness and compactness are not mutually exclusive terms, i.e., an object can be both smooth and compact.

Definition of Input Variables

An ideal compact object is formed by a square in a raster image (see Equation 3). An ideal smooth object is formed by a rectangle (see Equation 4). Smoothness is described using an eCognition feature called Rectangular Fit (Definiens AG, 2007). This feature creates a rectangle of the same area and length-to-width ratio as the object being rated. Once complete, the rectangle is fit to the object, and the object area outside the rectangle is compared to the object area inside (Definiens Imaging GmbH, 2004; Definiens AG, 2007; Definiens AG, 2009; Trimble Germany GmbH, 2010). The fit is then described with a value between 0 (no fit) and 1 (perfect fit) and constitutes the Rect_Fit feature employed in this FIS. Compactness (Compact) is more easily defined, using the ratio of the object perimeter to the object area. It is defined identically to the definition of compactness used by eCognition for segmentation. It is defined mathematically as:

$$Compact(TO) = \frac{l}{\sqrt{n_{obj_M}}} \quad (6)$$

where l is the border length of the target object, and n_{obj_M} is the number of pixels (area) of the target object.

TABLE 1. ELEMENTS OF THE SMOOTHNESS FIS
Input 1: Rect_Fit, range [0, 1]
  MF1 "High": gauss2mf, [0.2, 1, 0.1, 2]
  MF2 "Low": gauss2mf, [0.1, -2, 0.2, 0.4]
Input 2: Compact, range [4, 20]
  MF1 "High": gauss2mf, [0.1, 0, 1.5, 4]
  MF2 "Low": gauss2mf, [1.5, 8, 0.1, 25]
Output 1: Smoothness, range [0, 1]
  MF1 "Reduce": constant, [0]
  MF2 "Maintain": constant, [0.2]
  MF3 "Increase": constant, [1]
Rules:
  If (Rect_Fit is High) and (Compact is Low) then Increase
  If (Rect_Fit is High) and (Compact is High) then Maintain
  If (Rect_Fit is Low) and (Compact is High) then Reduce
  If (Rect_Fit is Low) and (Compact is Low) then Reduce

TABLE 2. ELEMENTS OF THE SCALE FIS
Input 1: Texture, range [0, 3TextureTO]
  MF1 "Low": trimf, [-TextureTO, 0, TextureTO]
  MF2 "Average": trimf, [0.25TextureTO, TextureTO, 1.25TextureTO]
  MF3 "High": trapmf, [TextureTO, 2TextureTO, 3TextureTO, 4TextureTO]
Input 2: Stability, range [0, 3StabilityTO]
  MF1 "Low": gaussmf, [0.3StabilityTO, 0]
  MF2 "Average": gbellmf, [0.3StabilityTO, 1, StabilityTO]
  MF3 "High": gauss2mf, [0.3StabilityTO, 2StabilityTO, 0.3StabilityTO, StabilityTO]
Output 1: Scale, range [0, x]
  MF1 "Reduce": constant, [x]
  MF2 "Maintain": constant, [y]
  MF3 "Increase": constant, [z]
Rules:
  If (Texture is Low) and (Stability is Low) then Increase
  If (Texture is Low) and (Stability is Average) then Increase
  If (Texture is Average) and (Stability is Low) then Increase
  If (Texture is Low) and (Stability is High) then Maintain
  If (Texture is Average) and (Stability is Average) then Maintain
  If (Texture is High) and (Stability is Low) then Maintain
  If (Texture is Average) and (Stability is High) then Reduce
  If (Texture is High) and (Stability is Average) then Reduce
  If (Texture is High) and (Stability is High) then Reduce

TABLE 3. ELEMENTS OF THE SHAPE FIS
Input 1: Spectral_Mean, range [BrightnessTO - 3TextureTO, BrightnessTO + 3TextureTO]
  MF1 "Standard": gaussmf, [1.5TextureTO, BrightnessTO]
Input 2: Size_Difference, range [0, AverageSize]
  MF1 "Small": trimf, [-10, 0, AverageSize]
Input 3: Lg_Size, range [0, MaxSize]
  MF1 "Large": gauss2mf, [0.25MaxSize, 0.5MaxSize, 0.25MaxSize, MaxSize]
Output 1: Shape, range [0, 1]
  MF1 "Reduce": constant, [0.1]
  MF2 "Maintain": constant, [0.5]
  MF3 "Increase": constant, [0.9]
Rules:
  If (Spectral_Mean is Standard) and (Size_Difference is Small) then Reduce
  If (Spectral_Mean is not Standard) and (Size_Difference is not Small) then Maintain
  If (Lg_Size is Large) then Increase
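Tables 1 through 3 parameterize their membership functions using the MATLAB Fuzzy Logic Toolbox names (MathWorks, 2006). For readers without that toolbox, the following Python sketch reproduces the standard definitions of those functions (trimf was shown in the earlier sketch); reading the bracketed table entries with the MATLAB parameter ordering is an assumption on our part.

```python
import numpy as np

def gaussmf(x, sigma, c):
    """Gaussian curve membership function with width sigma and center c."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def gauss2mf(x, s1, c1, s2, c2):
    """Gaussian combination: left shoulder (s1, c1), right shoulder (s2, c2).
    The function equals 1 on the plateau between the two centers."""
    left = np.where(x < c1, gaussmf(x, s1, c1), 1.0)
    right = np.where(x > c2, gaussmf(x, s2, c2), 1.0)
    return left * right

def trapmf(x, a, b, c, d):
    """Trapezoidal membership function with feet a, d and shoulders b, c."""
    rise = (x - a) / (b - a + 1e-12)
    fall = (d - x) / (d - c + 1e-12)
    return np.clip(np.minimum(np.minimum(rise, 1.0), fall), 0.0, 1.0)

def gbellmf(x, a, b, c):
    """Generalized bell membership function with width a, slope b, center c."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2.0 * b))
```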

Design of Fuzzy Membership Functions and Rule Base

The membership functions comprising the above features are shown in Table 1. The singletons that compose the three output membership functions (Reduce, Maintain, and Increase) remain constant at all times, since the universe of discourse for smoothness is limited to the interval between 0 and 1. Through experimentation with the software and as a result of the literature review, it was decided that a compactness on the order of 0.2 is an average value and exhibits reasonable results. Therefore, the three output membership functions are defined as:

a. μ_Reduce = Reduce(0.0);
b. μ_Maintain = Maintain(0.2); and
c. μ_Increase = Increase(1.0).

Aggregation and Defuzzification

A weighted average of the singletons in the output space is assessed, and the defuzzified result is the smoothness weight (1 - w_compact) for the target object.

Scale FIS

It is known from the above section that the scale parameter (s) is a critical factor in determining whether or not two adjacent image objects will be merged. According to Equation 5, the fusion value (f) is a function of both the spectral heterogeneity change and the shape heterogeneity change. In order to simplify the Scale FIS design, the initial estimate of the scale parameter (step 7, Figure 2) is based solely on the spectral properties of the sub-objects in the initial segmentation, because spectral information is the primary information in the image. The estimate is then adjusted (step 8, Figure 2) based on the shape parameter.

Definition of Input Variables

Within a user-defined target object (Plate 1d), spectral variance can be described by the internal variance of each sub-object (pixel variance), as well as by the variance resulting from the spectral means of the sub-objects. Therefore, two fuzzy input variables are defined for the Scale FIS: (1) mean object texture, Texture, and (2) object stability, Stability. Together, they are used to estimate the current segmentation status as well as the final desired segmentation state. Texture for m sub-objects, as proposed in this research, is defined by:

$$Texture(m\ objects) = \frac{1}{n_{merge}} \sum_{m} \left[ n_{obj_m} \cdot \frac{1}{C} \sum_{c} \sigma_c^{obj_m} \right] \quad (7)$$

where m represents the number of sub-objects comprising the target object, n_merge is the number of pixels in the resulting merged object, n_{obj_m} is the number of pixels comprising sub-object m, C is the number of spectral layers, and σ_c^{obj_m} is the standard deviation of sub-object m in spectral layer c.

The proposed Stability feature defines the spectral similarity between objects. Sub-objects that are spectrally homogeneous internally may be very different from each other. To ensure an appropriate definition for Stability, we apply eCognition's built-in Mean_Difference_to_Neighbors feature for all bands, as defined in the eCognition Reference Book (Definiens Imaging GmbH, 2004; Definiens AG, 2007; Definiens AG, 2009; Trimble Germany GmbH, 2010):

$$\Delta \bar{s}_c^{obj_m} = \frac{1}{l} \sum_{p} \left[ l_p^{s} \cdot \left| \bar{s}_c^{obj_m} - \bar{s}_c^{obj_p} \right| \right] \quad (8)$$

where l is the border length of the object of interest, p represents the number of objects that are direct neighbors of the object of interest, l_p^s is the length of the shared border between the object of interest and a direct neighbor object p, \bar{s}_c^{obj_m} is the spectral mean value of layer c for the object of interest, and \bar{s}_c^{obj_p} is the spectral mean value of layer c for the direct neighbor object p. Using this feature, we are able to define our own Stability feature for evaluating the similarity of each sub-object m to its neighbor objects. Stability for m sub-objects is defined in this research as:

$$Stability(m\ objects) = \frac{1}{m} \sum_{m} \left[ \frac{1}{C} \sum_{c} \Delta \bar{s}_c^{obj_m} \right] \quad (9)$$

where m represents the number of sub-objects.

Design of Fuzzy Membership Functions and Rule Base

The membership functions comprising the above features are shown in Table 2.

The Texture and Stability features defined above are applied to the sub-objects (SO) comprising the target object (TO) in order to evaluate the current segmentation status of the system. Using the same features and applying them to the target object, the FBSP optimizer can measure the segmentation state that we want to achieve. In doing so, these target object feature values (in Table 2, TextureTO denotes the Texture of the target object and StabilityTO denotes the Stability of the target object) play an important role in defining the membership functions. The shape of each membership function is a result of empirically evaluating the success of the system. During successive iterations, the singletons that compose the three output membership functions (Reduce, Maintain, and Increase) are shifted; since they do not move within an iteration, they are considered zero-order (constant) functions. The three output membership functions are formally defined as:

1. μ_Reduce = Reduce(x),
2. μ_Maintain = Maintain(y), and
3. μ_Increase = Increase(z),

where x is defined as the scale value from the previous iteration, y is the current scale value, and z is a predicted scale value. Considering all these factors, the predicted scale value for the Increase membership function is defined as:

$$z = y + (y - x) + \sqrt{n_{merge}} \cdot \frac{1}{m} \quad (10)$$

where n_merge is the number of pixels comprising the merged object, and m is the number of sub-objects forming the target object.

Aggregation and Defuzzification

In the Scale FIS, this step is performed using a weighted average of x, y, and z, where each value is weighted by the membership value determined from the firing strength of each rule in the rule base. The result is a single scale value that is the estimated scale parameter (s) for the next iteration.

Shape FIS

When objects grow larger, shape plays an increasingly important role. This is particularly true if one or more of the sub-objects that form the target object have significantly different spectral information. In this case, the region-merging routine may tend to merge with objects outside the object of interest if they are spectrally similar. To prevent this from occurring, shape information becomes increasingly important for successful segmentation.

Definition of Input Variables

Three different features are defined in the Shape FIS. The first proposed feature (A) determines which sub-object (m) has the maximum spectral difference compared to the desired target object (M) (Equation 11). The identified sub-object (named a) is then used to calculate the Spectral_Mean feature (the spectral mean of a) (Equation 12). Given the set M (target object), composed of m sub-objects, the subset of objects, A, which has the largest mean spectral difference, as defined in this research, is given by:

$$A = \left\{ a \;\middle|\; a \in M,\ \text{where } \max_{m} \left[ \frac{1}{C} \sum_{c} \left| \bar{s}_c^{obj_m} - \bar{s}_c^{obj_M} \right| \right] \text{ is attained at } m = a \right\} \quad (11)$$

where C is the number of spectral layers, \bar{s}_c^{obj_m} is the spectral mean value of layer c for the sub-object of interest (m), and \bar{s}_c^{obj_M} is the spectral mean value of layer c for the target object (M). In any case, the sub-object with the maximum mean spectral difference is used to determine the proposed Spectral_Mean feature, given by:

$$Spectral\_Mean(m\ objects) = \frac{1}{C} \sum_{c} \bar{s}_c^{obj_a}, \quad \text{for objects } a \in A \quad (12)$$

The Spectral_Mean feature is particularly important for urban areas, where problems often arise when one object (e.g., an air conditioning unit) may be particularly bright while the rest of the rooftop is dark. The larger the spectral difference between the one object and the average rooftop value, the more difficult it may be to merge the objects together based on spectral properties. Instead, maintaining the overall shape of the roof may take on increased importance to achieve a satisfying result.

The second feature (Size_Difference) explores the size difference between sub-object a and the average sub-object size (Equation 13). This aids in determining the degree to which Shape should be increased to successfully merge the sub-objects. If the air conditioning unit from the previous example is small, then there is an increased chance that it will merge with the surrounding objects based on spectral properties. However, if it is large in size, then the texture change is large and it may not merge well with the surrounding objects based solely on spectral properties. In this case, shape takes on greater importance. The proposed Size_Difference feature is defined by:

$$Size\_Difference(m\ objects) = \left| \left( \frac{1}{m} \sum_{m} n_{obj_m} \right) - n_{obj_a} \right| \quad (13)$$

where m is the number of sub-objects forming the target object, n_{obj_m} is the number of pixels comprising sub-object m, and n_{obj_a} is the number of pixels forming sub-object a, where a is the object that satisfies Equation 11.

Finally, the third proposed feature, global largest size (Lg_Size), is used to monitor the growth of sub-objects. In general, the larger an object grows, the more shape is required to achieve a visually convincing result. The proposed Lg_Size feature is defined as:

$$Lg\_Size(m\ objects) = \max \{ n_{obj_m} \} \quad \text{for all objects } m \in M \quad (14)$$

where n_{obj_m} is the number of pixels comprising sub-object m.

Design of Fuzzy Membership Functions and Rule Base

The membership functions comprising the above features are shown in Table 3. In this instance, there is only one membership function defined for each feature. In Table 3, BrightnessTO denotes the Brightness value of the target object, AverageSize denotes the average size of all sub-objects, and MaxSize denotes the maximum size among all sub-objects. The singletons that compose the three output membership functions (Reduce, Maintain, and Increase) remain constant at all times, since the universe of discourse for the shape parameter is limited to the interval [0, 0.9]. The maximum value for shape is 0.9 because at least part of the heterogeneity criterion has to come from the image itself (i.e., spectral information). The singletons were balanced in the output space, occupying positions of 0.1, 0.5, and 0.9, ensuring at least a little of both the shape and spectral criteria in the calculation of the heterogeneity change, even at the extremes. The three output membership functions are defined as:

1. μ_Reduce = Reduce(0.1);
2. μ_Maintain = Maintain(0.5); and
3. μ_Increase = Increase(0.9).

Aggregation and Defuzzification

For the Shape FIS, aggregation and defuzzification are carried out by means of a weighted average once again, resulting in the estimated shape parameter (1 - w) for the target object.
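The proposed input features of the Scale and Shape FISs (Equations 7 through 9 and 11 through 14) can be computed directly from exported object statistics. The following Python sketch assumes each sub-object is represented by a small record of statistics exported from the segmentation (pixel count, per-band standard deviations, per-band spectral means, and the per-band Mean_Difference_to_Neighbors of Equation 8); these records are not produced by the code itself.

```python
import numpy as np

def texture(sub_objects):
    """Equation 7: pixel-count-weighted mean of the per-band standard deviations.

    Each sub-object is a dict with 'n' (pixel count), 'sigma' (per-band standard
    deviations), 'mean' (per-band spectral means), and 'mean_diff_to_neighbors'
    (per-band Mean_Difference_to_Neighbors, Equation 8).
    """
    n_merge = sum(o['n'] for o in sub_objects)
    return sum(o['n'] * np.mean(o['sigma']) for o in sub_objects) / n_merge

def stability(sub_objects):
    """Equation 9: mean over sub-objects of the band-averaged
    Mean_Difference_to_Neighbors (Equation 8)."""
    return float(np.mean([np.mean(o['mean_diff_to_neighbors'])
                          for o in sub_objects]))

def shape_fis_inputs(sub_objects, target_mean):
    """Equations 11 through 14: Spectral_Mean, Size_Difference, and Lg_Size."""
    # Equation 11: sub-object a with the largest mean spectral difference
    # from the target object M.
    diffs = [np.mean(np.abs(o['mean'] - target_mean)) for o in sub_objects]
    a = sub_objects[int(np.argmax(diffs))]
    spectral_mean = float(np.mean(a['mean']))                      # Equation 12
    avg_size = np.mean([o['n'] for o in sub_objects])
    size_difference = abs(avg_size - a['n'])                       # Equation 13
    lg_size = max(o['n'] for o in sub_objects)                     # Equation 14
    return spectral_mean, size_difference, lg_size
```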

Scale Modification, Segmentation, and Convergence

In the sections above, we defined two features based on object spectral properties to guide the selection of an appropriate scale parameter. In reality, the fusion value (f) is a function of both shape and spectral properties (see Equation 5). Therefore, the scale value must be modified to account for the shape weight determined in the Shape FIS (see Figure 2). This modification is defined by:

$$Modified\ Scale\ Parameter = (1 - w) \cdot Scale \quad (15)$$

where Scale is the scale parameter determined in the Scale FIS, (1 - w) is the shape weight determined in the Shape FIS, and the Modified Scale Parameter is the estimated scale value (s) for the target object.

Experiment and Results

The FBSP optimizer and Definiens Developer 7, as well as eCognition Developer 8.0 (Definiens AG, 2007; Definiens AG, 2009), were used for this experiment (note that eCognition was renamed to Definiens and has since been renamed back to eCognition). The optimal segmentation parameters for a specified object of interest were obtained by training the FBSP optimizer using a target object (object of interest) and its sub-objects (sub-segments).

Data Sets

The test data sets consisted of a pan-sharpened QuickBird scene of Oromocto and a pan-sharpened Ikonos scene of Fredericton; both cities are in the province of New Brunswick, Canada. Pan-sharpened MS images were used in this study because they allow for more detailed segmentation than the original MS images. The UNB-PanSharp software (Zhang, 2004) was used for the pan-sharpening because it preserves both the spatial information of the Pan image and the spectral information of the MS image (see Plate 1a, 1b, and 1c). In addition, it is a fully automated one-step process, independent of operator influence. The pan-sharpened images have four spectral bands (blue, green, red, and near-infrared), 16 bits, and 0.7 m resolution for QuickBird and 1 m resolution for Ikonos. In the segmentation, the four pan-sharpened multispectral bands were used. They were equally weighted, because weighting the multispectral bands equally provides reasonable results for most applications (Hofmann, 2001). The final segmentation parameters (smoothness weight, shape weight, and scale value) obtained from the FBSP optimizer (Figure 2) were input into eCognition to segment the entire image. The processing steps and the results for several land-cover types are presented below.

Segmentation of Buildings

Plate 1 illustrates the original QuickBird Pan and MS images and the resulting UNB-PanSharp image (Plate 1a, 1b, and 1c), one example of the proposed training of the FBSP optimizer (Plate 1d through 1h), the segmentation result using the FBSP optimizer (Plate 1i and 1k), and the segmentation results from the trial-and-error approach (Plate 1j and 1l). The steps of the proposed training and supervised segmentation of buildings, shown in Plate 1d through 1h, are as follows:

1. The first iteration (initial segmentation) (Plate 1d) was conducted using user-defined segmentation parameters (scale, shape, and smoothness) (Table 4, Iteration 1), and the object of interest (red, six sub-objects) was selected by the user.
2. The target object (red) (Plate 1e) was obtained by merging the six sub-objects for supervised training, from which the target object information can be extracted (Table 5; Texture and Stability using Equations 7 and 9; Brightness, Area, Rectangle Fit, and Compactness by eCognition).
3. The segmentation of the second iteration (Plate 1f) was obtained using the segmentation parameters (scale, shape, and smoothness) calculated by the FBSP optimizer in the second iteration (Table 4, Iteration 2).
4. The third and fourth segmentation results (Plate 1g and 1h) were obtained using the segmentation parameters from the FBSP optimizer in the third and fourth iterations (Table 4, Iterations 3 and 4).
5. The building segmentation result for the entire image (Plate 1i, sub-scene) was achieved using the segmentation parameters from the fourth iteration.
6. For each iteration, the number of sub-objects and their feature information are presented in Table 6. The Texture and Stability values were calculated using Equations 7 and 9, whereas the Brightness and Area values were calculated by eCognition once the sub-objects were formed.

TABLE 4. SEGMENTATION PARAMETERS USED IN EACH TRAINING ITERATION OF THE BUILDING SEGMENTATION (rows: Iterations 1 to 4; columns: Scale, Shape, Smoothness; values not reproduced)

TABLE 5. TARGET OBJECT FEATURE INFORMATION (columns: Texture, Stability, Brightness, Area, Rectangle Fit, Compactness; values not reproduced)

TABLE 6. NUMBER OF SUB-OBJECTS AND THEIR FEATURE INFORMATION AFTER EACH ITERATION (rows: first through fourth iterations; columns: Sub-objects, Texture, Stability, Brightness, Area; values not reproduced)

From Plate 1, we can see that the buildings segmented using the FBSP optimizer (Plate 1h and 1k) are much more accurate than the buildings segmented using eCognition's trial-and-error approach (yellow circles in Plate 1j and 1l). In addition, the FBSP optimizer obtained the final segmentation parameters after four iterations in one-half hour. This included the time taken to manually and iteratively input the information in Tables 4, 5, and 6 into the FBSP optimizer (Figure 3). If the input process were integrated into the eCognition software through an Application Programming Interface (API), the time could be reduced to a few minutes.

However, using eCognition's trial-and-error approach, an experienced operator may need to spend more than two hours to select and test segmentation parameters to achieve the best possible segmentation result. In the end, the desired segmentation result may still not be achieved (see the yellow circles in Plate 1j and 1l).

Segmentation of Forests

Plate 2 shows the training process of the FBSP optimizer (Plate 2a, 2b, and 2c) and the segmentation of forests using the FBSP optimizer and eCognition (Plate 2d). The pan-sharpened QuickBird 0.7 m MS image was used as input. The training and segmentation processes are the same as those of the building segmentation described above, but only two iterations were required to obtain a proper set of segmentation parameters for forest segmentation. From the red highlighted segments in Plate 2b, 2c, and 2d, it can be seen that forests with a similar scale (size) to that of the training object (Plate 2b) can be optimally segmented. The segmentation parameters selected by the user for the initial segmentation are shown in Iteration 1 of Table 7. The optimal segmentation parameters for forest segmentation, as estimated by the FBSP optimizer, are shown in Iteration 2 of Table 7. The corresponding feature information of the target object (forest) and of the corresponding sub-objects is shown in Table 8 and Table 9, respectively.

Plate 2. Training process of the FBSP optimizer: (a) initial segmentation and sub-objects for forest (red), (b) merging of sub-objects into the target object (red) for supervised training, (c) second iteration of the FBSP optimizer achieving the optimal parameters, and (d) the segmentation of forests in the eCognition environment (Definiens 7) using the parameters from the FBSP optimizer.

TABLE 7. SEGMENTATION PARAMETERS IN EACH ITERATION OF THE FOREST SEGMENTATION (rows: Scale, Shape, Smoothness; columns: Iteration 1, Iteration 2; values not reproduced)

TABLE 8. TARGET OBJECT FEATURE INFORMATION (columns: Texture, Stability, Brightness, Size, Rectangle Fit, Compactness; values not reproduced)

TABLE 9. NUMBER OF SUB-OBJECTS AND THEIR FEATURE INFORMATION (rows: first and second iterations; columns: Sub-objects, Texture, Stability, Brightness, Area; values not reproduced)

Segmentation of Commercial Complexes

Plate 3a, 3b, and 3c show the segmentation process of commercial complexes using the FBSP optimizer and the pan-sharpened Ikonos 1 m MS image. Because the spectral variation on the roof of individual complexes is not large, optimal segmentation parameters can be achieved by the FBSP optimizer in the second iteration. Plate 3d shows the final segmentation result using eCognition Developer 8 and the segmentation parameters from the second iteration of the FBSP optimizer.

Plate 3. Training process of the FBSP optimizer: (a) initial segmentation and sub-objects of a commercial complex (red), (b) merging of sub-objects into the target object (red) for supervised training, (c) second iteration of the FBSP optimizer achieving the optimal parameters, and (d) the segmentation of commercial complexes in the eCognition environment (eCognition Developer 8) using the parameters from the FBSP optimizer.

Result Assessment, Discussion, and Recommendation

Result Assessment

The purpose of this research was to develop a new approach that improves both the efficiency of image segmentation parameter selection and the accuracy of the segmentation result using eCognition. Therefore, the final image segmentation results are compared with those from the trial-and-error process suggested by eCognition, in terms of (a) segmentation accuracy, (b) processing speed, and (c) independence from the operator's experience. For the accuracy assessment, a second-year master's student focusing on image segmentation was asked to segment different objects of interest in the pan-sharpened QuickBird and Ikonos images using both eCognition's trial-and-error process and the FBSP optimizer. The student was selected for two reasons: (a) the student had more than a half-year of segmentation experience using eCognition on a daily basis at the time of the assessment, and (b) the student is independent of the algorithm and software development process. Tables 10 and 11 show the evaluation results.

TABLE 10. SEGMENTATION ACCURACY OF DIFFERENT LAND-COVER OBJECTS USING ECOGNITION'S TRIAL-AND-ERROR PROCESS
(columns: Land-cover object; Nearly perfect segmentation; Under segmentation; Over segmentation; Mixed segmentation; Total)
Buildings: 22* (remaining counts not reproduced)
Lawn areas, Trees, Parking lots without cars: counts not reproduced
Parking lots with cars: too complex; cars and parking lot cannot be segmented as one segment
Roads: roads are connected; almost all segments are over or under segmented
Nearly perfect (optimal) segmentation: segment boundary is the same or nearly the same as the reference boundary. Under segmentation: segment is less than 10 percent larger than the reference segment. Over segmentation: the object to be segmented still contains two or more segments. Mixed segmentation: the segment of the object is mixed with an object from another class.
* Number of objects in the evaluation area

TABLE 11. SEGMENTATION ACCURACY OF DIFFERENT LAND-COVER OBJECTS USING THE PROPOSED SUPERVISED AND FUZZY-BASED APPROACH (I.E., FBSP OPTIMIZER AND ECOGNITION)
(columns: Land-cover object; Nearly perfect segmentation; Under segmentation; Over segmentation; Mixed segmentation; Total)
Buildings: 33* (remaining counts not reproduced)
Lawn areas, Trees, Parking lots without cars: counts not reproduced
Parking lots with cars: too complex; cars and parking lot cannot be segmented as one segment
Roads: roads are connected; almost all segments are over or under segmented
The definitions of nearly perfect, under, over, and mixed segmentation are the same as in Table 10.
* Number of objects in the evaluation area
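The categories used in Tables 10 and 11 can be reproduced programmatically once reference segments have been digitized. The Python sketch below is one possible operationalization of the category definitions given with Table 10, not the exact procedure used by the operator: the boundary tolerance for a "nearly perfect" match and the interpretation of the 10 percent size criterion are assumptions.

```python
def categorize_segment(seg_area, ref_area, overlap_area, n_segments_in_ref,
                       foreign_overlap_area, near_perfect_tol=0.05,
                       under_tol=0.10):
    """Assign one of the Table 10/11 categories to a segmented object.

    seg_area             -- area of the segment covering the reference object
    ref_area             -- area of the reference (ideal) segment
    overlap_area         -- area shared by the segment and the reference
    n_segments_in_ref    -- number of segments falling inside the reference object
    foreign_overlap_area -- part of the segment lying on another land-cover class
    near_perfect_tol     -- assumed boundary tolerance (not specified in the paper)
    under_tol            -- assumed size tolerance based on the 10 percent figure
                            cited in the Table 10 note
    """
    if n_segments_in_ref >= 2:
        return "over segmentation"      # object still split into several segments
    if foreign_overlap_area > 0:
        return "mixed segmentation"     # segment mixes in another class
    if (abs(seg_area - ref_area) <= near_perfect_tol * ref_area
            and overlap_area >= (1.0 - near_perfect_tol) * ref_area):
        return "nearly perfect segmentation"
    if seg_area > (1.0 + under_tol) * ref_area:
        return "under segmentation"     # segment extends beyond the reference
    return "unclassified"
```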

The total number of segmented land-cover objects, and the number of nearly perfect (optimal), under, over, and mixed segmentations in each land-cover class, were determined using a combined qualitative and quantitative analysis approach. The operator first visually identified the ideal segments of the objects to be assessed in a land-cover class (referred to as reference segments). Then, the difference between the boundaries of the reference segments and the resulting FBSP segments was calculated quantitatively. From Table 12, it can be seen that the improvement in the segmentation accuracy ranges from 16 percent (lawn areas) to 55 percent (trees) for nearly perfect (optimal) segments.

In terms of processing speed, the time taken by the master's student to segment the objects of interest is shown in Table 13. It can be seen that the FBSP optimizer can increase the segmentation speed by more than four times. When using the FBSP optimizer, most of the time was taken by the manual and iterative input of the segmentation parameters and the feature information of the target object and its sub-objects (see Figure 3 and Tables 4, 5, and 6). If the software of the FBSP optimizer is integrated into the eCognition software, the segmentation time will be further reduced to several minutes.

As shown in Plate 1d and 1e and Tables 4, 5, and 6, the FBSP optimizer can estimate the optimal segmentation parameters required to merge the sub-segments to form the targeted object segment. Since the optimal segmentation parameters do not depend on the operator's knowledge and experience, both inexperienced and experienced operators will achieve identical segmentation results using the FBSP optimizer. This means that the approach can be successfully used by those with limited segmentation experience.


AN INTEGRATED APPROACH TO AGRICULTURAL CROP CLASSIFICATION USING SPOT5 HRV IMAGES

AN INTEGRATED APPROACH TO AGRICULTURAL CROP CLASSIFICATION USING SPOT5 HRV IMAGES AN INTEGRATED APPROACH TO AGRICULTURAL CROP CLASSIFICATION USING SPOT5 HRV IMAGES Chang Yi 1 1,2,*, Yaozhong Pan 1, 2, Jinshui Zhang 1, 2 College of Resources Science and Technology, Beijing Normal University,

More information

TOWARDS IMPROVING SEGMENTATION OF VERY HIGH RESOLUTION SATELLITE IMAGERY

TOWARDS IMPROVING SEGMENTATION OF VERY HIGH RESOLUTION SATELLITE IMAGERY TOWARDS IMPROVING SEGMENTATION OF VERY HIGH RESOLUTION SATELLITE IMAGERY BEN A. WUEST September 2008 TECHNICAL REPORT NO. 261 TOWARDS IMPROVING SEGMENTATION OF VERY HIGH RESOLUTON SATELLITE IMAGERY Ben

More information

PARAMETER TESTS FOR IMAGE SEGMENTATION OF AN AGRICULTURAL REGION

PARAMETER TESTS FOR IMAGE SEGMENTATION OF AN AGRICULTURAL REGION INTERNATIONAL JOURNAL OF ELECTRONICS; MECHANICAL and MECHATRONICS ENGINEERING Vol.3 Num 2 pp.(515-524) PARAMETER TESTS FOR IMAGE SEGMENTATION OF AN AGRICULTURAL REGION Z. Damla Uça AVCI İstanbul Technical

More information

Very High Resolution Satellite Image Classification Using Fuzzy Rule-Based Systems

Very High Resolution Satellite Image Classification Using Fuzzy Rule-Based Systems Algorithms 2013, 6, 762-781; doi:10.3390/a6040762 Article OPEN ACCESS Algorithms ISSN 1999-4893 www.mdpi.com/journal/algorithms Very High Resolution Satellite Image Classification Using Fuzzy Rule-Based

More information

Contextual High-Resolution Image Classification by Markovian Data Fusion, Adaptive Texture Extraction, and Multiscale Segmentation

Contextual High-Resolution Image Classification by Markovian Data Fusion, Adaptive Texture Extraction, and Multiscale Segmentation IGARSS-2011 Vancouver, Canada, July 24-29, 29, 2011 Contextual High-Resolution Image Classification by Markovian Data Fusion, Adaptive Texture Extraction, and Multiscale Segmentation Gabriele Moser Sebastiano

More information

A NEW CLASSIFICATION METHOD FOR HIGH SPATIAL RESOLUTION REMOTE SENSING IMAGE BASED ON MAPPING MECHANISM

A NEW CLASSIFICATION METHOD FOR HIGH SPATIAL RESOLUTION REMOTE SENSING IMAGE BASED ON MAPPING MECHANISM Proceedings of the 4th GEOBIA, May 7-9, 2012 - Rio de Janeiro - Brazil. p.186 A NEW CLASSIFICATION METHOD FOR HIGH SPATIAL RESOLUTION REMOTE SENSING IMAGE BASED ON MAPPING MECHANISM Guizhou Wang a,b,c,1,

More information

URBAN IMPERVIOUS SURFACE EXTRACTION FROM VERY HIGH RESOLUTION IMAGERY BY ONE-CLASS SUPPORT VECTOR MACHINE

URBAN IMPERVIOUS SURFACE EXTRACTION FROM VERY HIGH RESOLUTION IMAGERY BY ONE-CLASS SUPPORT VECTOR MACHINE URBAN IMPERVIOUS SURFACE EXTRACTION FROM VERY HIGH RESOLUTION IMAGERY BY ONE-CLASS SUPPORT VECTOR MACHINE P. Li, H. Xu, S. Li Institute of Remote Sensing and GIS, School of Earth and Space Sciences, Peking

More information

Object-oriented Classification of Urban Areas Using Lidar and Aerial Images

Object-oriented Classification of Urban Areas Using Lidar and Aerial Images Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography Vol. 33, No. 3, 173-179, 2015 http://dx.doi.org/10.7848/ksgpc.2015.33.3.173 ISSN 1598-4850(Print) ISSN 2288-260X(Online)

More information

APPLICATION OF SOFTMAX REGRESSION AND ITS VALIDATION FOR SPECTRAL-BASED LAND COVER MAPPING

APPLICATION OF SOFTMAX REGRESSION AND ITS VALIDATION FOR SPECTRAL-BASED LAND COVER MAPPING APPLICATION OF SOFTMAX REGRESSION AND ITS VALIDATION FOR SPECTRAL-BASED LAND COVER MAPPING J. Wolfe a, X. Jin a, T. Bahr b, N. Holzer b, * a Harris Corporation, Broomfield, Colorado, U.S.A. (jwolfe05,

More information

The Feature Analyst Extension for ERDAS IMAGINE

The Feature Analyst Extension for ERDAS IMAGINE The Feature Analyst Extension for ERDAS IMAGINE Automated Feature Extraction Software for GIS Database Maintenance We put the information in GIS SM A Visual Learning Systems, Inc. White Paper September

More information

Structural Analysis of Aerial Photographs (HB47 Computer Vision: Assignment)

Structural Analysis of Aerial Photographs (HB47 Computer Vision: Assignment) Structural Analysis of Aerial Photographs (HB47 Computer Vision: Assignment) Xiaodong Lu, Jin Yu, Yajie Li Master in Artificial Intelligence May 2004 Table of Contents 1 Introduction... 1 2 Edge-Preserving

More information

ORGANIZATION AND REPRESENTATION OF OBJECTS IN MULTI-SOURCE REMOTE SENSING IMAGE CLASSIFICATION

ORGANIZATION AND REPRESENTATION OF OBJECTS IN MULTI-SOURCE REMOTE SENSING IMAGE CLASSIFICATION ORGANIZATION AND REPRESENTATION OF OBJECTS IN MULTI-SOURCE REMOTE SENSING IMAGE CLASSIFICATION Guifeng Zhang, Zhaocong Wu, lina Yi School of remote sensing and information engineering, Wuhan University,

More information

A NEW STRATEGY FOR DSM GENERATION FROM HIGH RESOLUTION STEREO SATELLITE IMAGES BASED ON CONTROL NETWORK INTEREST POINT MATCHING

A NEW STRATEGY FOR DSM GENERATION FROM HIGH RESOLUTION STEREO SATELLITE IMAGES BASED ON CONTROL NETWORK INTEREST POINT MATCHING A NEW STRATEGY FOR DSM GENERATION FROM HIGH RESOLUTION STEREO SATELLITE IMAGES BASED ON CONTROL NETWORK INTEREST POINT MATCHING Z. Xiong a, Y. Zhang a a Department of Geodesy & Geomatics Engineering, University

More information

Data Fusion. Merging data from multiple sources to optimize data or create value added data

Data Fusion. Merging data from multiple sources to optimize data or create value added data Data Fusion Jeffrey S. Evans - Landscape Ecologist USDA Forest Service Rocky Mountain Research Station Forestry Sciences Lab - Moscow, Idaho Data Fusion Data Fusion is a formal framework in which are expressed

More information

BUILDING DETECTION AND STRUCTURE LINE EXTRACTION FROM AIRBORNE LIDAR DATA

BUILDING DETECTION AND STRUCTURE LINE EXTRACTION FROM AIRBORNE LIDAR DATA BUILDING DETECTION AND STRUCTURE LINE EXTRACTION FROM AIRBORNE LIDAR DATA C. K. Wang a,, P.H. Hsu a, * a Dept. of Geomatics, National Cheng Kung University, No.1, University Road, Tainan 701, Taiwan. China-

More information

IMAGE DATA AND LIDAR AN IDEAL COMBINATION MATCHED BY OBJECT- ORIENTED ANALYSIS

IMAGE DATA AND LIDAR AN IDEAL COMBINATION MATCHED BY OBJECT- ORIENTED ANALYSIS IMAGE DATA AND LIDAR AN IDEAL COMBINATION MATCHED BY OBJECT- ORIENTED ANALYSIS F.P. Kressler a, *, K. Steinnocher a a ARC systems research, Environmental Planning Department, A-1220 Viena, Austria - (florian.kressler,

More information

HIGH RESOLUTION REMOTE SENSING IMAGE SEGMENTATION BASED ON GRAPH THEORY AND FRACTAL NET EVOLUTION APPROACH

HIGH RESOLUTION REMOTE SENSING IMAGE SEGMENTATION BASED ON GRAPH THEORY AND FRACTAL NET EVOLUTION APPROACH HIGH RESOLUTION REMOTE SENSING IMAGE SEGMENTATION BASED ON GRAPH THEORY AND FRACTAL NET EVOLUTION APPROACH Yi Yang, Haitao Li, Yanshun Han, Haiyan Gu Key Laboratory of Geo-informatics of State Bureau of

More information

Parameter Optimization in Multi-scale Segmentation of High Resolution Remotely Sensed Image and Its Application in Object-oriented Classification

Parameter Optimization in Multi-scale Segmentation of High Resolution Remotely Sensed Image and Its Application in Object-oriented Classification Parameter Optimization in Multi-scale Segmentation of High Resolution Remotely Sensed Image and Its Application in Object-oriented Classification Lan Zheng College of Geography Fujian Normal University

More information

A Toolbox for Teaching Image Fusion in Matlab

A Toolbox for Teaching Image Fusion in Matlab Available online at www.sciencedirect.com ScienceDirect Procedia - Social and Behavioral Sciences 197 ( 2015 ) 525 530 7th World Conference on Educational Sciences, (WCES-2015), 05-07 February 2015, Novotel

More information

Exelis Visual Information Software Solutions for TERRAIN ANALYSIS. Defense & Intelligence SOLUTIONS GUIDE.

Exelis Visual Information Software Solutions for TERRAIN ANALYSIS. Defense & Intelligence SOLUTIONS GUIDE. Exelis Visual Information for TERRAIN ANALYSIS Defense & Intelligence SOLUTIONS GUIDE www.exelisvis.com MISSION SUCCESS The U.S. Armed Forces has long acknowledged the connection between battlefield terrain

More information

Object Based Image Analysis: Introduction to ecognition

Object Based Image Analysis: Introduction to ecognition Object Based Image Analysis: Introduction to ecognition ecognition Developer 9.0 Description: We will be using ecognition and a simple image to introduce students to the concepts of Object Based Image

More information

Recognition with ecognition

Recognition with ecognition Recognition with ecognition Skid trail detection with multiresolution segmentation in ecognition Developer Presentation from Susann Klatt Research Colloqium 4th Semester M.Sc. Forest Information Technology

More information

PROCESS ORIENTED OBJECT-BASED ALGORITHMS FOR SINGLE TREE DETECTION USING LASER SCANNING

PROCESS ORIENTED OBJECT-BASED ALGORITHMS FOR SINGLE TREE DETECTION USING LASER SCANNING PROCESS ORIENTED OBJECT-BASED ALGORITHMS FOR SINGLE TREE DETECTION USING LASER SCANNING Dirk Tiede 1, Christian Hoffmann 2 1 University of Salzburg, Centre for Geoinformatics (Z_GIS), Salzburg, Austria;

More information

Keywords: impervious surface mapping; multi-temporal data; change detection; high-resolution imagery; LiDAR; object-based post-classification fusion

Keywords: impervious surface mapping; multi-temporal data; change detection; high-resolution imagery; LiDAR; object-based post-classification fusion Article An Improved Method for Impervious Surface Mapping Incorporating Lidar Data and High- Resolution Imagery at Different Acquisition Times Hui Luo 1,2, Le Wang 3, *, Chen Wu 4, and Lei Zhang 5 1 School

More information

ArcGIS Pro: Image Segmentation, Classification, and Machine Learning. Jeff Liedtke and Han Hu

ArcGIS Pro: Image Segmentation, Classification, and Machine Learning. Jeff Liedtke and Han Hu ArcGIS Pro: Image Segmentation, Classification, and Machine Learning Jeff Liedtke and Han Hu Overview of Image Classification in ArcGIS Pro Overview of the classification workflow Classification tools

More information

MODELING FOR RESIDUAL STRESS, SURFACE ROUGHNESS AND TOOL WEAR USING AN ADAPTIVE NEURO FUZZY INFERENCE SYSTEM

MODELING FOR RESIDUAL STRESS, SURFACE ROUGHNESS AND TOOL WEAR USING AN ADAPTIVE NEURO FUZZY INFERENCE SYSTEM CHAPTER-7 MODELING FOR RESIDUAL STRESS, SURFACE ROUGHNESS AND TOOL WEAR USING AN ADAPTIVE NEURO FUZZY INFERENCE SYSTEM 7.1 Introduction To improve the overall efficiency of turning, it is necessary to

More information

ENVI. Get the Information You Need from Imagery.

ENVI. Get the Information You Need from Imagery. Visual Information Solutions ENVI. Get the Information You Need from Imagery. ENVI is the premier software solution to quickly, easily, and accurately extract information from geospatial imagery. Easy

More information

Chapter 7 Fuzzy Logic Controller

Chapter 7 Fuzzy Logic Controller Chapter 7 Fuzzy Logic Controller 7.1 Objective The objective of this section is to present the output of the system considered with a fuzzy logic controller to tune the firing angle of the SCRs present

More information

Definiens. Professional 5. Reference Book. Definiens AG.

Definiens. Professional 5. Reference Book. Definiens AG. Definiens Professional 5 Reference Book Definiens AG www.definiens.com 1 Algorithms Reference Imprint and Version Document Version 5.0.6.1 Copyright 2006 Definiens AG. All rights reserved. This document

More information

EVALUATION OF VARIOUS SEGMENTATION TOOLS FOR EXTRACTION OF URBAN FEATURES USING HIGH RESOLUTION REMOTE SENSING DATA

EVALUATION OF VARIOUS SEGMENTATION TOOLS FOR EXTRACTION OF URBAN FEATURES USING HIGH RESOLUTION REMOTE SENSING DATA EVALUATION OF VARIOUS SEGMENTATION TOOLS FOR EXTRACTION OF URBAN FEATURES USING HIGH RESOLUTION REMOTE SENSING DATA Vandita Srivastava Email ID : Vandita@iirs.gov.in Indian Institute of Remote Sensing

More information

AN APPROACH OF SEMIAUTOMATED ROAD EXTRACTION FROM AERIAL IMAGE BASED ON TEMPLATE MATCHING AND NEURAL NETWORK

AN APPROACH OF SEMIAUTOMATED ROAD EXTRACTION FROM AERIAL IMAGE BASED ON TEMPLATE MATCHING AND NEURAL NETWORK AN APPROACH OF SEMIAUTOMATED ROAD EXTRACTION FROM AERIAL IMAGE BASED ON TEMPLATE MATCHING AND NEURAL NETWORK Xiangyun HU, Zuxun ZHANG, Jianqing ZHANG Wuhan Technique University of Surveying and Mapping,

More information

ROAD EXTRACTION IN SUBURBAN AREAS BASED ON NORMALIZED CUTS

ROAD EXTRACTION IN SUBURBAN AREAS BASED ON NORMALIZED CUTS In: Stilla U et al (Eds) PIA07. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 36 (3/W49A) ROAD EXTRACTION IN SUBURBAN AREAS BASED ON NORMALIZED CUTS A. Grote

More information

Automated Extraction of Buildings from Aerial LiDAR Point Cloud and Digital Imaging Datasets for 3D Cadastre - Preliminary Results

Automated Extraction of Buildings from Aerial LiDAR Point Cloud and Digital Imaging Datasets for 3D Cadastre - Preliminary Results Automated Extraction of Buildings from Aerial LiDAR Point Cloud and Digital Imaging Datasets for 3D Pankaj Kumar 1*, Alias Abdul Rahman 1 and Gurcan Buyuksalih 2 ¹Department of Geoinformation Universiti

More information

VEHICLE QUEUE DETECTION IN SATELLITE IMAGES OF URBAN AREAS

VEHICLE QUEUE DETECTION IN SATELLITE IMAGES OF URBAN AREAS VEHICLE QUEUE DETECTION IN SATELLITE IMAGES OF URBAN AREAS J. Leitloff 1, S. Hinz 2, U. Stilla 1 1 Photogrammetry and Remote Sensing, 2 Remote Sensing Technology Technische Universitaet Muenchen, Arcisstrasse

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

MAP ACCURACY ASSESSMENT ISSUES WHEN USING AN OBJECT-ORIENTED APPROACH INTRODUCTION

MAP ACCURACY ASSESSMENT ISSUES WHEN USING AN OBJECT-ORIENTED APPROACH INTRODUCTION MAP ACCURACY ASSESSMENT ISSUES WHEN USING AN OBJECT-ORIENTED APPROACH Meghan Graham MacLean, PhD Candidate Dr. Russell G. Congalton, Professor Department of Natural Resources & the Environment University

More information

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality

More information

BUILDING MODEL RECONSTRUCTION FROM DATA INTEGRATION INTRODUCTION

BUILDING MODEL RECONSTRUCTION FROM DATA INTEGRATION INTRODUCTION BUILDING MODEL RECONSTRUCTION FROM DATA INTEGRATION Ruijin Ma Department Of Civil Engineering Technology SUNY-Alfred Alfred, NY 14802 mar@alfredstate.edu ABSTRACT Building model reconstruction has been

More information

MSEG: A GENERIC REGION-BASED MULTI-SCALE IMAGE SEGMENTATION ALGORITHM FOR REMOTE SENSING IMAGERY INTRODUCTION

MSEG: A GENERIC REGION-BASED MULTI-SCALE IMAGE SEGMENTATION ALGORITHM FOR REMOTE SENSING IMAGERY INTRODUCTION MSEG: A GENERIC REGION-BASED MULTI-SCALE IMAGE SEGMENTATION ALGORITHM FOR REMOTE SENSING IMAGERY Angelos Tzotsos, PhD Student Demetre Argialas, Professor Remote Sensing Laboratory National Technical University

More information

IMPROVING 2D CHANGE DETECTION BY USING AVAILABLE 3D DATA

IMPROVING 2D CHANGE DETECTION BY USING AVAILABLE 3D DATA IMPROVING 2D CHANGE DETECTION BY USING AVAILABLE 3D DATA C.J. van der Sande a, *, M. Zanoni b, B.G.H. Gorte a a Optical and Laser Remote Sensing, Department of Earth Observation and Space systems, Delft

More information

BUILDINGS CHANGE DETECTION BASED ON SHAPE MATCHING FOR MULTI-RESOLUTION REMOTE SENSING IMAGERY

BUILDINGS CHANGE DETECTION BASED ON SHAPE MATCHING FOR MULTI-RESOLUTION REMOTE SENSING IMAGERY BUILDINGS CHANGE DETECTION BASED ON SHAPE MATCHING FOR MULTI-RESOLUTION REMOTE SENSING IMAGERY Medbouh Abdessetar, Yanfei Zhong* State Key Laboratory of Information Engineering in Surveying, Mapping and

More information

Image Segmentation Based on Constrained Spectral Variance Difference and Edge Penalty

Image Segmentation Based on Constrained Spectral Variance Difference and Edge Penalty Remote Sens. 2015, 7, 5980-6004; doi:10.3390/rs70505980 Article OPEN ACCESS remote sensing ISSN 2072-4292 www.mdpi.com/journal/remotesensing Image Segmentation Based on Constrained Spectral Variance Difference

More information

ENVI Automated Image Registration Solutions

ENVI Automated Image Registration Solutions ENVI Automated Image Registration Solutions Xiaoying Jin Harris Corporation Table of Contents Introduction... 3 Overview... 4 Image Registration Engine... 6 Image Registration Workflow... 8 Technical Guide...

More information

A NEW ALGORITHM FOR AUTOMATIC ROAD NETWORK EXTRACTION IN MULTISPECTRAL SATELLITE IMAGES

A NEW ALGORITHM FOR AUTOMATIC ROAD NETWORK EXTRACTION IN MULTISPECTRAL SATELLITE IMAGES Proceedings of the 4th GEOBIA, May 7-9, 2012 - Rio de Janeiro - Brazil. p.455 A NEW ALGORITHM FOR AUTOMATIC ROAD NETWORK EXTRACTION IN MULTISPECTRAL SATELLITE IMAGES E. Karaman, U. Çinar, E. Gedik, Y.

More information

Object-Based Correspondence Analysis for Improved Accuracy in Remotely Sensed Change Detection

Object-Based Correspondence Analysis for Improved Accuracy in Remotely Sensed Change Detection Proceedings of the 8th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences Shanghai, P. R. China, June 25-27, 2008, pp. 283-290 Object-Based Correspondence

More information

FUZZY-CLASSIFICATION AND ZIPLOCK SNAKES FOR ROAD EXTRACTION FROM IKONOS IMAGES

FUZZY-CLASSIFICATION AND ZIPLOCK SNAKES FOR ROAD EXTRACTION FROM IKONOS IMAGES FUZZY-CLASSIFICATION AND ZIPLOCK SNAKES FOR ROAD EXTRACTION FROM IKONOS IMAGES Uwe Bacher, Helmut Mayer Institute for Photogrammetry and Catrography Bundeswehr University Munich D-85577 Neubiberg, Germany.

More information

OBJECT IDENTIFICATION AND FEATURE EXTRACTION TECHNIQUES OF A SATELLITE DATA: A REVIEW

OBJECT IDENTIFICATION AND FEATURE EXTRACTION TECHNIQUES OF A SATELLITE DATA: A REVIEW OBJECT IDENTIFICATION AND FEATURE EXTRACTION TECHNIQUES OF A SATELLITE DATA: A REVIEW Navjeet 1, Simarjeet Kaur 2 1 Department of Computer Engineering Sri Guru Granth Sahib World University Fatehgarh Sahib,

More information

A DATA DRIVEN METHOD FOR FLAT ROOF BUILDING RECONSTRUCTION FROM LiDAR POINT CLOUDS

A DATA DRIVEN METHOD FOR FLAT ROOF BUILDING RECONSTRUCTION FROM LiDAR POINT CLOUDS A DATA DRIVEN METHOD FOR FLAT ROOF BUILDING RECONSTRUCTION FROM LiDAR POINT CLOUDS A. Mahphood, H. Arefi *, School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran,

More information

GeoImaging Accelerator Pansharpen Test Results. Executive Summary

GeoImaging Accelerator Pansharpen Test Results. Executive Summary Executive Summary After demonstrating the exceptional performance improvement in the orthorectification module (approximately fourteen-fold see GXL Ortho Performance Whitepaper), the same approach has

More information

BerkeleyImageSeg User s Guide

BerkeleyImageSeg User s Guide BerkeleyImageSeg User s Guide 1. Introduction Welcome to BerkeleyImageSeg! This is designed to be a lightweight image segmentation application, easy to learn and easily automated for repetitive processing

More information

THE USE OF ANISOTROPIC HEIGHT TEXTURE MEASURES FOR THE SEGMENTATION OF AIRBORNE LASER SCANNER DATA

THE USE OF ANISOTROPIC HEIGHT TEXTURE MEASURES FOR THE SEGMENTATION OF AIRBORNE LASER SCANNER DATA THE USE OF ANISOTROPIC HEIGHT TEXTURE MEASURES FOR THE SEGMENTATION OF AIRBORNE LASER SCANNER DATA Sander Oude Elberink* and Hans-Gerd Maas** *Faculty of Civil Engineering and Geosciences Department of

More information

SUPERPIXELS: THE END OF PIXELS IN OBIA. A COMPARISON OF STATE-OF-THE- ART SUPERPIXEL METHODS FOR REMOTE SENSING DATA

SUPERPIXELS: THE END OF PIXELS IN OBIA. A COMPARISON OF STATE-OF-THE- ART SUPERPIXEL METHODS FOR REMOTE SENSING DATA SUPERPIXELS: THE END OF PIXELS IN OBIA. A COMPARISON OF STATE-OF-THE- ART SUPERPIXEL METHODS FOR REMOTE SENSING DATA O. Csillik * Department of Geoinformatics Z_GIS, University of Salzburg, 5020, Salzburg,

More information

TrueOrtho with 3D Feature Extraction

TrueOrtho with 3D Feature Extraction TrueOrtho with 3D Feature Extraction PCI Geomatics has entered into a partnership with IAVO to distribute its 3D Feature Extraction (3DFE) software. This software package compliments the TrueOrtho workflow

More information

OPTIMIZING A VIDEO PREPROCESSOR FOR OCR. MR IBM Systems Dev Rochester, elopment Division Minnesota

OPTIMIZING A VIDEO PREPROCESSOR FOR OCR. MR IBM Systems Dev Rochester, elopment Division Minnesota OPTIMIZING A VIDEO PREPROCESSOR FOR OCR MR IBM Systems Dev Rochester, elopment Division Minnesota Summary This paper describes how optimal video preprocessor performance can be achieved using a software

More information

Hybrid Model with Super Resolution and Decision Boundary Feature Extraction and Rule based Classification of High Resolution Data

Hybrid Model with Super Resolution and Decision Boundary Feature Extraction and Rule based Classification of High Resolution Data Hybrid Model with Super Resolution and Decision Boundary Feature Extraction and Rule based Classification of High Resolution Data Navjeet Kaur M.Tech Research Scholar Sri Guru Granth Sahib World University

More information

INTEGRATION OF TREE DATABASE DERIVED FROM SATELLITE IMAGERY AND LIDAR POINT CLOUD DATA

INTEGRATION OF TREE DATABASE DERIVED FROM SATELLITE IMAGERY AND LIDAR POINT CLOUD DATA INTEGRATION OF TREE DATABASE DERIVED FROM SATELLITE IMAGERY AND LIDAR POINT CLOUD DATA S. C. Liew 1, X. Huang 1, E. S. Lin 2, C. Shi 1, A. T. K. Yee 2, A. Tandon 2 1 Centre for Remote Imaging, Sensing

More information

Hands on Exercise Using ecognition Developer

Hands on Exercise Using ecognition Developer 1 Hands on Exercise Using ecognition Developer 2 Hands on Exercise Using ecognition Developer Hands on Exercise Using ecognition Developer Go the Windows Start menu and Click Start > All Programs> ecognition

More information

Classification. Vladimir Curic. Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University

Classification. Vladimir Curic. Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University Classification Vladimir Curic Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University Outline An overview on classification Basics of classification How to choose appropriate

More information

Introduction to digital image classification

Introduction to digital image classification Introduction to digital image classification Dr. Norman Kerle, Wan Bakx MSc a.o. INTERNATIONAL INSTITUTE FOR GEO-INFORMATION SCIENCE AND EARTH OBSERVATION Purpose of lecture Main lecture topics Review

More information

RSPS2001 Proceedings Algorithms 79

RSPS2001 Proceedings Algorithms 79 Detecting urban features from IKONOS data using an object-oriented approach Hofmann, P. DEFiNiENS Imaging GmbH, Trappentreustr. 1, D-80339 Munich, Germany PHofmann@definiens.com http: www.definiens.com

More information

A. Benali 1, H. Dermèche 2, E. Zigh1 1, 2 1 National Institute of Telecommunications and Information Technologies and Communications of Oran

A. Benali 1, H. Dermèche 2, E. Zigh1 1, 2 1 National Institute of Telecommunications and Information Technologies and Communications of Oran Elimination of False Detections by Mathematical Morphology for a Semi-automatic Buildings Extraction of Multi Spectral Urban Very High Resolution IKONOS Images A. Benali 1, H. Dermèche 2, E. Zigh1 1, 2

More information

ENVI THE PREMIER SOFTWARE FOR EXTRACTING INFORMATION FROM GEOSPATIAL DATA

ENVI THE PREMIER SOFTWARE FOR EXTRACTING INFORMATION FROM GEOSPATIAL DATA ENVI THE PREMIER SOFTWARE FOR EXTRACTING INFORMATION FROM GEOSPATIAL DATA HarrisGeospatial.com BENEFITS Use one solution to work with all your data types Access a complete suite of analysis tools Customize

More information

FOOTPRINTS EXTRACTION

FOOTPRINTS EXTRACTION Building Footprints Extraction of Dense Residential Areas from LiDAR data KyoHyouk Kim and Jie Shan Purdue University School of Civil Engineering 550 Stadium Mall Drive West Lafayette, IN 47907, USA {kim458,

More information

Image Analysis With the Definiens Software Suite

Image Analysis With the Definiens Software Suite Image Analysis With the Definiens Software Suite Definiens Enterprise Image Intelligence Andreas Kühnen, Senior Sales Manager Malte Sohlbach, Systems Engineering Manager August 2009 Definiens AG 1986 Prof.

More information

Efficacious approach for satellite image classification

Efficacious approach for satellite image classification Journal of Electrical and Electronics Engineering Research Vol. 3(8), pp. 143-150, October 2011 Available online at http://www.academicjournals.org/jeeer ISSN 2141 2367 2011 Academic Journals Full Length

More information

Clustering CS 550: Machine Learning

Clustering CS 550: Machine Learning Clustering CS 550: Machine Learning This slide set mainly uses the slides given in the following links: http://www-users.cs.umn.edu/~kumar/dmbook/ch8.pdf http://www-users.cs.umn.edu/~kumar/dmbook/dmslides/chap8_basic_cluster_analysis.pdf

More information

Hyperspectral Image Segmentation using Homogeneous Area Limiting and Shortest Path Algorithm

Hyperspectral Image Segmentation using Homogeneous Area Limiting and Shortest Path Algorithm Hyperspectral Image Segmentation using Homogeneous Area Limiting and Shortest Path Algorithm Fatemeh Hajiani Department of Electrical Engineering, College of Engineering, Khormuj Branch, Islamic Azad University,

More information

Compression of RADARSAT Data with Block Adaptive Wavelets Abstract: 1. Introduction

Compression of RADARSAT Data with Block Adaptive Wavelets Abstract: 1. Introduction Compression of RADARSAT Data with Block Adaptive Wavelets Ian Cumming and Jing Wang Department of Electrical and Computer Engineering The University of British Columbia 2356 Main Mall, Vancouver, BC, Canada

More information

IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 11, NO. 4, APRIL

IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 11, NO. 4, APRIL IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 11, NO. 4, APRIL 2014 753 Quality Assessment of Panchromatic and Multispectral Image Fusion for the ZY-3 Satellite: From an Information Extraction Perspective

More information

Progressing from object-based to object-oriented image analysis

Progressing from object-based to object-oriented image analysis Chapter 1.2 Progressing from object-based to object-oriented image analysis M. Baatz 1, C. Hoffmann 1, G. Willhauck 1 1 Definiens AG, Munich, Germany; {mbaatz, choffmann, gwillhauck}@definiens.com KEYWORDS:

More information

PHYSICAL BARRIER DETECTION FOR UPDATING OF NAVIGATION DATABASES FROM HIGH RESOLUTION SATELLITE IMAGERY

PHYSICAL BARRIER DETECTION FOR UPDATING OF NAVIGATION DATABASES FROM HIGH RESOLUTION SATELLITE IMAGERY PHYSICAL BARRIER DETECTION FOR UPDATING OF NAVIGATION DATABASES FROM HIGH RESOLUTION SATELLITE IMAGERY Ma Li a,b, *, Anne Grote c, Christian Heipke c, Chen Jun a, Jiang Jie a a National Geomatics Center

More information

One category of visual tracking. Computer Science SURJ. Michael Fischer

One category of visual tracking. Computer Science SURJ. Michael Fischer Computer Science Visual tracking is used in a wide range of applications such as robotics, industrial auto-control systems, traffic monitoring, and manufacturing. This paper describes a new algorithm for

More information

Extraction of cross-sea bridges from GF-2 PMS satellite images using mathematical morphology

Extraction of cross-sea bridges from GF-2 PMS satellite images using mathematical morphology IOP Conference Series: Earth and Environmental Science PAPER OPEN ACCESS Extraction of cross-sea bridges from GF-2 PMS satellite images using mathematical morphology To cite this article: Chao Chen et

More information

Land Cover Classification Techniques

Land Cover Classification Techniques Land Cover Classification Techniques supervised classification and random forests Developed by remote sensing specialists at the USFS Geospatial Technology and Applications Center (GTAC), located in Salt

More information

Image feature extraction from the experimental semivariogram and its application to texture classification

Image feature extraction from the experimental semivariogram and its application to texture classification Image feature extraction from the experimental semivariogram and its application to texture classification M. Durrieu*, L.A. Ruiz*, A. Balaguer** *Dpto. Ingeniería Cartográfica, Geodesia y Fotogrametría,

More information

Aerial photography: Principles. Visual interpretation of aerial imagery

Aerial photography: Principles. Visual interpretation of aerial imagery Aerial photography: Principles Visual interpretation of aerial imagery Overview Introduction Benefits of aerial imagery Image interpretation Elements Tasks Strategies Keys Accuracy assessment Benefits

More information

Limit of the Paper should not be more than 3000 Words = 7/8 Pages) Abstract: About the Author:

Limit of the Paper should not be more than 3000 Words = 7/8 Pages) Abstract: About the Author: SEMI-AUTOMATIC EXTRACTION OF TOPOGRAPHICAL DATABASE FROM HIGH RESOLUTION MULTISPECTRAL REMOTE SENSING DATA KASTURI ROY 1, ALTE SAKET RAOSAHEB 2 1 Student, Symbiosis Institute of Geoinformatics 2 Student,

More information

Submerged Aquatic Vegetation Mapping using Object-Based Image Analysis with Lidar and RGB Imagery

Submerged Aquatic Vegetation Mapping using Object-Based Image Analysis with Lidar and RGB Imagery Submerged Aquatic Vegetation Mapping using Object-Based Image Analysis with Lidar and RGB Imagery Victoria Price Version 1, April 16 2015 Submerged Aquatic Vegetation Mapping using Object-Based Image Analysis

More information

ASSESSMENT OF REMOTE SENSING IMAGE SEGMENTATION QUALITY

ASSESSMENT OF REMOTE SENSING IMAGE SEGMENTATION QUALITY ASSESSMENT OF REMOTE SENSING IMAGE SEGMENTATION QUALITY M. Neubert *, H. Herold Leibniz Institute of Ecological and Regional Development (IOER), Weberplatz 1, D-01217 Dresden, Germany - (m.neubert, h.herold)@ioer.de

More information