Image Segmentation Based on Constrained Spectral Variance Difference and Edge Penalty


Remote Sens. 2015, 7; doi: /rs

Article

OPEN ACCESS

Image Segmentation Based on Constrained Spectral Variance Difference and Edge Penalty

Bo Chen 1, Fang Qiu 2,*, Bingfang Wu 1 and Hongyue Du 3

1 Key Laboratory of Digital Earth Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing, China; E-mails: chenbo2013@radi.ac.cn (B.C.); wubf@radi.ac.cn (B.W.)
2 Geospatial Information Sciences, University of Texas at Dallas, Dallas, TX 75080, USA
3 China Mapping Technology Service Corporation, Beijing, China; duhongyue0620@sina.com
* Author to whom correspondence should be addressed; E-mail: ffqiu@utdallas.edu

Academic Editors: Ioannis Gitas and Prasad S. Thenkabail

Received: 30 January 2015 / Accepted: 29 April 2015 / Published: 13 May 2015

Abstract: Segmentation, which is usually the first step in object-based image analysis (OBIA), greatly influences the quality of final OBIA results. In many existing multi-scale segmentation algorithms, a common problem is that under-segmentation and over-segmentation always coexist at any scale. To address this issue, we propose a new method that integrates the newly developed constrained spectral variance difference (CSVD) and the edge penalty (EP). First, initial segments are produced by a fast scan. Second, the generated segments are merged via a global mutual best-fitting strategy using the CSVD and EP as merging criteria. Finally, very small objects are merged with their nearest neighbors to eliminate the remaining noise. A series of experiments based on three sets of remote sensing images, each with different spatial resolutions, were conducted to evaluate the effectiveness of the proposed method. Both visual and quantitative assessments were performed, and the results show that large objects were better preserved as integral entities while small objects were also still effectively delineated.
The results were also found to be superior to those from eCognition's multi-scale segmentation.

Keywords: remote sensing image segmentation; region merging; multi-scale; constrained spectral variance difference; edge penalty

1. Introduction

The launch of a number of commercial satellites such as IKONOS, GeoEye and WorldView-1, 2, and 3 in the late 1990s has been an exciting development in the field of remote sensing. These satellites provide improved capability to acquire high spatial resolution images. Compared with low and medium resolution images, high spatial resolution images are endowed with more detailed spatial information; however, this detail poses great challenges for traditional image processing approaches, such as pixel-based image classification. Although successfully applied to low and moderate spatial resolution data, pixel-based classification schemes, which treat single pixels as processing units without considering contextual relationships with neighboring pixels, are not sufficient for high spatial resolution data. Because of the well-known issues of spectral and spatial heterogeneity, pixel-based classification often results in a large amount of misclassified noise. As an alternative, object-based image analysis (OBIA or GEOBIA) approaches were developed to classify high spatial resolution data [1-5]. OBIA first partitions imagery into segments, which are homogeneous groups of pixels (often referred to as objects). The image classification is then performed on the objects (rather than pixels) using various types of information extracted from the objects, such as mean spectral values, shapes, textures and other object-level summary statistics. Since it is the image segmentation process that generates image objects and determines the attributes of the objects, the quality of the segmentation significantly influences the final results of OBIA. Image segmentation has long been studied in the field of computer vision, and has been widely applied in industrial and medical image processing [6,7]. In the field of remote sensing, image segmentation gained popularity in the late 1990s [8], and numerous segmentation algorithms have since been developed.
Generally, segmentation algorithms applied in remote sensing can be classified as point-based, edge-based, region-based or hybrid [9-11]. Point-based algorithms usually apply global information from the entire image to search for and label homogeneous pixels without considering their neighborhoods [10]. The most well-known point-based algorithm is histogram thresholding segmentation, which assumes that valleys exist in the histogram between different classes. Generally, histogram thresholding includes three steps: recognizing the histogram modes, searching for the valleys (thresholds) between the modes, and applying the thresholds [12]. Point-based methods are simple and quick, but require that different classes have evidently different values in the images. This method may encounter difficulty when processing remotely sensed imagery of large coverage that exhibits inter-class spectral similarity and intra-class heterogeneity, which may severely deform the histogram modes. Therefore, the histogram thresholding segmentation method is usually applied in the delineation of local objects [12]. Edge-based algorithms exploit the possible existence of a perceivable edge between objects. The two best-known algorithms are the optimal edge detector [12,13] and watershed segmentation [14,15]. The optimal edge detector first uses the Canny operator [16] to detect edges, and the best count method is then utilized to close the edge contours [17]. Watershed segmentation first extracts the gradient information from the original image, and the watershed transformation is then applied to the gradients to generate basins and watersheds. The basins represent the segments and the watersheds the divisions between them. Edge-based algorithms can quickly partition images; the process is highly accurate for images with obvious edges. However, because edge-based algorithms are primarily based on local contrasts, they are particularly sensitive to noise, which may lead to over-segmentation where a real world

object is incorrectly partitioned into several small objects. Additionally, because most edge-based algorithms rely on the step edge model [12,18,19], they are less sensitive to blurry boundaries, which may lead to under-segmentation where all or part of a real world object is incorrectly combined with another object. Because of these defects with edge-based segmentation algorithms, region-based approaches were developed and are widely used. Region-based approaches use regions as the basic unit. Attributes of the regions are extracted to represent heterogeneity or homogeneity. Heterogeneous regions are then separated and homogeneous regions are merged to form segments. Two major region-based algorithms are the split-and-merge algorithm [20] and the region-growing algorithm [21,22]. The split-and-merge algorithm begins by treating the entire image as a single region. Regions are then iteratively split into sub-regions (usually 4 regions via a quad tree) according to a homogeneity/heterogeneity criterion. The splitting continues until all the regions become homogeneous. A final stage merges homogeneous regions and ensures that neighboring objects are heterogeneous. The region-growing algorithm begins from a set of seed pixels that are successively merged with neighboring pixels according to a heterogeneity/homogeneity criterion. The merging ends when all the pixels are merged and all neighboring objects are heterogeneous. The most significant problem of region-based algorithms is that segmentation errors often occur along the boundaries between regions. To combine the advantages of edge-based and region-based methods, increasingly more researchers have developed hybrid approaches. For example, Pavlidis and Liow [23] and Cortez et al. [24] used the edges generated by edge detection to refine the boundaries of split-and-merge segmentation to improve the results. Haris et al.
[25] and Castilla, Hay and Ruiz [26] used watersheds for initial segmentation and then merged these initial segments via a region-merging algorithm. Yu and Clausi [27] and Zhang et al. [28,29] added edge information as part of the merging criterion (MC) of a region-growing algorithm. These hybrid methods generally provide superior results when compared with those of edge-based or region-based methods. Among existing algorithms, multi-scale segmentation, as used by the eCognition software, has been the most widely employed. For example, Baatz and Schäpe [22] adopted a region-growing method using spectral and form heterogeneity changes as merging criteria to generate multi-resolution results. Robinson, Redding and Crisp [30] employed a similar approach by combining the spectral variance difference (SVD) and common boundary length as the MC. Zhang et al. [28,29] employed a hybrid method that integrated an edge penalty (EP) and standard deviation changes as merging criteria to generate multi-scale segmentation. In these multi-scale segmentation algorithms, the concept of scale plays a key role. However, scale as a threshold for the MC often leads to similarly sized segments [22], but the real world is more complicated and contains objects with a large variation in size. Partitioning the image into segments similar in size may simultaneously cause over-segmentation and under-segmentation at a specific scale [31]. The solution to this problem, as offered by the multi-scale approach, is to segment images into differently scaled segmented layers that are linked by an object relation tree; this technique is known as the Fractal Net Evaluation Approach (FNEA) [32]. In FNEA, a layer of segments generated by a specific scale parameter is called an image-object level. Objects at a higher image-object level, in which the scale parameter is larger, are merged from the objects derived from a lower level.
Consequently, the same real world objects may have a number of representations at different scale levels. To utilize the information at different scales, multi-scale classification must analyze the attributes

of various objects at different scales and construct corresponding classification rules. As a result, the analysis in multi-scale segmentation can become formidably complicated. The goal of this study is to develop a new segmentation algorithm that can generate various-sized image objects that are close to their real world counterparts using a single scale parameter. Many existing algorithms [22,30,32-37] use SVD as an MC to describe changes in spectral heterogeneity. However, the SVD is excessively influenced by the object sizes, which is the main cause of the simultaneous over-segmentation and under-segmentation. To address this problem, the proposed approach devises a constrained SVD (CSVD) in the MC to limit the influence of the segment size. Additionally, an EP is incorporated into the MC to increase boundary accuracy. Given these characteristics, the proposed algorithm can be categorized as a hybrid segmentation method. Similar to some other region-based approaches, the proposed approach adopted a three-step strategy. First, a fast scan [37] was applied to produce an over-segmented result. In this stage, every pixel was first treated as a segment, and the SVD was then used as a simple MC to quickly partition the image. In the second stage, a more complex MC based on the CSVD and EP was employed to continue the merging process. A region adjacent graph (RAG) [38,39] and a nearest neighbor graph (NNG) [25] were used to expedite the merging process. Moreover, the global mutual best-fitting [22] strategy was employed to optimize the merging process. In the third and final stage, minor objects with sizes smaller than a pre-defined threshold were merged into their most similar neighboring objects to eliminate remaining noise. In order to assess the performance of the proposed method, we performed two groups of experiments.
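The three-step strategy described above can be illustrated with a deliberately simplified, one-dimensional sketch. Real images are two-dimensional, and the second stage of the paper uses the CSVD and EP with RAG/NNG bookkeeping; here plain SVD and linear adjacency are used throughout, and all function names are our own illustrative choices.

```python
# A toy, 1-D sketch of the three stages: fast scan -> globally cheapest
# merging -> minor object elimination. Plain SVD is used as the MC in all
# stages (the actual method uses CSVD and EP in stage 2).

def svd(seg1, seg2):
    """Spectral variance difference of two segments of gray values."""
    n1, n2 = len(seg1), len(seg2)
    mu1, mu2 = sum(seg1) / n1, sum(seg2) / n2
    return n1 * n2 / (n1 + n2) * (mu1 - mu2) ** 2

def fast_scan(pixels, t1):
    """Stage 1: single left-to-right pass, merging each pixel into the
    previous segment when the SVD stays below t1."""
    segs = [[pixels[0]]]
    for p in pixels[1:]:
        if svd(segs[-1], [p]) < t1:
            segs[-1].append(p)
        else:
            segs.append([p])
    return segs

def merge_best(segs, t2):
    """Stage 2: repeatedly merge the adjacent pair with the globally
    smallest MC while that cost is below t2."""
    while len(segs) > 1:
        costs = [svd(a, b) for a, b in zip(segs, segs[1:])]
        i = min(range(len(costs)), key=costs.__getitem__)
        if costs[i] >= t2:
            break
        segs[i:i + 2] = [segs[i] + segs[i + 1]]
    return segs

def eliminate_minor(segs, min_size):
    """Stage 3: absorb segments smaller than min_size into the more
    similar (lower-SVD) adjacent neighbour."""
    i = 0
    while len(segs) > 1 and i < len(segs):
        if len(segs[i]) < min_size:
            left = svd(segs[i - 1], segs[i]) if i > 0 else float("inf")
            right = svd(segs[i], segs[i + 1]) if i < len(segs) - 1 else float("inf")
            j = i - 1 if left <= right else i + 1
            lo, hi = min(i, j), max(i, j)
            segs[lo:hi + 1] = [segs[lo] + segs[hi]]
            i = 0                      # re-scan from the start after a merge
        else:
            i += 1
    return segs

pixels = [10, 11, 10, 50, 52, 51, 53, 90]
segs = eliminate_minor(merge_best(fast_scan(pixels, 5.0), 200.0), 2)
```

On this toy signal, stage 1 produces the dark run, the bright run and the isolated value 90; stage 2 leaves them separate because their SVDs exceed the threshold; stage 3 then absorbs the single-pixel segment into its more similar neighbor. Note that in this 1-D setting the pair with the globally smallest MC is automatically mutually best-fitting, which is why a plain global minimum serves as a stand-in for the global mutual best-fitting strategy.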
In the first experiment, different parameters of the proposed algorithm were tested to analyze their effects on segmentation performance. For the key parameters, both visual and quantitative assessments were provided. In the second experiment, the results of the proposed method were compared to those from eCognition based on both visual and quantitative assessments. In the quantitative assessments, the rates of over-, under- and well-segmentation were used to evaluate the segmentation quality on small, medium and large objects. The results showed that the proposed algorithm was able to segment objects properly regardless of their size using a single scale parameter, while achieving higher accuracy than eCognition's multi-scale segmentation. Section 2 presents the study area and data, followed by Section 3 where the proposed method is described in detail. Section 4 shows the experimental results. Finally, conclusions and discussions are provided in Section 5.

2. Study Area and Data

Three sets of images with different spatial resolutions, WorldView-2, an aerial image and RapidEye (Figure 1), were chosen as test datasets. Table 1 gives the basic information for these images. Figure 1a shows a pan-sharpened WorldView-2 image with a resolution of 0.6 m; the image covers an area in Hanzhong, China, where the main land cover types are farmland, road and buildings. Figure 1b is an aerial image with a resolution of 1 m covering part of the Three Gorges area, China, containing a small village, farmland and part of a river. Figure 1c is a subset of a RapidEye image for Miyun, China with a spatial resolution of 5 m. A residential area and farmland are located in the center of the image. The upper-left area (colored black in Figure 1c) is a reservoir, and the remainder is mainly forest. For

convenience, Figure 1a-c are hereafter referred to as R1, R2 and R3, respectively. All have 4 bands (blue, green, red and NIR), stretched to the gray scale for parameter comparability.

Figure 1. Images of test data. (a) A WorldView-2 image; (b) An aerial image; (c) A RapidEye image.

The specific parameters of these images are listed in Table 1.

Table 1. Specific parameters of the test images.

  Image | Platform     | Size | Spatial Resolution | Position     | Code
  (a)   | WorldView-2  |      | 0.6 m              | Hanzhong     | R1
  (b)   | Aerial plane |      | 1 m                | Three Gorges | R2
  (c)   | RapidEye     |      | 5 m                | Miyun        | R3

3. Methodology

The proposed method comprises the following general steps (Figure 2). Initial segments are first produced by a fast scan method. The RAG and NNG are then built based on the initial segmentation. Region merging is applied to the RAG and NNG using the CSVD and EP. Finally, minor objects are eliminated to generate the final result. The segmentation results are quantitatively assessed by an empirical discrepancy method.

Figure 2. General steps of the proposed method.

3.1. Initial Segmentation

The objective of initial segmentation is to quickly generate segments for the subsequent region-merging step. In this stage, over-segmentation is allowed, but under-segmentation should be avoided. The initial segmentation is conducted by a quick scan of every pixel from the top-left to the bottom-right of the image. During the scan, each pixel is considered an image object and is compared to its upper and left neighboring objects. If the calculated MC is smaller than a given threshold, then the pixel object is merged with its neighboring object. In this step, only the spectral heterogeneity difference is used as the MC, formulated as follows [22]:

h_diff = (n1 + n2)·h_m − (n1·h1 + n2·h2)    (1)

where h1 and h2 are the heterogeneities of the two adjacent objects before merging, h_m is the heterogeneity after h1 and h2 are merged, and n represents the object size. The heterogeneity h can be computed as the variance of the object:

h = (1/n)·Σ_{i=1}^{n} (x_i − μ)² = SS/n − μ²    (2)

where x is the spectral value of each pixel in an object, μ is the mean spectral value of the object, and SS represents the sum of squares of the pixel values. Applying Equation (2) to Equation (1), we obtain the following:

h_diff = [SS_m − (n1 + n2)·μ_m²] − [(SS1 − n1·μ1²) + (SS2 − n2·μ2²)]    (3)

where SS1 and SS2 are the sums of squares before merging, and SS_m is the sum of squares after merging. Two hidden relationships exist:

(n1 + n2)·μ_m = n1·μ1 + n2·μ2    (4)

SS_m = SS1 + SS2    (5)
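The identities above can be checked numerically; the pixel values below are invented for illustration:

```python
# A quick numerical check (with made-up pixel values) of Equations (1)-(5):
# the merge cost computed from raw variances equals the sum-of-squares form
# of Equation (3), and the hidden relationships (4)-(5) hold.

def stats(xs):
    """Size n, mean mu, sum of squares SS, and variance h = SS/n - mu^2."""
    n = len(xs)
    mu = sum(xs) / n
    ss = sum(x * x for x in xs)
    return n, mu, ss, ss / n - mu * mu

a = [10.0, 12.0, 11.0]                # object 1
b = [30.0, 34.0]                      # object 2
n1, mu1, ss1, h1 = stats(a)
n2, mu2, ss2, h2 = stats(b)
nm, mum, ssm, hm = stats(a + b)       # the merged object

# Equation (1): heterogeneity change caused by the merge
h_diff = (n1 + n2) * hm - (n1 * h1 + n2 * h2)

# Equation (3): the same quantity written with sums of squares
h_diff_ss = (ssm - (n1 + n2) * mum ** 2) \
    - ((ss1 - n1 * mu1 ** 2) + (ss2 - n2 * mu2 ** 2))
assert abs(h_diff - h_diff_ss) < 1e-9

# Equations (4) and (5): the hidden relationships
assert abs((n1 + n2) * mum - (n1 * mu1 + n2 * mu2)) < 1e-9
assert abs(ssm - (ss1 + ss2)) < 1e-9

# h_diff also already equals the closed form derived next in Equation (6)
assert abs(h_diff - n1 * n2 / (n1 + n2) * (mu1 - mu2) ** 2) < 1e-9
```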

By applying Equations (4) and (5) to Equation (3), we obtain the final spectral heterogeneity difference, which is also referred to as the SVD:

h_diff = SVD = (n1·n2/(n1 + n2))·(μ1 − μ2)² = f(n1, n2)·(μ1 − μ2)²    (6)

where f(n1, n2) denotes n1·n2/(n1 + n2).

For an image with b bands, the final SVD is the following:

SVD = (1/b)·Σ_{i=1}^{b} SVD_i = f(n1, n2)·(1/b)·Σ_{i=1}^{b} (μ1,i − μ2,i)²    (7)

3.2. RAG and NNG Construction and Region Merging

3.2.1. RAG and NNG Construction

The RAG is a data structure that describes the segments and their relationships, defined as:

G = (V, E)    (8)

where V is the set of segments, called nodes, and E is the set of edges, each of which stands for a neighborhood of two adjacent nodes. Each node carries particular object-level information, including the object ID, size, mean spectral value, and location. Each edge contains information such as the IDs of the adjacent objects, their dissimilarity, and their common edge length and strength. Once the RAG is established, all information necessary for the merging process, such as the mean spectral value and number of pixels, is stored with the nodes, so the original image is no longer needed. All subsequent processes, including region merging, minor object elimination and the output of results, occur solely on the basis of the RAG.

The NNG is implemented to accelerate the global mutual best-fitting strategy (see Section 3.2.2) in our algorithm. The NNG is a directed graph that can be described as follows:

G_m = (V_m, E_m)    (9)

where V_m represents the set of nodes as in the RAG, but E_m here represents the directed edges of the nodes, which differ from those in the RAG because every edge is directed toward only the neighboring node with the minimum MC in the NNG. Figure 3 shows a simple example of the RAG and NNG. Figure 3a illustrates the location of the objects, and its corresponding RAG is shown in Figure 3b. Every edge in the RAG represents the neighborhood of two objects, and the number on every edge indicates the MC. Figure 3c shows the NNG that was built based upon the RAG.
In the NNG, each node is only linked to its nearest neighbor, which has the minimum MC among its neighbors. Taking node C as an example, C's nearest neighbor is D because D has the minimum MC with C among all of C's neighbor nodes. Thus, an edge is built from C to D. There is a special case for the edges in the NNG, where bidirectional edges exist between two nodes, such as the two edges between A and B in Figure 3c. This is called a cycle. A global best-fitting object pair must be a cycle. Consequently, the global best-fitting merging procedure can be described as follows. First, a cycle heap is constructed by storing all the cycles in the heap. The process of searching for global best-fitting node pairs is actually a search for the cycle with the smallest MC value among all the cycles. Merging is then performed on the object pair that is connected by the cycle with

the smallest MC. As the merging continues, the RAG, NNG and cycle heap are updated synchronously to ensure that the MC between every object pair is correct at each step. Since a cycle is composed of two edges, the worst case is that the size of the cycle heap is half the number of edges; in other words, all edges are cyclic. Consequently, using an NNG to implement global mutual best-fitting can significantly reduce the merging time.

Figure 3. An example of region adjacent graph (RAG) and nearest neighbor graph (NNG). (a) The location of objects; (b) The RAG; and (c) The NNG.

3.2.2. Region Merging

For a given set of initial segments, the merging result depends on the merging order and the termination condition. The merging order is decided by the MC and the merging strategy. The termination condition is also related to the MC.

Merging Criterion

The aim of region merging is to integrate homogeneous partitions into larger segments and keep heterogeneous segments separate. Therefore, the MC can be based on either the homogeneity or the heterogeneity between two objects. In this research, the MC is based on heterogeneity; therefore, object pairs with a smaller MC will be preferentially merged. Many region-merging segmentation algorithms use SVD as part of the MC. SVD includes two parts: f(n1, n2) and (μ1 − μ2)². The former represents the influence of object size, and the latter reflects the impact of spectral difference. The formula for f(n1, n2) implies that it is a monotonically increasing function, given that n1 and n2 are never less than 1. However, when objects in the image vary greatly in size, segmentation using SVD as part of the criterion may cause problems.
For example, consider two pairs of objects: one pair has a size of 100 pixels for each object and a spectral difference of 100 gray scales; the other pair consists of two objects that are 10,000 times larger in size but with a spectral difference of only 1.01 gray scales. According to Equation (6), the SVD of the first pair is 500,000, and that of the second pair is 510,050. Therefore, the former pair has a higher merge priority due to its smaller SVD, even though it has a much greater spectral difference. This means that smaller objects are more prone to being merged with their neighbors than larger objects when SVD is used as part of the MC. As a result, many small objects are often incorrectly merged (i.e., under-segmented) due to their higher merge priority, whereas many large

objects are often partitioned into small objects (i.e., over-segmented) because of their lower merge priority. Consequently, under-segmentation and over-segmentation can simultaneously exist in the results of SVD-based segmentation if the real world objects vary greatly in size. To address this problem, we devised a CSVD to evaluate the spectral heterogeneity difference of two neighboring objects, defined as:

CN = n if n ≤ T; CN = T if n > T (T = 1, 2, 3, …)    (10)

CSVD = f(CN1, CN2)·(1/b)·Σ_{i=1}^{b} (μ1,i − μ2,i)², with f(CN1, CN2) = CN1·CN2/(CN1 + CN2)    (11)

where n is the object size in pixels, and T is a threshold for object size determined by users.

Figure 4. Comparison of f(n1, n2) and f(CN1, CN2). The black contour is the plot for f(n1, n2). The colored surface is the plot for f(CN1, CN2), in which T equals 200.

Figure 4 shows a contour plot of f(n1, n2) and a surface plot of f(CN1, CN2). It can be seen that, as n1 and n2 increase, the corresponding f(n1, n2) always becomes larger, whereas the introduction of the threshold T for the f(n1, n2) component in CSVD significantly constrains the effect of object size (see the surface plot in Figure 4). For an object pair in which both objects are larger than T in size, f(CN1, CN2) is the same as f(T, T). Therefore, the spectral difference will be the main factor that determines the MC, because their size factor, denoted by f(CN1, CN2), is the same and will not increase further as object size increases. For a pair in which both objects are smaller than T in size, both the spectral difference and the object size exert their full influence on the MC. For a pair in which one object is larger than T and the other is smaller than T, the spectral difference can exert its full influence on the MC, but the object size only partially impacts the MC. The concept is similar to that of human recognition, where

both the size of an object and its difference in color relative to the background may jointly determine whether it is detected or missed. When two neighboring objects are small, a person may not be able to differentiate them, even though the objects have very different colors (similar to being merged together). When two neighboring objects are both larger than a particular size, we can usually differentiate them if their color is sufficiently different (similar to being kept separate). When a small object is next to a large object, the size of the smaller object and the color difference between the small and large objects codetermine whether the small object can be detected by human perception.

In order to enhance boundary delineation, an EP [27,29] is also introduced as part of the MC. The EP is a function of edge strength (ES). ES refers to the mean spectral difference between two objects that share a common edge. The formulas for EP and ES are given in Equations (12) and (13), respectively:

EP = exp(ε·ES/ES_max)    (12)

ES = (1/n)·Σ_{i=1}^{n} ESP_i    (13)

where ε is a variable used to adjust the effect of the EP, ES_max is the maximum ES of the initial segmentation, ESP is the pixel spectral difference between the two sides of the common edge, with each side 2 pixels in width, and n is the length of the common edge in pixels. A smaller ES of an object pair corresponds to a greater possibility that merging will occur, because the objects' EP values are small.

The proposed algorithm combines the CSVD and EP to generate the final MC. Most previous studies [40-42] integrated various kinds of normalized values in the MC via addition. However, Xiao et al. [43] found that this practice would desensitize particular important components. Sakar, Biswas and Sharma [44] suggested using multiplication to combine area, spectral difference and variance to obtain the final MC; the authors produced good results.
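Under this reading of Equations (10)-(13), the constrained size factor and the edge penalty can be sketched as follows. The single-band values, the threshold T = 200 and all helper names are illustrative only, and the exponential form of the edge penalty follows our reconstruction of Equation (12):

```python
import math

# An illustrative sketch of the constrained size factor (Equations (10)-(11))
# and the edge penalty (Equation (12), as reconstructed here).

def f(n1, n2):
    """Size factor of Equation (6): n1*n2 / (n1 + n2)."""
    return n1 * n2 / (n1 + n2)

def csvd(n1, mu1, n2, mu2, T):
    """Constrained SVD: object sizes are clamped at T before entering f."""
    cn1, cn2 = min(n1, T), min(n2, T)
    return f(cn1, cn2) * (mu1 - mu2) ** 2

def edge_penalty(es, es_max, eps):
    """Grows with edge strength ES; eps tunes its influence."""
    return math.exp(eps * es / es_max)

# The size bias of plain SVD: a pair of 100-pixel objects 100 gray levels
# apart scores *lower* (merges first) than a pair of 1,000,000-pixel
# objects only 1.01 gray levels apart.
small = f(100, 100) * 100 ** 2                       # 500,000
large = f(10 ** 6, 10 ** 6) * 1.01 ** 2              # 510,050
assert small < large

# With T = 200 the size factor saturates at f(T, T), so the spectrally
# similar large pair now merges first, as intended.
small_c = csvd(100, 100.0, 100, 0.0, T=200)          # still 500,000
large_c = csvd(10 ** 6, 1.01, 10 ** 6, 0.0, T=200)   # f(200, 200) * 1.0201
assert large_c < small_c
```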
To sensitize both the CSVD and EP, our algorithm adopts a multiplication strategy to calculate the final MC:

MC = CSVD × EP    (14)

To re-scale the MC to its original order of magnitude, we use the geometric mean to compute the final MC:

MC = √(CSVD × EP)    (15)

According to Equation (15), if either CSVD or EP equals 0, then the value of MC will be 0, which leads to merging of the object pair.

Merging Strategy

Baatz and Schäpe [22] listed four potential merging strategies for merging object A with its neighboring object B: fitting (if their MC is smaller than a threshold); best-fitting (if their MC is smaller than a threshold and the MC of A and B is the smallest among those between A and A's neighboring objects); local mutual best-fitting (if their MC is smaller than a threshold and A is the best-fitting neighboring object of B and B is the best-fitting neighboring object of A); and global mutual best-fitting (if A and B are a pair of mutual best-fitting objects and their MC is the smallest among all pairs in the image and also smaller than a threshold). Among these four strategies, the latter two are adopted in most region-merging methods because the first two are too simple to work well. Local mutual best-fitting tends to result in segments with similar

size [22]. In our algorithm, global mutual best-fitting is adopted to determine the order of object merging for objects with similar spectral values.

3.3. Minor Object Elimination

After the above steps are performed, minor object elimination is conducted by merging the minor objects, whose sizes are smaller than a threshold, with their most similar neighboring objects.

3.4. Quantitative Assessment Method

An empirical discrepancy method [31,45], which uses manually identified regions as reference objects, was adopted to quantitatively assess segmentation quality. First, clearly separable areas were manually segmented as the reference objects. Then, over-segmentation and under-segmentation were evaluated by two criteria, the AFI [46] and the EPR [31]. The AFI is defined as follows:

AFI = (A_refer − A_largest)/A_refer    (16)

where A_refer and A_largest are, respectively, the areas of the reference object and the largest segment within it. A larger AFI value corresponds to more over-segmentation of the object.

To understand the EPR, we need to introduce the term effective sub-object, which represents a sub-object consisting of more than 55 percent of pixels from the reference area [31]. The extra pixels are those pixels included in the effective sub-objects but not included in the reference area. The definitions of effective sub-objects and extra pixels are illustrated in Figure 5. The bold black rectangle is the reference area, which includes 6 sub-objects. The other polygons are the sub-objects of the reference area, which are produced by segmentation. Sub-objects A, B and C are effective sub-objects, and the gray areas are the extra pixels.

Figure 5. A schematic representation of segmentation illustrating effective sub-objects and extra pixels. The bold black rectangle is the reference area which includes 6 sub-objects. Sub-objects A, B and C are effective sub-objects and the gray areas are the extra pixels.
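The AFI and the 55 percent rule for effective sub-objects can be sketched as follows; the areas and segment IDs below are invented for illustration:

```python
# A sketch of the Area Fit Index (Equation (16)) and the 55% rule for
# effective sub-objects. All areas are in pixels and made up.

def afi(a_refer, a_largest):
    """AFI = (A_refer - A_largest) / A_refer; larger = more over-segmented."""
    return (a_refer - a_largest) / a_refer

def effective_sub_objects(segments):
    """Keep sub-objects that draw more than 55% of their own pixels from the
    reference region. `segments` maps id -> (pixels_inside, pixels_total)."""
    return {sid for sid, (inside, total) in segments.items()
            if inside / total > 0.55}

segments = {                      # (pixels inside reference, total pixels)
    "A": (400, 450),              # ~89% inside -> effective
    "B": (300, 700),              # ~43% inside -> not effective
    "C": (200, 210),              # ~95% inside -> effective
}
assert effective_sub_objects(segments) == {"A", "C"}

# Reference object of 1000 px whose largest internal segment covers 400 px:
assert afi(1000, 400) == 0.6      # well above 0.25 -> over-segmented
```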

The EPR is defined as follows:

EPR = A_extra/A_refer    (17)

where A_extra is the area of extra pixels and A_refer is the area of the reference object. The EPR indicates the degree of under-segmentation: a larger EPR value corresponds to more under-segmentation of the object. In special cases, no effective sub-object exists, or the area of the effective sub-objects in the reference area is too small (less than 55 percent of the reference area in this research). In this situation, we set the EPR to 1, which means that the object is completely under-segmented.

In this research, an object is considered over-segmented if the AFI of the object is greater than a given threshold (we used 0.25 as an empirical number in this research). In contrast, when the EPR of an object is greater than a threshold (also 0.25 in this research), the object is considered under-segmented. An object is regarded as being well-segmented when its EPR and AFI are both smaller than 0.25. The rates of over-, under- and well-segmented objects are used to assess the accuracy. To assess performance on objects of varied sizes, all the objects are divided into 3 groups: small, medium and large objects. The rates of over-, under- and well-segmented objects are calculated for each of these 3 groups.

4. Results and Discussion

4.1. The Effect of Algorithm Parameters

In the proposed method, five parameters control the quality of the final segments: the initial segmentation scale, T (the constraining threshold for the object size in CSVD), ε (the control variable in the EP), the scale parameter of the MC for region merging, and lastly the minimum object size in the minor object elimination process. Since minor object elimination is a simple process, the effect of the minimum object size on the segmentation results was not evaluated.

4.1.1. Scale of Initial Segmentation

Different SVD thresholds were tested as initial segmentation scales for all three study areas.
Since the initial segmentation results were similar, R1 is used as an example for analysis. In Figure 6a, a threshold of 10 was used, and the image was partitioned into 202,608 segments. Figure 6c shows a zoomed-in area of (a). The pixels in the same object have similar values. When the threshold reached 50 (see Figure 6b,d), each object became less homogeneous and the number of objects dramatically decreased to 22,043. Generally, smaller scales produce segments with higher homogeneity but result in too many partitions (over-segmentation), which may increase the computing time in subsequent steps. In contrast, larger scales generate fewer segments but lead to under-segmentation of particular objects, which cannot be fixed by the subsequent region-merging process. Consequently, a reasonable threshold must be chosen to strike a balance between segmentation quality and processing speed. Our experiments used SVD thresholds between 20 and 30 for the scale of initial segmentation, determined through trial and error using the test images. For the initial segmentation, a small scale is preferred because over-segmentation is allowed in this stage.

Figure 6. Initial segments of R1. (a) Threshold of 10 with 202,608 objects; (b) Threshold of 50 with 22,043 objects. Images (c,d) are zoomed-in areas of (a,b), respectively.

4.1.2. The Constraining Threshold for Object Size in CSVD: T

Different T values of 20, 100 and 50,000 were tested on R1, with all other parameters remaining the same: the initial scale was set to 20 and ε to 0. The results are shown in Figure 7a-c, respectively. All three tests partitioned the image into 1400 segments. The influence of T can be seen in two aspects. First, large objects tend to merge with their neighboring objects when T is smaller. In Figure 7a, a very small T (20) caused the large objects within the upper-left rectangle to become under-segmented; when T was increased to 100 (Figure 7b), the segmentation improved. Different types of farmland with different spectral values were separated, and over-segmentation did not occur. When T was further increased to 50,000 (Figure 7c), this area became over-segmented. Similar outcomes can be observed in the other two rectangles in Figure 7. As T increased, the areas in the two rectangles became increasingly more

14 Remote Sens. 2015, fragmented. For large objects, a small T can keep them integrated, but too small a T may cause particular large objects to be under-segmented. It is worth mentioning that when T was set as 50,000, CSVD was equivalent to SVD, because the largest object in Figure 7c has a size of 11,071 pixels. Second, a smaller T, on the other hand, can also better preserve small objects. Figure 7d is a graph showing statistics for the number of objects with sizes smaller than 1000 pixels in Figure 7a c. In Figure 7d, when T was set to the very small value of 20, approximately 900 objects had sizes smaller than 100 pixels. Among these 900 objects, approximately 700 were smaller than 50 pixels, most of which were noise or minor objects that can be ignored. When T was increased to 100 and 50,000, the number of objects smaller than 100 pixels drastically decreased to approximately 600 and 300, including 300 and 100 objects smaller than 50 pixels, respectively. Therefore, a small T helps maintain small objects; however, a T that is too small will produce too many minor objects and noise. Figure 7. The influence of T in constrained spectral variance difference (CSVD). Images (a c) are the segmentation results of R1. All three tests feature an initial scale of 20, an ε of 0 and 1400 segments. However, in (a c), T was set to 20, 100 and 50,000, respectively. Panel (d) is a statistical graph of the number of objects with sizes smaller than 1000 pixels in (a c). To better assess the effect of T, quantitative assessment was performed on different segmentation results generated by different T values, including 20, 100, 200, 400, 800, 5000 and 50,000, and other parameters were kept the same with those in Figure 7. Figure 8a shows the reference data of R1, which

include 22 small objects ( pixels), 13 medium-size objects ( pixels) and 6 large objects (more than 5000 pixels). Figure 8b–d display the plots of the over-, under- and well-segmentation rates, respectively. In Figure 8b, the over-segmentation rates of small, medium and large objects all increase with T. This is because a larger T value makes f(cn1,cn2) between large objects and their neighbors greater, which gives small objects higher priority to be merged, especially those smaller than 100 pixels. When T was increased beyond 800, the over-segmentation rate of small objects showed a slight decrease, because some of the originally over-segmented small objects became well-segmented or under-segmented. In Figure 8c, when T was set to 20, the objects in all 3 size groups were seriously under-segmented because a large number of minor objects were generated with such a small T (Figure 7d). Since the total number of objects was fixed at 1400, fewer small, medium and large objects were produced than there should have been, because much of the object count was consumed by minor objects. When T was increased to 100 and 200, the number of minor objects sharply decreased (Figure 7d); as a result, the under-segmentation rates decreased for all object types. However, when T was increased beyond 200, some of the small objects were merged with their neighbors and became under-segmented. Figure 8d shows the well-segmentation rates of small, medium and large objects, as well as their sum. These rates reached their peaks when T was set to 100 or 200. Figure 8. (a) The reference objects of R1, including 22 small objects ( pixels), 13 medium-size objects ( pixels) and 6 large objects (more than 5000 pixels); (b–d) display the rates of over-segmented, under-segmented and well-segmented objects, respectively.
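The size-constraint mechanism behind these observations can be sketched in a few lines. This sketch assumes the common Ward-style size factor f(n1, n2) = n1·n2/(n1 + n2) and a single spectral band; the paper's full multi-band definition of the CSVD is given in its Methods section, so treat `size_factor` and `csvd` here as illustrative stand-ins. The constrained sizes cn = min(n, T) cap the weight that large objects contribute:

```python
def size_factor(n1, n2):
    # Ward-style weight common to SVD-type merging criteria (assumed form)
    return n1 * n2 / (n1 + n2)

def csvd(mean1, n1, mean2, n2, T):
    """Constrained spectral variance difference, single-band sketch.

    Object sizes are clipped at T before entering the size factor, so a
    pair of large, spectrally similar objects keeps a low merging cost
    and can still be merged.
    """
    cn1, cn2 = min(n1, T), min(n2, T)
    return size_factor(cn1, cn2) * (mean1 - mean2) ** 2
```

For two spectrally similar large objects (means 100 and 102, sizes 10,000 and 12,000 pixels), the unconstrained cost is about 21,818, while T = 100 caps it at 200, roughly a hundredfold reduction. This is why, under the plain SVD, large homogeneous objects are starved of merges and end up over-segmented.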

4.1.3. The Edge Penalty Control Variable ε

An experiment was also performed with different ε values to evaluate the effect of the EP on segmentation. In this test, the initial scale and T were set to 20 and 300, respectively, and the number of segments was forced to 360. Figure 9a–c show the results when ε was set to 0, 0.1 and 0.5, respectively. When ε was set to 0, i.e., no EP was included in the merging process (Figure 9a), the CSVD was able to generate an acceptable result, particularly for objects with obvious spectral value differences. Therefore, the CSVD alone had the ability to produce good segmentation results. However, inaccurate boundaries appeared between some objects that did not have a perceptible spectral difference along their common boundary. When ε was set to 0.1 (Figure 9b), particular object pairs without clearly defined boundaries merged. This merging is especially obvious in the northern vegetated area. In contrast, many buildings with high edge strength in the settlement area remained partitioned. Figure 9d–f show zoomed-in areas of Figure 9a–c. In the white rectangles (Figure 9d,e), the boundary accuracy is significantly improved through the use of the EP. Within the black rectangle of Figure 9f, the edge strength between the farmland and its neighboring vegetated area was not very high. When the edge penalty was given the higher weight of 0.5, the farmland was merged with its neighboring object, although the average spectral difference between the two objects was evident. Therefore, an ε value that is too large may introduce undesired under-segmentation.

Figure 9. The influence of the edge penalty. Images (a–c) are segmentation results of R3. All three tests feature an initial scale of 20, a T of 300 and 360 segments. Only ε varies and was set to 0, 0.1 and 0.5 in (a–c), respectively.
(d–f) are zoomed-in areas of (a–c), respectively.

4.1.4. The Scale Parameter for MC

In this test, different MC thresholds, which represent segmentation scale parameters, were tested on R2, and the results are shown in Figure 10. The MC scale parameter was set to 30, 130 and 200 in

Figure 10a–c, respectively. The other parameters (initial scale, ε and T) remained the same, set to 20, 0.1 and 300, respectively. To illustrate more detail in the results, Figure 10d–f show zoomed-in areas of Figure 10a–c. The MC scale parameter of 30 partitioned the image into 1213 fragments (Figure 10a,d), and even objects with minor spectral differences were separated (note the farmland in Figure 10d). When the MC was increased to 130 (Figure 10b,e), the objects with similar spectral values were merged. When the MC reached 200 (Figure 10c,f), more objects were merged, and only those with a significant spectral difference from their neighbors were preserved.

Figure 10. The influence of different MC values. All three tests set the initial scale to 20, T to 300, and ε to 0.1. Only the MC varied and was set to 30, 130 and 200 in (a–c), respectively. Images (d–f) are zoomed-in areas of (a–c), respectively.

As in Section 4.1.2, a group of segmentation results was generated with different MC values ranging from 10 to 800, while the other parameters remained the same as in Figure 10. Figure 11a shows the reference data of R2, which include 13 small objects ( pixels), 7 medium-size objects ( pixels) and 4 large objects (more than 2000 pixels). Figure 11b–d display the plots of the over-, under- and well-segmentation rates, respectively. As the MC value increased, more and more objects were merged; therefore, the over-segmentation rates decreased (Figure 11b) while the under-segmentation rates increased (Figure 11c). Figure 11d shows the well-segmentation rates of small, medium and large objects and their sum. It can be seen that all 3 groups of objects were best segmented (highest well-segmentation rate) when the same MC value of 130 was used.
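The interplay between the spectral term and the edge penalty discussed in the two sections above can be illustrated with a toy merging criterion. The exact way the paper combines the CSVD and the EP is defined in its Methods section; here we assume, purely for illustration, a convex combination of a normalised CSVD term and the mean edge strength along the common boundary, so that ε = 0 reduces to the pure CSVD case of Figure 9a. All names and the normalisation to [0, 1] are our assumptions.

```python
def merging_cost(csvd_norm, edge_strength, eps):
    """Toy merging criterion: convex mix of spectral and edge terms.

    csvd_norm     -- CSVD value, normalised to [0, 1] (assumption)
    edge_strength -- mean gradient magnitude on the shared boundary,
                     normalised to [0, 1] (assumption)
    eps           -- weight of the edge penalty; 0 disables it
    """
    return (1.0 - eps) * csvd_norm + eps * edge_strength

# Two candidate pairs echoing the discussion of Figure 9:
# farmland vs. vegetation: clear spectral difference, weak boundary edge
# two buildings:           similar spectra, strong boundary edge
farmland = merging_cost(0.6, 0.1, 0.5)   # weak edge pulls the cost down
buildings = merging_cost(0.2, 0.9, 0.5)  # strong edge pushes the cost up
```

With ε = 0.5 the farmland pair (cost 0.35) becomes cheaper to merge than the building pair (cost 0.55) despite its evident spectral difference, qualitatively reproducing both effects reported for Figure 9: buildings with strong edges stay partitioned, while an over-weighted ε lets weak-edged farmland under-segment.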

Figure 11. Quantitative assessment results with different MC values on R2. (a) The reference objects of R2, including 13 small objects ( pixels), 7 medium-size objects ( pixels) and 4 large objects (more than 2000 pixels); (b–d) display the rates of over-segmented, under-segmented and well-segmented objects, respectively.

4.2. Comparison with eCognition Software Segmentation

eCognition is one of the most widely used commercial OBIA software packages in remote sensing. The multi-resolution segmentation algorithm in eCognition employs the local mutual best-fitting strategy and uses spectral and shape heterogeneity differences as merging criteria. For spectral heterogeneity, the eCognition segmentation applies an SVD-based criterion [22]. For shape heterogeneity, it uses compactness and smoothness. In the proposed algorithm, shape heterogeneity could also be incorporated into the merging criterion to make objects compact and smooth. However, this could jeopardize boundary accuracy [28] and fragment some elongated objects. Since this research focuses primarily on the improvement of spectral heterogeneity differences, the shape weight parameters for eCognition and our algorithm were both set to 0 in the comparison tests. Figure 12 shows the results of the two algorithms applied to R3. Figure 12a,b are the results of the eCognition segmentation with the scale parameter set to 200 and 700, respectively. Figure 12d,e and Figure 12g,h show two zoomed-in areas of Figure 12a,b. In Figure 12a, the upper-left reservoir (black area) was over-segmented, but the pools in the rectangle of (d) were correctly segmented. When the scale parameter was increased to 700, the pools became under-segmented, and the reservoir remained over-segmented (see Figure 12b,e). Therefore, it is impossible to simultaneously segment both the reservoir and the pools correctly using a single scale parameter in eCognition. However, this was not a problem for our algorithm.
Figure 12c shows our segmentation results with ε, T and the MC scale

parameter set to 0.1, 100 and 55, respectively. Figure 12f,i are the zoomed-in areas of Figure 12c. The proposed method is able to correctly segment both medium- and large-size objects, while also preserving the small objects. The eCognition segmentation also generated incorrect merges when the scale parameter was raised to 700. In Figure 12h, a small portion of the water body was incorrectly merged with the bank by eCognition, whereas the proposed algorithm correctly segmented the entire water body (Figure 12i). Consequently, the incorporation of the EP improved the accuracy of boundary delineation.

Figure 12. Comparison of the proposed algorithm and the eCognition segmentation applied to R3. Images (a,b) are the results of the eCognition segmentation with scale parameters set to 200 and 700, respectively. Image (c) is the result of the proposed segmentation with ε, T and MC set to 0.1, 100 and 55, respectively. Images (d–f) show a zoomed-in area of (a–c). Images (g–i) show another zoomed-in area of (a–c). The corresponding areas are shown in the white rectangles in (a–c).

For quantitative comparison, the same assessment method used in Section 4.1 was employed. Because the quantitative evaluations of R1, R2 and R3 are similar, we only show the results of R1 here. The reference objects are the same as those used in Section 4.1.2. Figure 13a,c,e display the plots of the over-, under- and well-segmentation rates of the eCognition segmentation results using different scales. For comparison, the corresponding plots of the segmentation results generated by the proposed algorithm with different MC scales are shown in Figure 13b,d,f. The other parameters (initial scale, ε and T) remained the same, set to 20, 0.1 and 100, respectively. Since both the eCognition segmentation and the proposed algorithm are region-merging methods, their over-segmentation rates decrease as the scale parameter increases (Figure 13a,b). However, their difference is also obvious. In the eCognition segmentation, the over-segmentation rate of larger objects is always greater than that of smaller objects (Figure 13a). A similar phenomenon can be found for the under-segmentation rate (Figure 13c,d). This is because f(n1,n2) in the SVD is smaller for smaller objects, which gives smaller objects higher priority to be merged, while larger objects are prone to remain over-segmented. Figure 13e shows the well-segmentation rates of the eCognition segmentation. It can be seen that the well-segmentation rate of small objects reaches its highest value when the scale parameter is small, i.e., 50, whereas the well-segmentation rates of the medium and large objects need a higher scale parameter to reach their peaks. Therefore, it is impossible for the eCognition segmentation to partition small-, medium- and large-sized objects well simultaneously using one scale parameter. However, the proposed algorithm can strike a good balance among objects of varied size with one scale parameter (around 60).
When the MC scale parameter of the proposed algorithm was set to 60, the well-segmentation rate of each group of objects was also much higher than the highest well-segmentation rate of the eCognition segmentation. For example, the well-segmentation rate of medium objects at scale 60 using the proposed algorithm (Figure 13f) is about 0.62, which is higher than that of the eCognition segmentation at any scale (the highest rate is 0.46). The most significant difference between our method and other algorithms is that we used the CSVD instead of the SVD in the MC. The CSVD reduces the influence of object size in the merging process compared with the SVD. In SVD-based algorithms, the MC of large object pairs with similar spectral values can be enormous, because the corresponding f(n1,n2) can be very large. In the CSVD, the influence of f(n1,n2) is constrained: the MC of large object pairs with similar spectral values can be limited to a small value, so merging can still be conducted on these object pairs. Therefore, the proposed algorithm can prevent large objects from being over-segmented. Likewise, the MC of small object pairs with distinct spectral differences can remain large enough to prevent them from being merged. Additionally, the introduction of an EP, if properly weighted, also improves the accuracy of object boundaries, although an over-weighted edge penalty may jeopardize the merging process. In our experiments, the minimum object size parameter for minor-object elimination was set to small values (20–50). It was observed that these values barely had any influence on the final results. A minimum object size parameter that is too large should be avoided because it may jeopardize the segmentation accuracy. Compared to SVD-based methods such as the eCognition segmentation, the proposed algorithm involves only two extra steps (the calculation of the CN and the EP), which do not add much computational complexity.
For example, in a speed test using R2, the proposed method took 6.63 seconds to partition R2 into 4800 segments, only 0.99 seconds more than the pure SVD-based method.
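The merging stage that dominates this runtime can be sketched as a greedy loop over the region adjacency graph. This is a simplified stand-in for the paper's RAG/NNG implementation: the Ward-style cost function, the lazy-deletion heap, and the helper names are our assumptions, and the NNG acceleration is omitted. Note that the globally cheapest RAG edge is, by construction, a mutually best-fitting pair.

```python
import heapq

def ward_cost(m1, n1, m2, n2):
    # SVD-style cost; the paper's CSVD/EP criterion would be swapped in here
    return n1 * n2 / (n1 + n2) * (m1 - m2) ** 2

def global_best_merge(means, sizes, edges, cost_fn, target):
    """Greedy global best-fitting merging on a region adjacency graph.

    Repeatedly merges the adjacent pair with the globally smallest cost
    until `target` regions remain.  Returns a list mapping each initial
    region to its surviving representative.
    """
    n = len(means)
    means, sizes = list(means), list(sizes)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    adj = {i: set() for i in range(n)}
    heap = []
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
        heapq.heappush(heap, (cost_fn(means[a], sizes[a], means[b], sizes[b]), a, b))

    count = n
    while count > target and heap:
        cost, a, b = heapq.heappop(heap)
        if find(a) != a or find(b) != b or b not in adj[a]:
            continue  # stale entry: an endpoint was already merged away
        if cost != cost_fn(means[a], sizes[a], means[b], sizes[b]):
            continue  # stale entry: region stats changed since it was pushed
        # merge b into a: update statistics, union-find and adjacency
        total = sizes[a] + sizes[b]
        means[a] = (means[a] * sizes[a] + means[b] * sizes[b]) / total
        sizes[a] = total
        parent[b] = a
        adj[a].discard(b)
        for c in adj.pop(b):
            adj[c].discard(b)
            if c != a:
                adj[c].add(a)
                adj[a].add(c)
        for c in adj[a]:  # refresh costs of edges touching the merged region
            heapq.heappush(heap, (cost_fn(means[a], sizes[a], means[c], sizes[c]), a, c))
        count -= 1
    return [find(i) for i in range(n)]
```

On a chain of four regions with means 10, 11, 50, 52, merging down to two regions groups the spectrally similar pairs together, which is the behaviour the MC threshold controls in the full algorithm.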

Figure 13. Quantitative comparison of the eCognition segmentation and the proposed method on R1. Plots on the left side of the figure display the over-, under- and well-segmentation rates of the eCognition segmentation using different scale parameters; the corresponding plots of the proposed method are displayed on the right. (a) Rate of over-segmented objects by the eCognition segmentation; (b) rate of over-segmented objects by the proposed method; (c) rate of under-segmented objects by the eCognition segmentation; (d) rate of under-segmented objects by the proposed method; (e) rate of well-segmented objects by the eCognition segmentation; (f) rate of well-segmented objects by the proposed method.

In the proposed method, five parameters must be set manually. This is a common challenge that most commercial segmentation software, such as eCognition and ENVI, also faces, because these parameters are often data dependent. Ideally, however, image segmentation software should provide automatic configuration and optimization of the parameters, and this will be a significant part of our future research. Additionally, because the top-to-bottom and left-to-right fast-scan method for initial segmentation is relatively simple, a small initial scale is needed to achieve good accuracy. Unfortunately, a small initial scale leads to excessive initial segments, which substantially increases the computational burden. Therefore, a superior initial segmentation method may be explored in further research.

5. Conclusions

This research proposes a new algorithm for image segmentation. The goal of the proposed method is to generate objects of varied size that are close to their real-world counterparts in a single scale layer. To achieve this objective, we introduced the constrained spectral variance difference (CSVD) and the edge penalty (EP) to form the merging criterion (MC), and adopted a global mutual best-fitting strategy implemented through region adjacency graphs (RAG) and nearest neighbor graphs (NNG). The significant novelty of the proposed algorithm is the design of the CSVD, which largely reduces the influence of object size. Based on both visual and quantitative evaluations, we demonstrated that the proposed algorithm was able to segment objects properly regardless of their size. When compared with results from the commercial eCognition software, the proposed method better preserves the entirety of large objects, while also preventing small objects from mingling with other objects. It can strike a good balance when partitioning varied-size objects using one MC scale parameter. In the quantitative comparison, the highest sum of the well-segmentation rates of small-, medium- and large-sized objects using the proposed algorithm reached 2.04, much higher than that of the eCognition segmentation (1.07) using one scale parameter. Moreover, the proposed method improved the accuracy of boundary delineation. Finally, compared to a pure SVD-based method, the proposed algorithm incurs less than 20 percent extra computational burden.

Acknowledgments

The authors specifically acknowledge the financial support of the National Key Technology R&D Program (Grant No. 2012BAH27B01) and the Program of International Science and Technology Cooperation (Grant No. 2011DFG72280).
The authors would like to thank the anonymous referees for their constructive comments.

Author Contributions

The idea of this research was conceived by Bo Chen. The experiments were carried out by Bo Chen and Hongyue Du. The manuscript was written and revised by Bo Chen, Fang Qiu and Bingfang Wu.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Cracknell, A.P. Synergy in remote sensing: What's in a pixel? Int. J. Remote Sens. 1998, 19.
2. Blaschke, T.; Strobl, J. What's wrong with pixels? Some recent developments interfacing remote sensing and GIS. GeoBIT/GIS 2001, 6.


More information

ECG782: Multidimensional Digital Signal Processing

ECG782: Multidimensional Digital Signal Processing Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu ECG782: Multidimensional Digital Signal Processing Spring 2014 TTh 14:30-15:45 CBC C313 Lecture 10 Segmentation 14/02/27 http://www.ee.unlv.edu/~b1morris/ecg782/

More information

Clustering Part 4 DBSCAN

Clustering Part 4 DBSCAN Clustering Part 4 Dr. Sanjay Ranka Professor Computer and Information Science and Engineering University of Florida, Gainesville DBSCAN DBSCAN is a density based clustering algorithm Density = number of

More information

Figure 1: Workflow of object-based classification

Figure 1: Workflow of object-based classification Technical Specifications Object Analyst Object Analyst is an add-on package for Geomatica that provides tools for segmentation, classification, and feature extraction. Object Analyst includes an all-in-one

More information

MAP ACCURACY ASSESSMENT ISSUES WHEN USING AN OBJECT-ORIENTED APPROACH INTRODUCTION

MAP ACCURACY ASSESSMENT ISSUES WHEN USING AN OBJECT-ORIENTED APPROACH INTRODUCTION MAP ACCURACY ASSESSMENT ISSUES WHEN USING AN OBJECT-ORIENTED APPROACH Meghan Graham MacLean, PhD Candidate Dr. Russell G. Congalton, Professor Department of Natural Resources & the Environment University

More information

Journal of Chemical and Pharmaceutical Research, 2015, 7(3): Research Article

Journal of Chemical and Pharmaceutical Research, 2015, 7(3): Research Article Available online www.jocpr.com Journal of Chemical and Pharmaceutical esearch, 015, 7(3):175-179 esearch Article ISSN : 0975-7384 CODEN(USA) : JCPC5 Thread image processing technology research based on

More information

MSEG: A GENERIC REGION-BASED MULTI-SCALE IMAGE SEGMENTATION ALGORITHM FOR REMOTE SENSING IMAGERY INTRODUCTION

MSEG: A GENERIC REGION-BASED MULTI-SCALE IMAGE SEGMENTATION ALGORITHM FOR REMOTE SENSING IMAGERY INTRODUCTION MSEG: A GENERIC REGION-BASED MULTI-SCALE IMAGE SEGMENTATION ALGORITHM FOR REMOTE SENSING IMAGERY Angelos Tzotsos, PhD Student Demetre Argialas, Professor Remote Sensing Laboratory National Technical University

More information

ORGANIZATION AND REPRESENTATION OF OBJECTS IN MULTI-SOURCE REMOTE SENSING IMAGE CLASSIFICATION

ORGANIZATION AND REPRESENTATION OF OBJECTS IN MULTI-SOURCE REMOTE SENSING IMAGE CLASSIFICATION ORGANIZATION AND REPRESENTATION OF OBJECTS IN MULTI-SOURCE REMOTE SENSING IMAGE CLASSIFICATION Guifeng Zhang, Zhaocong Wu, lina Yi School of remote sensing and information engineering, Wuhan University,

More information

Submerged Aquatic Vegetation Mapping using Object-Based Image Analysis with Lidar and RGB Imagery

Submerged Aquatic Vegetation Mapping using Object-Based Image Analysis with Lidar and RGB Imagery Submerged Aquatic Vegetation Mapping using Object-Based Image Analysis with Lidar and RGB Imagery Victoria Price Version 1, April 16 2015 Submerged Aquatic Vegetation Mapping using Object-Based Image Analysis

More information

Region Based Image Fusion Using SVM

Region Based Image Fusion Using SVM Region Based Image Fusion Using SVM Yang Liu, Jian Cheng, Hanqing Lu National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences ABSTRACT This paper presents a novel

More information

CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION

CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION 6.1 INTRODUCTION Fuzzy logic based computational techniques are becoming increasingly important in the medical image analysis arena. The significant

More information

Time Stamp Detection and Recognition in Video Frames

Time Stamp Detection and Recognition in Video Frames Time Stamp Detection and Recognition in Video Frames Nongluk Covavisaruch and Chetsada Saengpanit Department of Computer Engineering, Chulalongkorn University, Bangkok 10330, Thailand E-mail: nongluk.c@chula.ac.th

More information

A Noise-Robust and Adaptive Image Segmentation Method based on Splitting and Merging method

A Noise-Robust and Adaptive Image Segmentation Method based on Splitting and Merging method A Noise-Robust and Adaptive Image Segmentation Method based on Splitting and Merging method Ryu Hyunki, Lee HaengSuk Kyungpook Research Institute of Vehicle Embedded Tech. 97-70, Myeongsan-gil, YeongCheon,

More information

Region-based Segmentation

Region-based Segmentation Region-based Segmentation Image Segmentation Group similar components (such as, pixels in an image, image frames in a video) to obtain a compact representation. Applications: Finding tumors, veins, etc.

More information

Still Image Objective Segmentation Evaluation using Ground Truth

Still Image Objective Segmentation Evaluation using Ground Truth 5th COST 276 Workshop (2003), pp. 9 14 B. Kovář, J. Přikryl, and M. Vlček (Editors) Still Image Objective Segmentation Evaluation using Ground Truth V. Mezaris, 1,2 I. Kompatsiaris 2 andm.g.strintzis 1,2

More information

Automatic Grayscale Classification using Histogram Clustering for Active Contour Models

Automatic Grayscale Classification using Histogram Clustering for Active Contour Models Research Article International Journal of Current Engineering and Technology ISSN 2277-4106 2013 INPRESSCO. All Rights Reserved. Available at http://inpressco.com/category/ijcet Automatic Grayscale Classification

More information

A Method of weld Edge Extraction in the X-ray Linear Diode Arrays. Real-time imaging

A Method of weld Edge Extraction in the X-ray Linear Diode Arrays. Real-time imaging 17th World Conference on Nondestructive Testing, 25-28 Oct 2008, Shanghai, China A Method of weld Edge Extraction in the X-ray Linear Diode Arrays Real-time imaging Guang CHEN, Keqin DING, Lihong LIANG

More information

Segmentation and Grouping

Segmentation and Grouping Segmentation and Grouping How and what do we see? Fundamental Problems ' Focus of attention, or grouping ' What subsets of pixels do we consider as possible objects? ' All connected subsets? ' Representation

More information

Cluster Validity Classification Approaches Based on Geometric Probability and Application in the Classification of Remotely Sensed Images

Cluster Validity Classification Approaches Based on Geometric Probability and Application in the Classification of Remotely Sensed Images Sensors & Transducers 04 by IFSA Publishing, S. L. http://www.sensorsportal.com Cluster Validity ification Approaches Based on Geometric Probability and Application in the ification of Remotely Sensed

More information

An Efficient Character Segmentation Based on VNP Algorithm

An Efficient Character Segmentation Based on VNP Algorithm Research Journal of Applied Sciences, Engineering and Technology 4(24): 5438-5442, 2012 ISSN: 2040-7467 Maxwell Scientific organization, 2012 Submitted: March 18, 2012 Accepted: April 14, 2012 Published:

More information

Idea. Found boundaries between regions (edges) Didn t return the actual region

Idea. Found boundaries between regions (edges) Didn t return the actual region Region Segmentation Idea Edge detection Found boundaries between regions (edges) Didn t return the actual region Segmentation Partition image into regions find regions based on similar pixel intensities,

More information

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality

More information

APPLICABILITY ANALYSIS OF CLOTH SIMULATION FILTERING ALGORITHM FOR MOBILE LIDAR POINT CLOUD

APPLICABILITY ANALYSIS OF CLOTH SIMULATION FILTERING ALGORITHM FOR MOBILE LIDAR POINT CLOUD APPLICABILITY ANALYSIS OF CLOTH SIMULATION FILTERING ALGORITHM FOR MOBILE LIDAR POINT CLOUD Shangshu Cai 1,, Wuming Zhang 1,, Jianbo Qi 1,, Peng Wan 1,, Jie Shao 1,, Aojie Shen 1, 1 State Key Laboratory

More information

2D image segmentation based on spatial coherence

2D image segmentation based on spatial coherence 2D image segmentation based on spatial coherence Václav Hlaváč Czech Technical University in Prague Center for Machine Perception (bridging groups of the) Czech Institute of Informatics, Robotics and Cybernetics

More information

CHAPTER 5 OBJECT ORIENTED IMAGE ANALYSIS

CHAPTER 5 OBJECT ORIENTED IMAGE ANALYSIS 85 CHAPTER 5 OBJECT ORIENTED IMAGE ANALYSIS 5.1 GENERAL Urban feature mapping is one of the important component for the planning, managing and monitoring the rapid urbanized growth. The present conventional

More information

EDGE EXTRACTION ALGORITHM BASED ON LINEAR PERCEPTION ENHANCEMENT

EDGE EXTRACTION ALGORITHM BASED ON LINEAR PERCEPTION ENHANCEMENT EDGE EXTRACTION ALGORITHM BASED ON LINEAR PERCEPTION ENHANCEMENT Fan ZHANG*, Xianfeng HUANG, Xiaoguang CHENG, Deren LI State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing,

More information

Two Algorithms of Image Segmentation and Measurement Method of Particle s Parameters

Two Algorithms of Image Segmentation and Measurement Method of Particle s Parameters Appl. Math. Inf. Sci. 6 No. 1S pp. 105S-109S (2012) Applied Mathematics & Information Sciences An International Journal @ 2012 NSP Natural Sciences Publishing Cor. Two Algorithms of Image Segmentation

More information

Model-based segmentation and recognition from range data

Model-based segmentation and recognition from range data Model-based segmentation and recognition from range data Jan Boehm Institute for Photogrammetry Universität Stuttgart Germany Keywords: range image, segmentation, object recognition, CAD ABSTRACT This

More information

A Supervised and Fuzzy-based Approach to Determine Optimal Multi-resolution Image Segmentation Parameters

A Supervised and Fuzzy-based Approach to Determine Optimal Multi-resolution Image Segmentation Parameters 11-060.qxd 9/14/12 7:35 PM Page 1029 A Supervised and Fuzzy-based Approach to Determine Optimal Multi-resolution Image Segmentation Parameters Hengjian Tong, Travis Maxwell, Yun Zhang, and Vivek Dey Abstract

More information

Improving the Efficiency of Fast Using Semantic Similarity Algorithm

Improving the Efficiency of Fast Using Semantic Similarity Algorithm International Journal of Scientific and Research Publications, Volume 4, Issue 1, January 2014 1 Improving the Efficiency of Fast Using Semantic Similarity Algorithm D.KARTHIKA 1, S. DIVAKAR 2 Final year

More information

Fuzzy Region Merging Using Fuzzy Similarity Measurement on Image Segmentation

Fuzzy Region Merging Using Fuzzy Similarity Measurement on Image Segmentation International Journal of Electrical and Computer Engineering (IJECE) Vol. 7, No. 6, December 2017, pp. 3402~3410 ISSN: 2088-8708, DOI: 10.11591/ijece.v7i6.pp3402-3410 3402 Fuzzy Region Merging Using Fuzzy

More information

Color Image Segmentation Editor Based on the Integration of Edge-Linking, Region Labeling and Deformable Model

Color Image Segmentation Editor Based on the Integration of Edge-Linking, Region Labeling and Deformable Model This paper appears in: IEEE International Conference on Systems, Man and Cybernetics, 1999 Color Image Segmentation Editor Based on the Integration of Edge-Linking, Region Labeling and Deformable Model

More information

STUDYING THE FEASIBILITY AND IMPORTANCE OF GRAPH-BASED IMAGE SEGMENTATION TECHNIQUES

STUDYING THE FEASIBILITY AND IMPORTANCE OF GRAPH-BASED IMAGE SEGMENTATION TECHNIQUES 25-29 JATIT. All rights reserved. STUDYING THE FEASIBILITY AND IMPORTANCE OF GRAPH-BASED IMAGE SEGMENTATION TECHNIQUES DR.S.V.KASMIR RAJA, 2 A.SHAIK ABDUL KHADIR, 3 DR.S.S.RIAZ AHAMED. Dean (Research),

More information

Unsupervised Learning

Unsupervised Learning Outline Unsupervised Learning Basic concepts K-means algorithm Representation of clusters Hierarchical clustering Distance functions Which clustering algorithm to use? NN Supervised learning vs. unsupervised

More information

Image Segmentation. Srikumar Ramalingam School of Computing University of Utah. Slides borrowed from Ross Whitaker

Image Segmentation. Srikumar Ramalingam School of Computing University of Utah. Slides borrowed from Ross Whitaker Image Segmentation Srikumar Ramalingam School of Computing University of Utah Slides borrowed from Ross Whitaker Segmentation Semantic Segmentation Indoor layout estimation What is Segmentation? Partitioning

More information

IMAGINE Objective. The Future of Feature Extraction, Update & Change Mapping

IMAGINE Objective. The Future of Feature Extraction, Update & Change Mapping IMAGINE ive The Future of Feature Extraction, Update & Change Mapping IMAGINE ive provides object based multi-scale image classification and feature extraction capabilities to reliably build and maintain

More information

Linear Feature Extraction from Scanned Color

Linear Feature Extraction from Scanned Color Linear Feature Extraction from Scanned Color Maps with Complex Background Yun Yang, Xiao-ya An State Key Laboratory of Geo-Information Engineering, Xi an, 710054, China Xi an Research Institute of Surveying

More information

Image Processing. BITS Pilani. Dr Jagadish Nayak. Dubai Campus

Image Processing. BITS Pilani. Dr Jagadish Nayak. Dubai Campus Image Processing BITS Pilani Dubai Campus Dr Jagadish Nayak Image Segmentation BITS Pilani Dubai Campus Fundamentals Let R be the entire spatial region occupied by an image Process that partitions R into

More information

Saliency Detection in Aerial Imagery

Saliency Detection in Aerial Imagery Saliency Detection in Aerial Imagery using Multi-scale SLIC Segmentation Samir Sahli 1, Daniel A. Lavigne 2 and Yunlong Sheng 1 1- COPL, Image Science group, Laval University, Quebec, Canada 2- Defence

More information

identified and grouped together.

identified and grouped together. Segmentation ti of Images SEGMENTATION If an image has been preprocessed appropriately to remove noise and artifacts, segmentation is often the key step in interpreting the image. Image segmentation is

More information

Statistical Region Merging Algorithm for Segmenting Very High Resolution Images

Statistical Region Merging Algorithm for Segmenting Very High Resolution Images Statistical Region Merging Algorithm for Segmenting Very High Resolution Images S.Madhavi PG Scholar, Dept of ECE, G.Pulla Reddy Engineering College (Autonomous), Kurnool, Andhra Pradesh. T.Swathi, M.Tech

More information

Introduction to Medical Imaging (5XSA0) Module 5

Introduction to Medical Imaging (5XSA0) Module 5 Introduction to Medical Imaging (5XSA0) Module 5 Segmentation Jungong Han, Dirk Farin, Sveta Zinger ( s.zinger@tue.nl ) 1 Outline Introduction Color Segmentation region-growing region-merging watershed

More information

Semi-Automatic Building Extraction from High Resolution Imagery Based on Segmentation

Semi-Automatic Building Extraction from High Resolution Imagery Based on Segmentation Semi-Automatic Building Extraction from High Resolution Imagery Based on Segmentation N. Jiang a, b,*, J.X. Zhang a, H.T. Li a a,c, X.G. Lin a Chinese Academy of Surveying and Mapping, Beijing 100039,

More information

One category of visual tracking. Computer Science SURJ. Michael Fischer

One category of visual tracking. Computer Science SURJ. Michael Fischer Computer Science Visual tracking is used in a wide range of applications such as robotics, industrial auto-control systems, traffic monitoring, and manufacturing. This paper describes a new algorithm for

More information

Problem definition Image acquisition Image segmentation Connected component analysis. Machine vision systems - 1

Problem definition Image acquisition Image segmentation Connected component analysis. Machine vision systems - 1 Machine vision systems Problem definition Image acquisition Image segmentation Connected component analysis Machine vision systems - 1 Problem definition Design a vision system to see a flat world Page

More information

Segmentation and Modeling of the Spinal Cord for Reality-based Surgical Simulator

Segmentation and Modeling of the Spinal Cord for Reality-based Surgical Simulator Segmentation and Modeling of the Spinal Cord for Reality-based Surgical Simulator Li X.C.,, Chui C. K.,, and Ong S. H.,* Dept. of Electrical and Computer Engineering Dept. of Mechanical Engineering, National

More information

An Adaptive Threshold LBP Algorithm for Face Recognition

An Adaptive Threshold LBP Algorithm for Face Recognition An Adaptive Threshold LBP Algorithm for Face Recognition Xiaoping Jiang 1, Chuyu Guo 1,*, Hua Zhang 1, and Chenghua Li 1 1 College of Electronics and Information Engineering, Hubei Key Laboratory of Intelligent

More information

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight Three-Dimensional Object Reconstruction from Layered Spatial Data Michael Dangl and Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image

More information

Mobile Human Detection Systems based on Sliding Windows Approach-A Review

Mobile Human Detection Systems based on Sliding Windows Approach-A Review Mobile Human Detection Systems based on Sliding Windows Approach-A Review Seminar: Mobile Human detection systems Njieutcheu Tassi cedrique Rovile Department of Computer Engineering University of Heidelberg

More information

Image Segmentation Techniques for Object-Based Coding

Image Segmentation Techniques for Object-Based Coding Image Techniques for Object-Based Coding Junaid Ahmed, Joseph Bosworth, and Scott T. Acton The Oklahoma Imaging Laboratory School of Electrical and Computer Engineering Oklahoma State University {ajunaid,bosworj,sacton}@okstate.edu

More information

Introduction to digital image classification

Introduction to digital image classification Introduction to digital image classification Dr. Norman Kerle, Wan Bakx MSc a.o. INTERNATIONAL INSTITUTE FOR GEO-INFORMATION SCIENCE AND EARTH OBSERVATION Purpose of lecture Main lecture topics Review

More information

CHANGE DETECTION OF LINEAR MAN-MADE OBJECTS USING FEATURE EXTRACTION TECHNIQUE

CHANGE DETECTION OF LINEAR MAN-MADE OBJECTS USING FEATURE EXTRACTION TECHNIQUE CHANGE DETECTION OF LINEAR MAN-MADE OBJECTS USING FEATURE EXTRACTION TECHNIQUE S. M. Phalke and I. Couloigner Department of Geomatics Engineering, University of Calgary, Calgary, Alberta, Canada T2N 1N4

More information

Parameter Optimization in Multi-scale Segmentation of High Resolution Remotely Sensed Image and Its Application in Object-oriented Classification

Parameter Optimization in Multi-scale Segmentation of High Resolution Remotely Sensed Image and Its Application in Object-oriented Classification Parameter Optimization in Multi-scale Segmentation of High Resolution Remotely Sensed Image and Its Application in Object-oriented Classification Lan Zheng College of Geography Fujian Normal University

More information

CS 664 Segmentation. Daniel Huttenlocher

CS 664 Segmentation. Daniel Huttenlocher CS 664 Segmentation Daniel Huttenlocher Grouping Perceptual Organization Structural relationships between tokens Parallelism, symmetry, alignment Similarity of token properties Often strong psychophysical

More information

URBAN IMPERVIOUS SURFACE EXTRACTION FROM VERY HIGH RESOLUTION IMAGERY BY ONE-CLASS SUPPORT VECTOR MACHINE

URBAN IMPERVIOUS SURFACE EXTRACTION FROM VERY HIGH RESOLUTION IMAGERY BY ONE-CLASS SUPPORT VECTOR MACHINE URBAN IMPERVIOUS SURFACE EXTRACTION FROM VERY HIGH RESOLUTION IMAGERY BY ONE-CLASS SUPPORT VECTOR MACHINE P. Li, H. Xu, S. Li Institute of Remote Sensing and GIS, School of Earth and Space Sciences, Peking

More information

6. Object Identification L AK S H M O U. E D U

6. Object Identification L AK S H M O U. E D U 6. Object Identification L AK S H M AN @ O U. E D U Objects Information extracted from spatial grids often need to be associated with objects not just an individual pixel Group of pixels that form a real-world

More information

Image Segmentation Techniques

Image Segmentation Techniques A Study On Image Segmentation Techniques Palwinder Singh 1, Amarbir Singh 2 1,2 Department of Computer Science, GNDU Amritsar Abstract Image segmentation is very important step of image analysis which

More information

Analysis of Image and Video Using Color, Texture and Shape Features for Object Identification

Analysis of Image and Video Using Color, Texture and Shape Features for Object Identification IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661,p-ISSN: 2278-8727, Volume 16, Issue 6, Ver. VI (Nov Dec. 2014), PP 29-33 Analysis of Image and Video Using Color, Texture and Shape Features

More information