Multi-focus image fusion based on block matching in 3D transform domain


Journal of Systems Engineering and Electronics, Vol. 29, No. 2, April 2018

Multi-focus image fusion based on block matching in 3D transform domain

YANG Dongsheng 1,2, HU Shaohai 1,2,*, LIU Shuaiqi 3, MA Xiaole 1,2, and SUN Yuchao 4
1. Institute of Information Science, Beijing Jiaotong University, Beijing, China;
2. Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing, China;
3. College of Electronic and Information Engineering, Hebei University, Baoding, China;
4. The Third Research Institute, China Electronics Technology Group Corporation, Beijing, China

Abstract: Fusion methods based on multi-scale transforms have become the mainstream of pixel-level image fusion. However, most of these methods cannot fully exploit the spatial domain information of the source images, which leads to image degradation. This paper presents a fusion framework based on block-matching and 3D (BM3D) multi-scale transform. The algorithm first divides the image into blocks and groups these 2D image blocks into 3D arrays by their similarity. It then uses a 3D transform, consisting of a 2D multi-scale transform and a 1D transform, to turn the arrays into transform coefficients, and the obtained low- and high-frequency coefficients are fused by different fusion rules. The final fused image is obtained from a series of fused 3D image block groups after the inverse transform, using an aggregation process. In the experimental part, we comparatively analyze some existing algorithms and the use of different transforms, e.g. the non-subsampled Contourlet transform (NSCT) and the non-subsampled Shearlet transform (NSST), in the 3D transform step. Experimental results show that the proposed fusion framework can not only improve the subjective visual effect, but also obtain better objective evaluation criteria than state-of-the-art methods.

Keywords: image fusion, block matching, 3D transform, block-matching and 3D (BM3D), non-subsampled Shearlet transform (NSST).
DOI: /JSEE

Manuscript received April 05. *Corresponding author. This work was supported by the National Natural Science Foundation of China ( ; ), the Fundamental Research Funds for the Central Universities (2016YJS039), the Natural Science Foundation of Hebei Province (F ; F ), the Natural Social Foundation of Hebei Province (HB15TQ015), the Science Research Project of Hebei Province (QN ; ZC ), and the Natural Science Foundation of Hebei University ( ).

1. Introduction

Image fusion [1] refers to the process of obtaining a new, single synthesized image from two or more images. The final fused image can provide a more comprehensive, accurate and reliable image description, which is widely used in other image processing and computer vision fields. Pixel-level fusion methods can be broadly classified into two groups [2]: spatial domain and transform domain fusion. Currently, the most frequently used methods are based on multi-scale transforms, where fusion is performed on several different scales and directions independently. The most typical transform is the discrete wavelet transform (DWT) [3], which is widely used because of its favorable time-frequency characteristics. After DWT, a series of improved multi-scale transforms took the stage, e.g., Ridgelet [4], Curvelet [5], and Contourlet [6]. Among these transforms, the non-subsampled Contourlet transform (NSCT) [7,8] has been widely adopted owing to its multi-resolution and multi-directional properties. With the introduction of the non-subsampled Shearlet transform (NSST) [9], there is no longer a limit on the number of decomposition directions, so both the effectiveness and the efficiency of image fusion have been enhanced. Usually, the transform domain fusion framework is relatively fixed. The first step is the multi-scale transform. The coefficients obtained from this step can be divided into a low-frequency component and several high-frequency components.
These components are fused by different fusion rules, because the low- and high-frequency components represent approximate and detailed information respectively. The final fused image is constructed by the inverse transform of all the composite coefficients. With the introduction of more transform domain methods, the fusion framework has been greatly enriched. For example, the intensity, hue and saturation (IHS) transform was widely introduced into fusion frameworks to achieve color image fusion [10], and the pulse coupled neural network (PCNN) structure has been combined into the selection of coefficients [11]. However, in these frameworks, source images are directly transformed into the transform

domain, which leads to the loss of specific characteristics in the spatial domain, such as edge contours and spatial similarity. Since the spatial information cannot be further used, this usually causes distortion or artificial textures in fused images. The proposed framework's improvement mainly shows in two aspects. First, prior to the transform, it adds some spatial domain pre-processing steps, such as blocking and grouping. Second, it changes the 2D multi-scale geometric transform into a new type of 3D transform. The other steps are basically the same as in the existing framework, except for an aggregation procedure after the inverse transform. This structure is partially similar to that of an image de-noising algorithm, block-matching and 3D filtering (BM3D) [12,13]. So far, the BM3D algorithm is one of the most effective image de-noising algorithms and has been widely used in image and video noise reduction. The main reason for its excellent performance is that it makes good use of the similarity of the noise signal across similar blocks. Since it achieves a better separation of the noise signal in similar regions by matching and grouping image blocks, the algorithm performs better than traditional ones. Thus, spatial domain information can assist transform domain processes to achieve outstanding performance. With the BM3D-like algorithm structure, i.e., introducing blocking and grouping steps before switching to the transform domain, the similarity in the spatial domain can be fully taken advantage of. The transforms adopted here are: the 3D transform with the 2D discrete cosine transform (DCT) (BMDCT), with the 2D DWT (BMDWT), with the 2D NSCT (BMNSCT), and with the 2D NSST (BMNSST). As for fusion rules, the framework mainly adopts max and mean, image region energy [14] and choose max intra-scale [15]. This paper is organized as follows.
In Section 2, we introduce the proposed fusion framework in detail. The specific procedures of blocking, matching and grouping can be found in Section 3, and the 3D transform and the other transform domain processes are given in Section 4. Experimental results and analysis are presented in Section 5. Finally, Section 6 contains the conclusions.

2. Fusion framework

Spatial and transform domain techniques are the two major pixel-level techniques. In terms of structure, spatial domain techniques are usually haphazard and changeable, because spatial domain methods usually combine the input images in a linear or non-linear fashion using weighted average or variance based algorithms [16]. However, the structure of most transform domain methods is relatively fixed, because much of the innovation happens in the transform domain, where the actual fusion takes place. The main motivation for moving into the transform domain is to work within a framework where the salient features are more clearly depicted.

2.1 Improvement of existing frameworks

A typical transform domain fusion framework can be described as in Fig. 1. Both source images A and B are first transformed into a transform domain (e.g., the DWT domain). Then each of the input images can be decomposed into a low-frequency coefficient and a series of high-frequency coefficients. The fused low- and high-frequency components are obtained from their coefficients in both images through different fusion rules. The final fused image F is obtained by taking the inverse transform of the composite representation.

Fig. 1 Block diagram of a typical fusion framework of existing transform domain image fusion algorithms

Such a transform domain fusion framework has been widely used. In most cases, the innovations are mainly reflected in two aspects: one is the innovation of transform methods, i.e., replacing the transform; the other is the creation of new fusion rules.
However, the transform coefficients are operated on by the fusion rules directly, which easily

leads to distortion and artificial texture. It is worth noting that these problems are not only a matter of the multi-scale transform itself, but also a deficiency of not making use of spatial information. Thus, the improvement should not only focus on modifying transforms, but also on introducing more spatial features. By using similarity in the spatial domain, e.g., the block distance, we can group image patches with salient similar features into a series of 3D arrays. Therefore, the process in the transform domain can have more specific and suitable options for function parameters. These improvements enhance the effect of each block and thus promote the overall effect. The proposed framework can be seen in Fig. 2. The main improvements are: blocking the input images A and B by a fixed sliding window; setting a fixed search area based on the current block and matching blocks by their similarities; grouping the chosen blocks and arranging them into a 3D array; transforming the 3D arrays into the transform domain by using a 3D transform. After the fusion in the transform domain, the coefficients are transformed back by the inverse 3D transform; the 3D arrays are separated and the image blocks are put back to their original positions; the overlapped pixels are then recalculated by the aggregation algorithm to get the fused image F.

Fig. 2 Block diagram of the proposed image fusion framework

2.2 Comparison with BM3D

As mentioned previously, the idea of BM3D is referenced in the proposed framework. Before gathering coefficients in the transform domain, a procedure from BM3D is added; analogously, the aggregation step is adopted after the inverse transform. These structures are boxed by blue dashed lines in Fig. 2. Compared with BM3D in image de-noising, there are many differences within the proposed framework.
The BM3D algorithm can be divided into two sections. The first is the basic estimate: the input noisy image is processed by successively extracting reference blocks from it, and for each block the algorithm finds its similar blocks and stacks them together to form a 3D array [13]. After the 3D transform, a threshold is used to help reduce the noise. The second step is the final estimate. The previous result is grouped again by the same process. Then a 3D transform is applied to both groups (one from the noisy image, the other from the basic estimate), and Wiener filtering is performed on the noisy one using the energy spectrum of the basic estimate as the true (pilot) energy spectrum [12]. Finally, all the blocks obtained after the inverse transform are returned to their original positions. From the above description, the overall structure of the two estimate steps in BM3D is basically the same; thus, there is no need to adopt a second step in image fusion. Besides, as the process differs between image fusion and de-noising, the Wiener filter is not useful for the current coefficients. To further optimize the coefficients, a method of threshold shrinkage is adopted here. In the subsequent sections, the spatial domain processing, the 3D transform and the aggregation will all be introduced in detail.

3. Spatial domain processes

The proposed framework can be divided into two parts: the spatial domain section and the transform domain one; moreover, the spatial domain process can be further divided into blocking and grouping. For blocking, a sliding-window method is adopted. Grouping means stacking the 2D blocks with high similarity together and forming them into 3D arrays, where the similarity is measured by calculating the block distance. These steps turn the 2D images into 3D image arrays, and it is a kind of potential similarity (correlation, affinity, etc.) across the blocks that is used in the arrays.
In this way, a better estimate of the distinct image can be obtained through the data with this potential relevance. The approach that groups low-dimensional data into high-

dimensional data sets enables the use of high-dimensional filtering to process these data; hence it is called collaborative filtering [12].

3.1 Blocking and matching

According to a certain window size and a fixed sliding step, a series of image blocks can be obtained. Then we filter out some of the blocks within the search area in accordance with a pre-selected searching rule and threshold. The applied algorithm process for each image block is illustrated in Fig. 3(a). The white boxed area, which is enlarged on the right, is the search area that uses the reference block (marked R) as its center, and the similar blocks (marked S) are pointed out by black dashed arrows. The block matching process is as follows:
(i) Select the current block as the reference one;
(ii) Draw a fixed search area centered on the reference block (for image blocks of a fixed pixel size, a correspondingly larger search area is reasonable);
(iii) For each image block contained, denominated a candidate block, calculate the distance metric between the candidate and the reference block;
(iv) List the distances of all blocks in the region in ascending order; the least one is defined as the most similar;
(v) Compare the distance of each block with a pre-set threshold; all blocks with distance less than the threshold are defined as similar;
(vi) Arrange these similar blocks into an array sorted by their similarity.

Fig. 3 Schematic diagram of the procedures in spatial domain

3.2 Grouping by similarity

To better reflect similarity, a typical regulation is to use the inverse of some distance measure. For image blocks, the smaller the distance to the reference block, the more similar the blocks are. Typical distance measures are norms, like the Euclidean distance (p = 2) [17], the l_p-norm used in de-noising of different signal fragments, and the Kullback-Leibler distance used in texture detection [18].
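The matching steps (i)-(vi) above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the block size, search-area size and threshold below are illustrative placeholders, not the paper's settings.

```python
import numpy as np

def match_blocks(img, x_ref, block=8, search=9, tau_max=1.0):
    """Find blocks similar to the reference block at x_ref (steps (i)-(vi))."""
    H, W = img.shape
    r0, c0 = x_ref
    ref = img[r0:r0 + block, c0:c0 + block].astype(np.float64)
    half = search // 2
    matches = []
    # (ii) fixed search area centred on the reference block, clipped at borders
    for r in range(max(0, r0 - half), min(H - block, r0 + half) + 1):
        for c in range(max(0, c0 - half), min(W - block, c0 + half) + 1):
            cand = img[r:r + block, c:c + block].astype(np.float64)
            # (iii) normalised squared l2 distance between candidate and reference
            d = np.sum((ref - cand) ** 2) / block ** 2
            # (v) keep only candidates below the pre-set threshold
            if d <= tau_max:
                matches.append((d, (r, c)))
    # (iv)/(vi) sort by distance so the most similar block comes first
    matches.sort(key=lambda t: t[0])
    return [pos for _, pos in matches]

# toy image with two flat regions -> many mutually similar blocks on the left
img = np.zeros((32, 32))
img[:, 16:] = 255.0
sim = match_blocks(img, (0, 0), block=8, search=9, tau_max=1.0)
```

Since the reference block has zero distance to itself, it always appears first in the returned list.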
In fact, similar-block selection approaches are diverse, and matching can be considered a clustering or classification approach. A series of works systematically introduces many classic methods, e.g., K-means clustering [19], fuzzy clustering [20], and vector quantization [21]. These classification approaches produce no cross terms, because their idea of classification is based on segmentation or partitioning; in other words, one block can only belong to a specific group. To construct such disjoint groups whose elements have high mutual similarity, conventional methods require many recursive computation cycles, which demands vast computing power. Moreover, such a screening method leads to an unequal distribution of fragments, because a fragment close to the centroid will be more similar to it than a farther one. This is often the case even in the exceptional circumstance that all fragments are equidistantly distributed. The proposed matching method can be implemented as an intersection-allowing classification of mutually similar signal fragments. This is done by a pairwise test of the similarity between the reference and candidate blocks. In such a classification, the similarity measure can be regarded as the classification function, and the chosen reference block serves as the centroid of the group. Thereby, the approach avoids the problem of disjoint groups. The grouping and matching process can be seen in Fig. 3(b): to complete the work for the whole image, we traverse all the blocks using the same process, and each block is used as the reference block in turn to find its similar blocks.

3.3 Similarity measurement

As mentioned above, the proposed framework adopts the l_p-norm as the similarity measurement. Here, the two input images A and B are processed by the same steps in the spatial domain;

therefore, we only use A as an example for illustration, and the final fused image is denoted by F.

3.3.1 Modeling and notation

For image A, we denote by x a 2D spatial coordinate whose value belongs to the 2D image domain X ⊂ Z^2. Thus, any fixed-size N × N block split out of A can be expressed as A_x, where x represents the coordinate of the top-left corner of the image block, i.e., A_x is the block of image A anchored at location x. Image block groups are represented by sets, denoted by a bold-face capital letter with a subscript expressing the set of all coordinates in the group. For example, A_S represents a 3D array composed of blocks A_x with positions x ∈ S ⊆ X. In addition, we define d as the calculated distance measure between blocks. To distinguish different parameter selections, we use the superscript "ideal" to denote the distance in an ideal condition, as in d_ideal, and "real" for the practical situation, as in d_real.

3.3.2 Block distance

As introduced in Section 3.1, the block distance (dissimilarity) is a pairwise calculation between reference and candidate blocks. Thus, we define A_xR as the reference block and A_xC as the currently selected candidate block, where x_R ∈ X, x_C ∈ X. The dissimilarity between blocks is determined by the given reference block and a fixed threshold: a block is deemed similar when its distance to the reference block is smaller than the threshold. The distance is obtained through an l2-norm calculation between the blocks. In an ideal situation, the block distance of the input defocused image A should be determined by the corresponding blocks in the true image T, that is, A = T.
Therefore, it can be calculated as

d_ideal(A_xR, A_xC) = ||T_xR − T_xC||_2^2 / N^2    (1)

where ||·||_2 denotes the l2-norm, and T_xR and T_xC denote the blocks at the corresponding locations of the reference and candidate block in the true image, with x ∈ X in T. Obviously, the true image T is unavailable, and as the best estimate of T, the fused image F = T is also unknown in advance. Therefore, the distance can only be obtained from A_xR and A_xC themselves, as

d_real(A_xR, A_xC) = ||A_xR − A_xC||_2^2 / N^2.    (2)

However, such a calculation does not consider the difference between the ideal and the real distance. If the gap between d_ideal and d_real does not exceed the threshold range, it will not affect the grouping result; but if the difference exceeds the boundary, a grouping error occurs. In practice, a too small block size or sliding step, or a search area that falls exactly in the defocused region, will cause a difference between d_ideal and d_real. In such a case, a block may still be matched as similar because the real distance is smaller than the threshold even though the distance in the true image has already crossed it. Analogously, a block may be excluded as dissimilar although the ideal distance is smaller than the threshold. To address this, we employ a coarse 2D linear prefilter [12] to preprocess the two original blocks. Such prefiltering applies a normalized 2D linear transform to both blocks, and a threshold is then applied to the obtained coefficients. This approach reduces false positives, and the final distance is calculated as

d(A_xR, A_xC) = ||f_2D(A_xR) − f_2D(A_xC)||_2^2 / N^2    (3)

where f_2D(·) represents the 2D linear prefilter. As mentioned before, the result calculated by the d-distance (3) is presented in the form of a set.
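The prefiltered distance of Eq. (3) can be sketched as follows. The paper does not specify f_2D, so the 2D DCT and the coarse threshold value below are illustrative assumptions standing in for the "normalized 2D linear transform".

```python
import numpy as np
from scipy.fft import dctn

def prefiltered_distance(blk_a, blk_b, tau_2d=10.0):
    """d-distance of Eq. (3): compare coarse-thresholded 2D transform
    coefficients instead of raw pixels. The DCT and tau_2d are assumptions."""
    n = blk_a.shape[0]
    ca = dctn(blk_a.astype(np.float64), norm='ortho')
    cb = dctn(blk_b.astype(np.float64), norm='ortho')
    # coarse prefilter: zero out small coefficients before comparing
    ca[np.abs(ca) < tau_2d] = 0.0
    cb[np.abs(cb) < tau_2d] = 0.0
    return np.sum((ca - cb) ** 2) / n ** 2

blk = np.full((8, 8), 50.0)
d_same = prefiltered_distance(blk, blk)          # identical blocks
d_diff = prefiltered_distance(blk, blk + 100.0)  # large brightness offset
```

Identical blocks give a distance of zero, while blocks differing in their surviving coefficients give a strictly positive distance.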
Therefore, the set of the coordinates x of all blocks similar to the reference block A_xR can be expressed as

S_xR = {x ∈ X : d(A_xR, A_x) ≤ τ_max}    (4)

where τ_max is the threshold on the maximum d-distance for two blocks to be considered similar. The selection of τ_max is based on the acceptable value of the ideal difference for natural images. Since the reference block itself is also in the search area, we have d(A_xR, A_xR) = 0, i.e., each reference block A_xR has at least one similar block (itself), so S_xR is never empty. After obtaining the coordinate set S_xR, we can use the similar blocks A_x, x ∈ S_xR, to form a 3D array of size N × N × N_S, denoted A_SxR, where N_S denotes the number of similar blocks. We thereby obtain a collection of 3D arrays, as shown in Fig. 3(b). The length of each 3D array in the collection is not fixed, but is decided by the number of similar blocks N_S.

3.3.3 Block-matching effect

In practical applications, for natural images it is suitable to set the sliding window to 8 × 8 pixels or somewhat larger, so that the blocks contain enough local edge features. Besides, to reduce blocking effects, the step length of the sliding window is typically

less than the window size. Fig. 4 shows the selection of similar blocks. In each image, the red translucent square is the reference block, and the green translucent squares represent the similar blocks found in the search area. The two lines use different window sizes; the second line uses 8 × 8 pixel blocks.

Fig. 4 Illustration of the selection of similar blocks in natural images

It can be found that similar details exist extensively in natural images, in the form of small edge segments. In addition, the similar blocks are scattered around the same focal plane or the junction of different focal planes. This can assist the subsequent algorithm to further integrate information, thus optimizing the fusion effect. The selection of similar blocks between multi-focus image groups can be seen in Fig. 5. For the two groups of images (Clock and Pepsi), we search for similar blocks both at a focused position of one image and at the corresponding position of the defocused image. The first line of the illustration shows the focused position and the second line its defocused counterpart. All the images have gone through a rigorous registration process. The left and right image of each group use different window sizes; the left one is 8 × 8 pixels.

Fig. 5 Illustration of the selection of similar blocks in multi-focus image groups

As can be seen, the selection of similar blocks is approximately the same between the focused region of one image and the defocused region of its counterpart. Besides, since the similarity measures are much closer, the defocused image usually has more similar blocks. Therefore, whether the current group represents a focused region can be determined by comparing the number of similar blocks of the group and of its counterpart. More details can be obtained by using this as a guide for subsequent work, especially for the fusion rule design.
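The grouping of Eq. (4) and the focus heuristic of Section 3.3.3 can be sketched as follows; a minimal NumPy illustration under the stated assumptions, not the authors' code.

```python
import numpy as np

def stack_group(img, coords, block=8):
    """Form the 3D array of Eq. (4)'s coordinate set S_xR: stack the
    similar blocks along a third axis, giving shape (N_S, N, N)."""
    return np.stack([img[r:r + block, c:c + block].astype(np.float64)
                     for r, c in coords])

def likely_focused(n_sim_a, n_sim_b):
    """Section 3.3.3 heuristic: the defocused image usually collects more
    similar blocks, so the group with fewer matches marks the focused one."""
    return n_sim_a <= n_sim_b

img = np.arange(16.0).reshape(4, 4)
group = stack_group(img, [(0, 0), (1, 1)], block=2)  # two 2x2 blocks
```

The group length N_S varies from reference block to reference block, which is why the arrays are stacked per group rather than into one fixed tensor.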

4. Transform domain processes

After the grouping step, we obtain the transform coefficients through a 3D transform. The high- and low-frequency components are then processed by different rules, since they represent detail and approximation information respectively. Since there will be overlap if we return the fused blocks to their original positions, an aggregation process is used to calculate the final pixel value at each position.

4.1 3D transform

The 3D transform is a combination of 2D and 1D transforms: for each 2D image block in the 3D array we adopt a traditional 2D transform, followed by a 1D transform on each column of the array (i.e., along the third dimension). This paper uses several 2D transforms to present a comparative analysis. For the 1D transform, we adopt the DCT, since it can reduce the number of significant coefficients.

4.1.1 Theory of 3D transform

Collaborative filtering, as mentioned in Section 3, is very effective for multi-focus images because of the use of spatial correlation in the filtering and the sparsity created by the shrinkage after the transform. These processes reduce the uncertainty of the fused image and create the possibility of optimizing the result. The correlation here means both the correlation within a single image block (intra-block correlation) and within the whole group (intra-group correlation). The intra-block correlation refers to the connection between different pixel values in one block. The intra-group correlation reflects the similarity relevance of the blocks and their corresponding spatial regions. The reason for adding a 1D transform along the third dimension is to further optimize the coefficients. For the n blocks in one group, using only a 2D transform produces nλ similar coefficients, where λ denotes the number of coefficients of one block.
Such a method is not only inefficient but also fails to use the intra-group similarity in the transform domain. If we add a 1D transform across the transformed blocks (i.e., applying a 1D transform to each column of pixels at the same position across the blocks), only about λ significant coefficients approximately represent the results of the entire group. The coefficients after the 3D transform should be shrunk before the fusion rules are applied. To facilitate subsequent calculations, we use a hard-threshold operator to rapidly filter out the significant values. For the 3D transform of one grouped array, the process can be divided into a 2D transform T_2D(·) followed by a 1D transform T_1D(·) across all the blocks. The process is presented in Fig. 6, wherein the red cross arrow marks the unfolding surfaces of the 2D transform and the one-way arrow indicates the direction of the 1D transform.

Fig. 6 Schematic diagram of 3D transform

Since the set of grouped blocks is denoted A_SxR, its 3D transform coefficients A3D_SxR can be expressed as

A3D_SxR = T_3D(A_SxR) = T_1D(A2D_SxR)    (5)

where T_3D(·) represents the 3D transform and A2D_SxR denotes the set of coefficients after the 2D transform, that is

A2D_SxR = {Â_x : x ∈ S}    (6)

where Â_x denotes the 2D transform coefficients of the intra-group block A_x, i.e., Â_x = T_2D(A_x). In the transform domain, the first step is threshold shrinkage: the coefficients are processed through a hard-threshold filter f_ht(·), and then for the two groups of coefficients we use different fusion rules on the high- and low-frequency components respectively. Generally, the hard-threshold filter may be defined as

f_ht(τ, τ_ht) = τ, if |τ| > τ_ht; 0, otherwise    (7)

where τ represents the current input coefficient and τ_ht is the fixed threshold parameter.
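The separable 3D transform of Eqs. (5)-(6) and the hard threshold of Eq. (7) can be sketched as follows. For concreteness the sketch uses a 2D DCT for T_2D (the paper also compares DWT, NSCT and NSST) and a 1D DCT for T_1D.

```python
import numpy as np
from scipy.fft import dctn, idctn, dct, idct

def transform_3d(group):
    """T_3D of Eq. (5): 2D DCT on every block (T_2D), then a 1D DCT along
    the stacking axis (T_1D). `group` has shape (n_blocks, N, N)."""
    coeffs_2d = dctn(group, axes=(1, 2), norm='ortho')   # T_2D per block
    return dct(coeffs_2d, axis=0, norm='ortho')          # T_1D across blocks

def inverse_3d(coeffs):
    """Inverse of transform_3d: undo the 1D transform, then the 2D one."""
    tmp = idct(coeffs, axis=0, norm='ortho')
    return idctn(tmp, axes=(1, 2), norm='ortho')

def hard_threshold(coeffs, tau_ht):
    """f_ht of Eq. (7): keep coefficients whose magnitude exceeds tau_ht."""
    out = coeffs.copy()
    out[np.abs(out) <= tau_ht] = 0.0
    return out

rng = np.random.default_rng(0)
group = rng.standard_normal((4, 8, 8))
rec = inverse_3d(transform_3d(group))   # round trip recovers the group
```

With orthonormal transforms the round trip is exact up to floating-point error, so the shrinkage step is the only lossy operation in the chain.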
The transform coefficients used by the fusion rules can then be expressed as

Ã3D_SxR = f_ht(A3D_SxR, τ_ht),  B̃3D_SxR = f_ht(B3D_SxR, τ_ht)    (8)

where Ã and B̃ denote the coefficients after the hard-threshold operation. To achieve a higher peak signal to noise ratio (PSNR) without affecting image clarity, the value of τ_ht follows the practical applications in [12,13] for 256-level gray images.

4.1.2 NSST

For the 2D transform, we compare several widely used transforms, including the 2D-DCT, DWT and NSCT in the existing

fusion, as well as the most effective and efficient transform, NSST, which is mainly used in this paper. The following is a brief introduction to it. The NSST is constructed through affine systems with composite dilations. When the dimension n = 2, the affine systems can be defined as follows:

M_DS(ψ) = {ψ_{j,l,k}(x) = |det D|^{j/2} ψ(S^l D^j x − k) : j, l ∈ Z, k ∈ Z^2}    (9)

where ψ ∈ L^2(R^2), and D and S are both 2 × 2 invertible matrices with |det S| = 1. The matrix D is known as the dilation matrix, while S stands for the shear matrix. If Σ_{j,l,k} |⟨f, ψ_{j,l,k}⟩|^2 = ||f||^2 holds for any f ∈ L^2(R^2), then M_DS(ψ) forms a tight frame, which means it is compactly supported, and the elements of M_DS(ψ) are called composite wavelets. We call the elements of M_DS(ψ) Shearlets only when D and S are defined as follows:

D = [a 0; 0 a^{1/2}],  S = [1 s; 0 1].    (10)

Usually we use a = 4 and s = 1, that is, D = [4 0; 0 2] and S = [1 1; 0 1]. The discretization of the NSST consists of two phases: multi-scale and multi-direction decomposition. For multi-scale decomposition, the NSST adopts the non-subsampled pyramid (NSP). By using the NSP, one low-frequency sub-band image and k high-frequency sub-band images can be obtained from the source image through k levels of decomposition, in which each level decomposes out both a low- and a high-frequency sub-image, and every subsequent decomposition is applied iteratively to the low-frequency sub-image of the previous level. The NSST decomposition process is illustrated in Fig. 7, where SF is the abbreviation for shearing filter. The multi-direction decomposition is realized through a modified SF in the NSST.
Roughly speaking, the conventional SF is realized by translating a window function on the pseudo-polar grid, while the non-subsampled SF maps the pseudo-polar grid back to the Cartesian grid, so the entire process can be completed directly through 2D convolution. The support zone of the NSST is a pair of trapeziform zones of size 2^{2j} × 2^j, as shown in Fig. 8.

Fig. 7 Schematic diagram of multi-scale decomposition of NSST

Fig. 8 Trapeziform frequency support zones of an SF

4.2 Fusion rules

Since this paper focuses on the study of the fusion framework, we only try some common fusion rules to integrate with the entire framework, and we do not make any in-depth discussion of the influence of different fusion rules. As described before, the main purpose of the 1D column transform is to utilize correlation, to reduce the number of significant coefficients and to facilitate calculation; it does not destroy the positional distribution of the different frequency components obtained by the previous 2D multi-scale transform. The fusion takes place on the 2D surfaces of the 3D array. Hence, the high- and low-frequency components and the fusion rules described subsequently still operate on 2D surfaces. In addition, because the numbers of similar blocks of corresponding reference blocks in the two images are not always equal, the smaller one is used as the final number of fused blocks. That is, if there are n blocks in A_SxR and n + m blocks in B_SxR, then we only take the first n blocks of B_SxR. As mentioned in Section 3.3.3, the defocused area usually has more similar blocks; therefore, choosing the smaller number provides more focus information for the rules.

4.2.1 High-frequency fusion rules

High-frequency coefficients usually contain salient fea-

tures, such as contours and edges. Therefore, the larger a high-frequency coefficient is, the more decisively it represents changes in the region. The basic high-frequency rule is choose max (CM), which selects the larger absolute value as the result. Another rule improved from CM is choose max by intra-scale grouping (CMIS) [15], which introduces a rule across different decomposition levels.
(i) CM
The CM rule selects the higher-energy coefficient as the fused decomposed representation. Accordingly, the fused coefficient F_x at position (i, j), in the l-th decomposition level and the k-th sub-band, can be represented as

F^{l,k}_x(i, j) = A^{l,k}_x(i, j), if |A^{l,k}_x(i, j)| > |B^{l,k}_x(i, j)|; B^{l,k}_x(i, j), otherwise    (11)

where A^{l,k}_x and B^{l,k}_x denote the magnitude coefficients of the respective input blocks.
(ii) CMIS
Since each high-frequency coefficient is correlated with others at different scales and directions, the simple CM rule cannot be well combined with the multi-scale decomposition. Therefore, all the high-frequency coefficients at different scales and directions should be compared through a composite result, that is

F^{l,k}_x(i, j) = A^{l,k}_x(i, j), if Σ_{k=1}^{K} |A^{l,k}_x(i, j)| > Σ_{k=1}^{K} |B^{l,k}_x(i, j)|; B^{l,k}_x(i, j), otherwise    (12)

where the judgment condition is the summation over the K sub-band coefficients, so it connects each decomposition level and direction to determine the fused coefficients.

4.2.2 Low-frequency fusion rules

The low-frequency coefficient fusion uses two kinds of fusion rules: one is the simple averaging rule, and the other is an effective rule based on region energy.
(i) Averaging
The low-frequency coefficients reflect the background information. Therefore, significant salient features may not be obtainable from them, even with a high-pass requirement.
Hence, we usually use the averaging operation:

F_x(i,j) = \frac{A_x(i,j) + B_x(i,j)}{2}    (13)

where A_x and B_x denote the approximation coefficients of the input blocks respectively.

(ii) Region energy

For the low-frequency component, if we only take some algebraic method like averaging, it is easy to lose some approximate information and cause a larger gray difference [22]. Therefore, we adopt the fusion rule based on region energy [14]. The low-frequency sub-image of each image block is subdivided again into several 3 x 3 or 5 x 5 pixel regions and then its region energy is calculated. The region energy centered on coordinate (i, j) can be expressed as E_n(i,j), where n represents coefficient A_x or B_x. The formula could be

E_n = \sum_{i=1}^{N} \sum_{j=1}^{N} n(i,j)^2    (14)

where N is the side length of the region. Therefore, the fusion rule is

F_x(i,j) = \begin{cases} A_x(i,j), & E_{A_x}(i,j) \geqslant E_{B_x}(i,j) \\ B_x(i,j), & \text{otherwise} \end{cases}    (15)

where F_x represents the fused low-frequency coefficients, and E_{A_x} and E_{B_x} are the region energies of coefficients A_x and B_x.

4.3 Aggregation of blocks

A series of after-fused 3D arrays such as F_{S_{x_R}}, which share the same structure as A_{S_{x_R}}, can be obtained by the inverse 3D transform. By restoring the image blocks in these arrays to a 2D surface, the fused image F can be obtained. In general, there will be overlapping pixels between blocks; here, an averaging operation is adopted to aggregate the overlapped blocks into the 2D image. Overlap is caused by block selection, i.e., the same area of the image may be selected as part of several similar blocks, while after the transform-domain processing there might be variance among some of its pixels. For example, F_{x_m} is an image block belonging to the array F_{S_{x_M}}, whose reference block is located at x_M, while another array F_{S_{x_N}}, built from the reference block at x_N, may contain the same block F_{x_m}. To solve this, we calculate the mean value of the overlapping pixels as the final value.
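A sketch of the region-energy rule (14)-(15), assuming a 3 x 3 window centered on each pixel with zero padding at the borders (the function names and the padding choice are ours, not from the paper):

```python
import numpy as np

def region_energy(C, win=3):
    """Sum of squared coefficients over a win x win neighbourhood
    centred on each pixel (zero padding at the borders)."""
    pad = win // 2
    sq = np.pad(C.astype(float) ** 2, pad)
    E = np.zeros(C.shape, dtype=float)
    for di in range(win):            # accumulate the shifted windows
        for dj in range(win):
            E += sq[di:di + C.shape[0], dj:dj + C.shape[1]]
    return E

def fuse_low(A, B, win=3):
    """Region-energy rule: at each position take the low-frequency
    coefficient whose neighbourhood carries more energy."""
    return np.where(region_energy(A, win) >= region_energy(B, win), A, B)
```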
For overlapping blocks, [23] gives a more in-depth explanation: roughly speaking, different arrays containing overlapping image blocks are statistically correlated and biased, and each pixel included has a different variance. In image de-noising, a weighted averaging method is used, where the weights are inversely proportional to the total sample variance so as to reduce the weights of noise [13]. In image fusion, however, such a weighted method easily leads to edge smoothing; thus, a plain averaging method is adopted. Therefore, the weight coefficient ω_{x_R} for the pixel values of each image block can be defined as

\omega_{x_R} = \begin{cases} \dfrac{1}{n_{x_R}}, & n_{x_R} \geqslant 1 \\ 1, & \text{otherwise} \end{cases}    (16)
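The averaging aggregation described above can be sketched as follows: fused blocks are summed into an accumulator at their coordinates, and each pixel is divided by the number of blocks covering it (a plain mean, as opposed to the variance-based weights of [13]; the function signature is our own):

```python
import numpy as np

def aggregate(blocks, positions, image_shape, block_size):
    """Place fused 2D blocks back at their top-left coordinates and
    average wherever blocks overlap."""
    acc = np.zeros(image_shape, dtype=float)  # running sum of block values
    cnt = np.zeros(image_shape, dtype=float)  # how many blocks cover a pixel
    for blk, (y, x) in zip(blocks, positions):
        acc[y:y + block_size, x:x + block_size] += blk
        cnt[y:y + block_size, x:x + block_size] += 1.0
    cnt[cnt == 0] = 1.0                       # guard pixels covered by no block
    return acc / cnt
```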

where n_{x_R} is the number of retained non-zero coefficients in F_{S_{x_R}}, so the final fused image F may be loosely expressed as

F = \sum_{x_M \in X} \sum_{x_m \in S_{x_M}} \omega_{x_M} F_{x_m}, \quad x \in X    (17)

where x_M is the coordinate of an unspecific reference block and x_m denotes the position of a similar block included in the group located at x_M.

5. Experimental results

In order to provide an effective evaluation of the proposed framework, we carry out three groups of comparative experiments in this paper. Four different sets of 256-gray-level multi-focus natural images are employed in the experiments, and we comparatively analyze the proposed algorithm through subjective visual effects and objective evaluation criteria. In the subsequent experiments, the size of the image blocks is pixels, and the step of the sliding window is 8 pixels. For DWT, we use a three-level db2 wavelet. The decomposition level of NSCT is 4, with 2, 8, 16 and 16 directional sub-bands in the four levels respectively; for the non-subsampled filter banks, we use 9-7 as the pyramid filter and pkva as the directional filter. For NSST, we use a three-level multi-scale decomposition, the numbers of directional sub-bands of the levels are 10, 10 and 18, and the pyramid filter is maxflat. For the NSCT in BMNSCT and the NSST in BMNSST, the decompositions are both two levels. In addition, all experiments are implemented on an Intel Core i GHz with 4 GB RAM. The simulation software is MATLAB 2014a.

5.1 Evaluation criteria

The experiments use image entropy (EN) [7], average gradient (AVG) [24], normalized mutual information (MI) [25], the edge-based similarity measure (Q^{AB/F}) [26], structural similarity (SSIM) [27] and standard deviation (STD) [28] as the evaluation criteria. EN represents the richness of information: the larger the entropy, the more information the image includes. AVG is an indicator of contrast.
The larger the AVG is, the more gradation the image reveals. MI calculates how much information in the source images is transferred to the fusion result; a higher MI means the fused image contains more information about the source images. Q^{AB/F}, which uses the Sobel operator, gives the similarity between the edges transferred during the fusion process; a higher Q^{AB/F} value indicates that more edge information is retained. SSIM measures the structural similarity between the fused and source images; an SSIM value closer to 1 means a better fusion. STD indicates the distribution of pixels: the larger the STD is, the more discretely the pixel values are distributed and the more information the image contains.

5.2 Experiment with DCT

The first experiment is the comparison between the DCT and the BMDCT, and it is used to reflect the advantages of the proposed framework in transform-domain fusion. The experiment uses DCT as the control group and BMDCT as the experimental group. The results are shown in Fig. 9. As can be seen from Fig. 9, especially from the enlarged view, BMDCT has obvious advantages over DCT. The proposed framework significantly weakens the artificial texture appearing in the DCT fusion, thus making the BMDCT result smoother, flatter and more natural.

Fig. 9 Source images and fusion results of DCT and BMDCT
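For illustration, the EN and AVG criteria described in Section 5.1 might be computed as follows. These are common formulations of the two measures, not necessarily the exact implementations used in the experiments, and the function names are ours:

```python
import numpy as np

def entropy(img):
    """Image entropy (EN) of an 8-bit image: -sum(p * log2(p))
    over the non-empty histogram bins."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Average gradient (AVG): mean magnitude of the horizontal and
    vertical finite differences, a simple contrast indicator."""
    g = img.astype(float)
    gx = np.diff(g, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(g, axis=0)[:, :-1]   # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))
```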

5.3 Experiments with different transforms

This experiment evaluates the proposed framework together with various transforms. Four improved algorithms are used here: the BMDCT, the BMDWT, the BMNSCT and the BMNSST, and the fusion rules are CM and averaging. The pair of source images is shown in Fig. 10(a) and Fig. 10(b). In Fig. 11, Fig. 11(a) is the result of BMDCT; Fig. 11(b) is the result of BMDWT; Fig. 11(c) is the result of BMNSCT; Fig. 11(d) is the result of BMNSST; Fig. 11(e) and Fig. 11(i) are the difference maps of Fig. 11(a) with Fig. 10(a) and Fig. 10(b) respectively; the same pattern holds for the next three columns.

Fig. 10 Source images of experiment

Fig. 11 Fusion results of different transforms on source image Lab

In terms of subjective visual effects, among the four transforms with block matching and 3D transform, the BMNSST approach is the best one, because of its clearer edges, more abundant textures and better retention of details. The difference maps show that BMNSST has not only the best integration of the focus area, but also the least artificial textures and blocking effects, followed by BMNSCT. In addition, we also examine the transforms with the objective criteria. As shown in Table 1, the fusion method of BMNSST has the best scores on EN, STD, MI and Q^{AB/F}, and the second best on AVG. Therefore, the fused image of BMNSST retains the maximum amount of information of the source images.

Table 1 Objective criteria comparison of different fusion algorithms with different transforms on source image Lab
(criteria: EN, STD, AVG, MI, Q^{AB/F}, SSIM; methods: BMDCT, BMDWT, BMNSCT, BMNSST)

5.4 Comparison with classic methods

This experiment uses two groups of images to compare the results of some classic fusion algorithms with the BMDWT, BMNSCT and BMNSST algorithms using the improved fusion rules, i.e., CMIS and region energy (RE). The experiment is thus a horizontal comparison between the best combinations in this paper (e.g., BMNSST-CMIS) and some existing transform-domain fusion methods (e.g., DWT-MAX, NSCT-MAX and NSST-MAX). The first pair of source images, Pepsi, is shown in Fig. 10(c) and Fig. 10(d), and the second pair, Clock, is shown in Fig. 10(e) and Fig. 10(f). The experimental results of Pepsi can be seen in Fig. 12: Fig. 12(a) is the result of DWT-MAX; Fig. 12(b), Fig. 12(c), Fig. 12(d), Fig. 12(e) and Fig. 12(f) are the results of NSCT-MAX, NSST-MAX, BMDWT-CMIS, BMNSCT-CMIS and BMNSST-CMIS respectively; Fig. 12(g) and Fig. 12(m) are the difference maps of Fig. 12(a) with the source images; the same pattern holds for the next five columns. Correspondingly, the results of Clock can be seen in Fig. 13.

Fig. 12 Fusion effects of each transform domain method on source image Pepsi

Fig. 13 Fusion effects of each transform domain method on source image Clock

As can be seen from the subjective visual effects, compared with the existing algorithms, the proposed algorithm (i.e., BMNSST-CMIS) performs better on edge details, and some salient features in its result are clearer than in the others. From the comparison of the difference maps, we see that the fused image of the proposed algorithm is more similar to the source images; that is, the proposed method better restores the focus areas of the source images. Besides, the performance of BMNSCT and BMDWT is also better than that of their original transforms. In terms of objective criteria, the results for Pepsi and Clock can be seen in Table 2 and Table 3 respectively. Compared with the existing algorithms, the proposed algorithm performs relatively well on four of the six evaluation indexes. Especially on EN and MI, BMNSCT-CMIS and BMNSST-CMIS are improved significantly compared with NSCT-MAX and NSST-MAX, which shows that their results are more similar to both input images in edge structure.

Table 2 Objective criteria comparison of different transform domain fusion algorithms on source image Pepsi
(criteria: EN, STD, AVG, MI, Q^{AB/F}, SSIM; methods: DWT-MAX, NSCT-MAX, NSST-MAX, BMDWT-CMIS, BMNSCT-CMIS, BMNSST-CMIS)

Table 3 Objective criteria comparison of different transform domain fusion algorithms on source image Clock
(criteria and methods as in Table 2)

6. Conclusions

In this paper, a multi-focus image fusion framework based on block matching and 3D transform is proposed. Compared with existing methods, by using blocking and grouping, the proposed method makes it possible to further utilize spatial-domain correlation in transform-domain fusion. The algorithm forms similar blocks into 3D arrays by the block-matching steps.
Then a 3D transform, which consists of a 2D and a 1D transform, is used to transfer the blocks into transform coefficients, which are processed by the fusion rules. The final fused image is obtained from a series of fused 3D image block groups after the inverse transform by using an aggregation process. Experimental results show that the proposed algorithm outperforms traditional algorithms in terms of qualitative and quantitative evaluations. Despite the extensive blocking and matching work, the efficiency of the algorithm is yet to be improved; therefore, how to reduce the time complexity will be a main research direction in the future. Besides, the fusion rules are not discussed in depth in this paper, which also requires further study.

References

[1] HAGHIGHAT M B A, AGHAGOLZADEH A, SEYEDARABI H. Multi-focus image fusion for visual sensor networks in DCT domain. Computers & Electrical Engineering, 2011, 37(5).
[2] ZHANG Z, BLUM R S. A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application. Proceedings of the IEEE, 1999, 87(2).
[3] PAJARES G, CRUZ J M D L. A wavelet-based image fusion tutorial. Pattern Recognition, 2004, 37(9).
[4] CANDES E J. Ridgelets: theory and applications. Stanford, USA: Stanford University.
[5] COHEN R A, SCHUMAKAR L L. Curves and surfaces. Nashville: Vanderbilt University Press.
[6] DO M N, VETTERLI M. The contourlet transform: an efficient directional multi-resolution image representation. IEEE Trans. on Image Processing, 2005, 14(12).
[7] ZHANG Q, GUO B L. Multifocus image fusion using the nonsubsampled contourlet transform. Signal Processing, 2009, 89(7).
[8] WANG J, PENG J Y, FENG X Y, et al. Image fusion with nonsubsampled contourlet transform and sparse representation. Journal of Electronic Imaging, 2013, 22(4).
[9] GUO K, LABATE D. Optimally sparse multidimensional representation using shearlets. SIAM Journal on Mathematical Analysis, 2007, 39(1).
[10] NUNEZ J, OTAZU X, FORS O, et al. Multiresolution-based image fusion with additive wavelet decomposition. IEEE Trans. on Geoscience and Remote Sensing, 2002, 37(3).
[11] GENG P, WANG Z Y, ZHANG Z G, et al. Image fusion by pulse couple neural network with shearlet. Optical Engineering, 2010, 51(6).
[12] DABOV K, FOI A, KATKOVNIK V, et al. Image denoising with block-matching and 3D filtering. Proc. of SPIE-IS&T Electronic Imaging: Algorithms and Systems V, 2006, 6064.
[13] DABOV K, FOI A, KATKOVNIK V, et al. Image denoising by sparse 3D transform-domain collaborative filtering. IEEE Trans. on Image Processing, 2007, 16(8).
[14] TIAN J, CHEN J, ZHANG C. Multispectral image fusion based on fractal features. Proceedings of SPIE, 2004, 5308.
[15] BHATNAGAR G, WU Q M J, LIU Z. Directive contrast based multimodal medical image fusion in NSCT domain. IEEE Trans. on Multimedia, 2013, 15(5).
[16] KUMAR M, DASS S. A total variation-based algorithm for pixel-level image fusion. IEEE Trans. on Image Processing, 2009, 18(9).

[17] BUADES A, COLL B, MOREL J M. A review of image denoising algorithms, with a new one. SIAM Journal on Multiscale Modeling and Simulation, 2005, 4(2).
[18] DO M N, VETTERLI M. Wavelet-based texture retrieval using generalized Gaussian density and Kullback-Leibler distance. IEEE Trans. on Image Processing, 2002, 11(2).
[19] MACQUEEN J B. Some methods for classification and analysis of multivariate observations. Proc. of the 5th Berkeley Symposium on Mathematical Statistics and Probability, 1967.
[20] HÖPPNER F, KLAWONN F, KRUSE R, et al. Fuzzy cluster analysis. Chichester: Wiley.
[21] GERSHO A. On the structure of vector quantizers. IEEE Trans. on Information Theory, 1982, 28(2).
[22] JIANG P, ZHANG Q, LI J, et al. Fusion algorithm for infrared and visible image based on NSST and adaptive PCNN. Laser and Infrared, 2014, 44(1). (in Chinese)
[23] GULERYUZ O. Weighted overcomplete denoising. Proc. of the Asilomar Conference on Signals, Systems and Computers, 2003, 2.
[24] LIU S, ZHU Z, LI H, et al. Multi-focus image fusion using self-similarity and depth information in nonsubsampled shearlet transform domain. International Journal of Signal Processing, Image Processing and Pattern Recognition, 2016, 9(1).
[25] QU G H, ZHANG D L, YAN P F. Information measure for performance of image fusion. Electronics Letters, 2002, 38(7).
[26] XYDEAS C S, PETROVIC V. Objective image fusion performance measure. Electronics Letters, 2000, 36(4).
[27] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity. IEEE Trans. on Image Processing, 2004, 13(4).
[28] MIAO Q G, SHI C, XU P F, et al. Multi-focus image fusion algorithm based on shearlets. Chinese Optics Letters, 2011, 9(4).

Biographies

YANG Dongsheng received his B.S. degree in computer science from the School of Computer and Information Technology, Beijing Jiaotong University. At present, he is pursuing his M.S. degree in information and signal processing at the Institute of Information Science, Beijing Jiaotong University. His research interests include image fusion and image denoising. dsyang@bjtu.edu.cn

HU Shaohai received his B.S. and M.S. degrees from the Department of Electronic Engineering, Beihang University, in 1985 and 1988 respectively. He received his Ph.D. degree from the Institute of Information Science, Beijing Jiaotong University, in 1991, and has been a professor at the Institute of Information Science. He has co-authored more than 100 journal articles and conference proceedings, and has published three books in his research area. His research interests lie in the broad area of signal processing and information fusion, including image fusion, image denoising and sparse representation. shhu@bjtu.edu.cn

LIU Shuaiqi received his B.S. degree from the Department of Information and Computer Science, Shandong University of Science and Technology. He received his Ph.D. degree from the Institute of Information Science, Beijing Jiaotong University, in 2014, and is a teacher at Hebei University. His research interests include image processing and human-computer interaction. shdkj-1918@163.com

MA Xiaole received her B.S. degree in communication engineering from the College of Electronic and Information Engineering, Hebei University. At present, she is pursuing her Ph.D. degree in information and signal processing at the Institute of Information Science, Beijing Jiaotong University. Her research interests include image fusion and image denoising. maxiaole@bjtu.edu.cn

SUN Yuchao received his B.S. degree from the North University of China in 2013 and his M.S. degree from the Institute of Information Science, Beijing Jiaotong University, in 2016. Currently, he is a researcher at the Third Research Institute of China Electronics Technology Group Corporation. His work focuses on signal processing. @bjtu.edu.cn


More information

NSCT domain image fusion, denoising & K-means clustering for SAR image change detection

NSCT domain image fusion, denoising & K-means clustering for SAR image change detection NSCT domain image fusion, denoising & K-means clustering for SAR image change detection Yamuna J. 1, Arathy C. Haran 2 1,2, Department of Electronics and Communications Engineering, 1 P. G. student, 2

More information

Learning based face hallucination techniques: A survey

Learning based face hallucination techniques: A survey Vol. 3 (2014-15) pp. 37-45. : A survey Premitha Premnath K Department of Computer Science & Engineering Vidya Academy of Science & Technology Thrissur - 680501, Kerala, India (email: premithakpnath@gmail.com)

More information

PRINCIPAL COMPONENT ANALYSIS IMAGE DENOISING USING LOCAL PIXEL GROUPING

PRINCIPAL COMPONENT ANALYSIS IMAGE DENOISING USING LOCAL PIXEL GROUPING PRINCIPAL COMPONENT ANALYSIS IMAGE DENOISING USING LOCAL PIXEL GROUPING Divesh Kumar 1 and Dheeraj Kalra 2 1 Department of Electronics & Communication Engineering, IET, GLA University, Mathura 2 Department

More information

Image denoising in the wavelet domain using Improved Neigh-shrink

Image denoising in the wavelet domain using Improved Neigh-shrink Image denoising in the wavelet domain using Improved Neigh-shrink Rahim Kamran 1, Mehdi Nasri, Hossein Nezamabadi-pour 3, Saeid Saryazdi 4 1 Rahimkamran008@gmail.com nasri_me@yahoo.com 3 nezam@uk.ac.ir

More information

Anisotropic representations for superresolution of hyperspectral data

Anisotropic representations for superresolution of hyperspectral data Anisotropic representations for superresolution of hyperspectral data Edward H. Bosch, Wojciech Czaja, James M. Murphy, and Daniel Weinberg Norbert Wiener Center Department of Mathematics University of

More information

Latest development in image feature representation and extraction

Latest development in image feature representation and extraction International Journal of Advanced Research and Development ISSN: 2455-4030, Impact Factor: RJIF 5.24 www.advancedjournal.com Volume 2; Issue 1; January 2017; Page No. 05-09 Latest development in image

More information

Fusion of Multimodality Medical Images Using Combined Activity Level Measurement and Contourlet Transform

Fusion of Multimodality Medical Images Using Combined Activity Level Measurement and Contourlet Transform 0 International Conference on Image Information Processing (ICIIP 0) Fusion of Multimodality Medical Images Using Combined Activity Level Measurement and Contourlet Transform Sudeb Das and Malay Kumar

More information

WEINER FILTER AND SUB-BLOCK DECOMPOSITION BASED IMAGE RESTORATION FOR MEDICAL APPLICATIONS

WEINER FILTER AND SUB-BLOCK DECOMPOSITION BASED IMAGE RESTORATION FOR MEDICAL APPLICATIONS WEINER FILTER AND SUB-BLOCK DECOMPOSITION BASED IMAGE RESTORATION FOR MEDICAL APPLICATIONS ARIFA SULTANA 1 & KANDARPA KUMAR SARMA 2 1,2 Department of Electronics and Communication Engineering, Gauhati

More information

An Angle Estimation to Landmarks for Autonomous Satellite Navigation

An Angle Estimation to Landmarks for Autonomous Satellite Navigation 5th International Conference on Environment, Materials, Chemistry and Power Electronics (EMCPE 2016) An Angle Estimation to Landmarks for Autonomous Satellite Navigation Qing XUE a, Hongwen YANG, Jian

More information

Region Based Image Fusion Using SVM

Region Based Image Fusion Using SVM Region Based Image Fusion Using SVM Yang Liu, Jian Cheng, Hanqing Lu National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences ABSTRACT This paper presents a novel

More information

Study of Single Image Dehazing Algorithms Based on Shearlet Transform

Study of Single Image Dehazing Algorithms Based on Shearlet Transform Applied Mathematical Sciences, Vol. 8, 2014, no. 100, 4985-4994 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2014.46478 Study of Single Image Dehazing Algorithms Based on Shearlet Transform

More information

Forest Fire Smoke Recognition Based on Gray Bit Plane Technology

Forest Fire Smoke Recognition Based on Gray Bit Plane Technology Vol.77 (UESST 20), pp.37- http://dx.doi.org/0.257/astl.20.77.08 Forest Fire Smoke Recognition Based on Gray Bit Plane Technology Xiaofang Sun, Liping Sun 2,, Yaqiu Liu 3, Yinglai Huang Office of teaching

More information

A New Technique of Extraction of Edge Detection Using Digital Image Processing

A New Technique of Extraction of Edge Detection Using Digital Image Processing International OPEN ACCESS Journal Of Modern Engineering Research (IJMER) A New Technique of Extraction of Edge Detection Using Digital Image Processing Balaji S.C.K 1 1, Asst Professor S.V.I.T Abstract:

More information

A Laplacian Based Novel Approach to Efficient Text Localization in Grayscale Images

A Laplacian Based Novel Approach to Efficient Text Localization in Grayscale Images A Laplacian Based Novel Approach to Efficient Text Localization in Grayscale Images Karthik Ram K.V & Mahantesh K Department of Electronics and Communication Engineering, SJB Institute of Technology, Bangalore,

More information

Structural Similarity Optimized Wiener Filter: A Way to Fight Image Noise

Structural Similarity Optimized Wiener Filter: A Way to Fight Image Noise Structural Similarity Optimized Wiener Filter: A Way to Fight Image Noise Mahmud Hasan and Mahmoud R. El-Sakka (B) Department of Computer Science, University of Western Ontario, London, ON, Canada {mhasan62,melsakka}@uwo.ca

More information

Compressive sensing image-fusion algorithm in wireless sensor networks based on blended basis functions

Compressive sensing image-fusion algorithm in wireless sensor networks based on blended basis functions Tong et al. EURASIP Journal on Wireless Communications and Networking 2014, 2014:150 RESEARCH Open Access Compressive sensing image-fusion algorithm in wireless sensor networks based on blended basis functions

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion

More information

Image Segmentation Techniques for Object-Based Coding

Image Segmentation Techniques for Object-Based Coding Image Techniques for Object-Based Coding Junaid Ahmed, Joseph Bosworth, and Scott T. Acton The Oklahoma Imaging Laboratory School of Electrical and Computer Engineering Oklahoma State University {ajunaid,bosworj,sacton}@okstate.edu

More information

Real-Time Fusion of Multi-Focus Images for Visual Sensor Networks

Real-Time Fusion of Multi-Focus Images for Visual Sensor Networks Real-Time Fusion of Multi-Focus Images for Visual Sensor Networks Mohammad Bagher Akbari Haghighat, Ali Aghagolzadeh, and Hadi Seyedarabi Faculty of Electrical and Computer Engineering, University of Tabriz,

More information

Research on Multi-sensor Image Matching Algorithm Based on Improved Line Segments Feature

Research on Multi-sensor Image Matching Algorithm Based on Improved Line Segments Feature ITM Web of Conferences, 0500 (07) DOI: 0.05/ itmconf/070500 IST07 Research on Multi-sensor Image Matching Algorithm Based on Improved Line Segments Feature Hui YUAN,a, Ying-Guang HAO and Jun-Min LIU Dalian

More information

CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION

CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION 6.1 INTRODUCTION Fuzzy logic based computational techniques are becoming increasingly important in the medical image analysis arena. The significant

More information

Image Fusion Based on Wavelet and Curvelet Transform

Image Fusion Based on Wavelet and Curvelet Transform Volume-1, Issue-1, July September, 2013, pp. 19-25 IASTER 2013 www.iaster.com, ISSN Online: 2347-4904, Print: 2347-8292 Image Fusion Based on Wavelet and Curvelet Transform S. Sivakumar #, A. Kanagasabapathy

More information

ENHANCED IMAGE FUSION ALGORITHM USING LAPLACIAN PYRAMID U.Sudheer Kumar* 1, Dr. B.R.Vikram 2, Prakash J Patil 3

ENHANCED IMAGE FUSION ALGORITHM USING LAPLACIAN PYRAMID U.Sudheer Kumar* 1, Dr. B.R.Vikram 2, Prakash J Patil 3 e-issn 2277-2685, p-issn 2320-976 IJESR/July 2014/ Vol-4/Issue-7/525-532 U. Sudheer Kumar et. al./ International Journal of Engineering & Science Research ABSTRACT ENHANCED IMAGE FUSION ALGORITHM USING

More information

Medical Image Fusion Using Discrete Wavelet Transform

Medical Image Fusion Using Discrete Wavelet Transform RESEARCH ARTICLE OPEN ACCESS Medical Fusion Using Discrete Wavelet Transform Nayera Nahvi, Deep Mittal Department of Electronics & Communication, PTU, Jalandhar HOD, Department of Electronics & Communication,

More information

IMAGE FUSION PARAMETER ESTIMATION AND COMPARISON BETWEEN SVD AND DWT TECHNIQUE

IMAGE FUSION PARAMETER ESTIMATION AND COMPARISON BETWEEN SVD AND DWT TECHNIQUE IMAGE FUSION PARAMETER ESTIMATION AND COMPARISON BETWEEN SVD AND DWT TECHNIQUE Gagandeep Kour, Sharad P. Singh M. Tech Student, Department of EEE, Arni University, Kathgarh, Himachal Pardesh, India-7640

More information

Denoising an Image by Denoising its Components in a Moving Frame

Denoising an Image by Denoising its Components in a Moving Frame Denoising an Image by Denoising its Components in a Moving Frame Gabriela Ghimpețeanu 1, Thomas Batard 1, Marcelo Bertalmío 1, and Stacey Levine 2 1 Universitat Pompeu Fabra, Spain 2 Duquesne University,

More information

Evaluation of texture features for image segmentation

Evaluation of texture features for image segmentation RIT Scholar Works Articles 9-14-2001 Evaluation of texture features for image segmentation Navid Serrano Jiebo Luo Andreas Savakis Follow this and additional works at: http://scholarworks.rit.edu/article

More information

A COMPARISON OF WAVELET-BASED AND RIDGELET- BASED TEXTURE CLASSIFICATION OF TISSUES IN COMPUTED TOMOGRAPHY

A COMPARISON OF WAVELET-BASED AND RIDGELET- BASED TEXTURE CLASSIFICATION OF TISSUES IN COMPUTED TOMOGRAPHY A COMPARISON OF WAVELET-BASED AND RIDGELET- BASED TEXTURE CLASSIFICATION OF TISSUES IN COMPUTED TOMOGRAPHY Lindsay Semler Lucia Dettori Intelligent Multimedia Processing Laboratory School of Computer Scienve,

More information

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING DS7201 ADVANCED DIGITAL IMAGE PROCESSING II M.E (C.S) QUESTION BANK UNIT I 1. Write the differences between photopic and scotopic vision? 2. What

More information

A Real-time Detection for Traffic Surveillance Video Shaking

A Real-time Detection for Traffic Surveillance Video Shaking International Conference on Mechatronics, Control and Electronic Engineering (MCE 201) A Real-time Detection for Traffic Surveillance Video Shaking Yaoyao Niu Zhenkuan Pan e-mail: 11629830@163.com e-mail:

More information

A Quantitative Approach for Textural Image Segmentation with Median Filter

A Quantitative Approach for Textural Image Segmentation with Median Filter International Journal of Advancements in Research & Technology, Volume 2, Issue 4, April-2013 1 179 A Quantitative Approach for Textural Image Segmentation with Median Filter Dr. D. Pugazhenthi 1, Priya

More information

AN IMPROVED K-MEANS CLUSTERING ALGORITHM FOR IMAGE SEGMENTATION

AN IMPROVED K-MEANS CLUSTERING ALGORITHM FOR IMAGE SEGMENTATION AN IMPROVED K-MEANS CLUSTERING ALGORITHM FOR IMAGE SEGMENTATION WILLIAM ROBSON SCHWARTZ University of Maryland, Department of Computer Science College Park, MD, USA, 20742-327, schwartz@cs.umd.edu RICARDO

More information

Research on the Wood Cell Contour Extraction Method Based on Image Texture and Gray-scale Information.

Research on the Wood Cell Contour Extraction Method Based on Image Texture and Gray-scale Information. , pp. 65-74 http://dx.doi.org/0.457/ijsip.04.7.6.06 esearch on the Wood Cell Contour Extraction Method Based on Image Texture and Gray-scale Information Zhao Lei, Wang Jianhua and Li Xiaofeng 3 Heilongjiang

More information

I. INTRODUCTION. Figure-1 Basic block of text analysis

I. INTRODUCTION. Figure-1 Basic block of text analysis ISSN: 2349-7637 (Online) (RHIMRJ) Research Paper Available online at: www.rhimrj.com Detection and Localization of Texts from Natural Scene Images: A Hybrid Approach Priyanka Muchhadiya Post Graduate Fellow,

More information

Image Quality Assessment Techniques: An Overview

Image Quality Assessment Techniques: An Overview Image Quality Assessment Techniques: An Overview Shruti Sonawane A. M. Deshpande Department of E&TC Department of E&TC TSSM s BSCOER, Pune, TSSM s BSCOER, Pune, Pune University, Maharashtra, India Pune

More information

Image Denoising Methods Based on Wavelet Transform and Threshold Functions

Image Denoising Methods Based on Wavelet Transform and Threshold Functions Image Denoising Methods Based on Wavelet Transform and Threshold Functions Liangang Feng, Lin Lin Weihai Vocational College China liangangfeng@163.com liangangfeng@163.com ABSTRACT: There are many unavoidable

More information

VIDEO DENOISING BASED ON ADAPTIVE TEMPORAL AVERAGING

VIDEO DENOISING BASED ON ADAPTIVE TEMPORAL AVERAGING Engineering Review Vol. 32, Issue 2, 64-69, 2012. 64 VIDEO DENOISING BASED ON ADAPTIVE TEMPORAL AVERAGING David BARTOVČAK Miroslav VRANKIĆ Abstract: This paper proposes a video denoising algorithm based

More information

ADVANCED IMAGE PROCESSING METHODS FOR ULTRASONIC NDE RESEARCH C. H. Chen, University of Massachusetts Dartmouth, N.

ADVANCED IMAGE PROCESSING METHODS FOR ULTRASONIC NDE RESEARCH C. H. Chen, University of Massachusetts Dartmouth, N. ADVANCED IMAGE PROCESSING METHODS FOR ULTRASONIC NDE RESEARCH C. H. Chen, University of Massachusetts Dartmouth, N. Dartmouth, MA USA Abstract: The significant progress in ultrasonic NDE systems has now

More information

Change Detection in Remotely Sensed Images Based on Image Fusion and Fuzzy Clustering

Change Detection in Remotely Sensed Images Based on Image Fusion and Fuzzy Clustering International Journal of Electronics Engineering Research. ISSN 0975-6450 Volume 9, Number 1 (2017) pp. 141-150 Research India Publications http://www.ripublication.com Change Detection in Remotely Sensed

More information

Flexible Calibration of a Portable Structured Light System through Surface Plane

Flexible Calibration of a Portable Structured Light System through Surface Plane Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured

More information

An Efficient Multi-Focus Image Fusion Scheme Based On PCNN

An Efficient Multi-Focus Image Fusion Scheme Based On PCNN An Efficient Multi-Focus Fusion Scheme Based On PCNN Shruti D. Athawale 1, Sagar S. Badnerkar 2 1 ME Student, Department of Electronics & Telecommunication, GHRCEM, Amravati, India 2 Asst.Professor, Department

More information

Fingerprint Image Compression

Fingerprint Image Compression Fingerprint Image Compression Ms.Mansi Kambli 1*,Ms.Shalini Bhatia 2 * Student 1*, Professor 2 * Thadomal Shahani Engineering College * 1,2 Abstract Modified Set Partitioning in Hierarchical Tree with

More information

A A A. Fig.1 image patch. Then the edge gradient magnitude is . (1)

A A A. Fig.1 image patch. Then the edge gradient magnitude is . (1) International Conference on Information Science and Computer Applications (ISCA 013) Two-Dimensional Barcode Image Super-Resolution Reconstruction Via Sparse Representation Gaosheng Yang 1,Ningzhong Liu

More information

Comparative Analysis of Image Compression Using Wavelet and Ridgelet Transform

Comparative Analysis of Image Compression Using Wavelet and Ridgelet Transform Comparative Analysis of Image Compression Using Wavelet and Ridgelet Transform Thaarini.P 1, Thiyagarajan.J 2 PG Student, Department of EEE, K.S.R College of Engineering, Thiruchengode, Tamil Nadu, India

More information

Fabric Defect Detection Based on Computer Vision

Fabric Defect Detection Based on Computer Vision Fabric Defect Detection Based on Computer Vision Jing Sun and Zhiyu Zhou College of Information and Electronics, Zhejiang Sci-Tech University, Hangzhou, China {jings531,zhouzhiyu1993}@163.com Abstract.

More information

Texture Sensitive Image Inpainting after Object Morphing

Texture Sensitive Image Inpainting after Object Morphing Texture Sensitive Image Inpainting after Object Morphing Yin Chieh Liu and Yi-Leh Wu Department of Computer Science and Information Engineering National Taiwan University of Science and Technology, Taiwan

More information

Performance Evaluation of Fusion of Infrared and Visible Images

Performance Evaluation of Fusion of Infrared and Visible Images Performance Evaluation of Fusion of Infrared and Visible Images Suhas S, CISCO, Outer Ring Road, Marthalli, Bangalore-560087 Yashas M V, TEK SYSTEMS, Bannerghatta Road, NS Palya, Bangalore-560076 Dr. Rohini

More information

Image De-noising using Contoulets (A Comparative Study with Wavelets)

Image De-noising using Contoulets (A Comparative Study with Wavelets) Int. J. Advanced Networking and Applications 1210 Image De-noising using Contoulets (A Comparative Study with Wavelets) Abhay P. Singh Institute of Engineering and Technology, MIA, Alwar University of

More information

Image Restoration Using DNN

Image Restoration Using DNN Image Restoration Using DNN Hila Levi & Eran Amar Images were taken from: http://people.tuebingen.mpg.de/burger/neural_denoising/ Agenda Domain Expertise vs. End-to-End optimization Image Denoising and

More information

Hyperspectral and Multispectral Image Fusion Using Local Spatial-Spectral Dictionary Pair

Hyperspectral and Multispectral Image Fusion Using Local Spatial-Spectral Dictionary Pair Hyperspectral and Multispectral Image Fusion Using Local Spatial-Spectral Dictionary Pair Yifan Zhang, Tuo Zhao, and Mingyi He School of Electronics and Information International Center for Information

More information

Texture Analysis of Painted Strokes 1) Martin Lettner, Paul Kammerer, Robert Sablatnig

Texture Analysis of Painted Strokes 1) Martin Lettner, Paul Kammerer, Robert Sablatnig Texture Analysis of Painted Strokes 1) Martin Lettner, Paul Kammerer, Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image Processing

More information

An Adaptive Threshold LBP Algorithm for Face Recognition

An Adaptive Threshold LBP Algorithm for Face Recognition An Adaptive Threshold LBP Algorithm for Face Recognition Xiaoping Jiang 1, Chuyu Guo 1,*, Hua Zhang 1, and Chenghua Li 1 1 College of Electronics and Information Engineering, Hubei Key Laboratory of Intelligent

More information

QR Code Watermarking Algorithm based on Wavelet Transform

QR Code Watermarking Algorithm based on Wavelet Transform 2013 13th International Symposium on Communications and Information Technologies (ISCIT) QR Code Watermarking Algorithm based on Wavelet Transform Jantana Panyavaraporn 1, Paramate Horkaew 2, Wannaree

More information

A Fast Caption Detection Method for Low Quality Video Images

A Fast Caption Detection Method for Low Quality Video Images 2012 10th IAPR International Workshop on Document Analysis Systems A Fast Caption Detection Method for Low Quality Video Images Tianyi Gui, Jun Sun, Satoshi Naoi Fujitsu Research & Development Center CO.,

More information

Curvelet Transform with Adaptive Tiling

Curvelet Transform with Adaptive Tiling Curvelet Transform with Adaptive Tiling Hasan Al-Marzouqi and Ghassan AlRegib School of Electrical and Computer Engineering Georgia Institute of Technology, Atlanta, GA, 30332-0250 {almarzouqi, alregib}@gatech.edu

More information

Research on Clearance of Aerial Remote Sensing Images Based on Image Fusion

Research on Clearance of Aerial Remote Sensing Images Based on Image Fusion Research on Clearance of Aerial Remote Sensing Images Based on Image Fusion Institute of Oceanographic Instrumentation, Shandong Academy of Sciences Qingdao, 266061, China E-mail:gyygyy1234@163.com Zhigang

More information