Robust image watermarking using local invariant features


Hae-Yeoun Lee
Korea Advanced Institute of Science and Technology, Department of Electrical Engineering and Computer Science, Guseong-dong, Yuseong-gu, Daejeon, Republic of Korea

Hyungshin Kim
Chungnam National University, Department of Computer Science and Engineering, 220 Gung-dong, Yuseong-gu, Daejeon, Republic of Korea

Heung-Kyu Lee
Korea Advanced Institute of Science and Technology, Department of Electrical Engineering and Computer Science, Guseong-dong, Yuseong-gu, Daejeon, Republic of Korea

Abstract. This paper addresses a novel robust watermarking method for digital images using local invariant features. Most previous watermarking algorithms are unable to resist geometric distortions that desynchronize the location where copyright information is inserted. We propose a watermarking method that is robust to geometric distortions. In order to resist geometric distortions, we use a local invariant feature of the image called the scale-invariant feature transform (SIFT), which is invariant to translation and scaling distortions. The watermark is inserted into the circular patches generated by the SIFT. Rotation invariance is achieved using the translation property of the polar-mapped circular patches. Our method is blind, because we do not need the original image during detection. We have performed an intensive simulation to show the robustness of the proposed method. The simulation results support the contention that our method is robust against geometric distortion attacks as well as signal-processing attacks. We have compared our results with those of other methods, and our method outperforms them. © 2006 Society of Photo-Optical Instrumentation Engineers.

Subject terms: robust watermarking; watermark synchronization; local invariant features.

Paper received Dec. 22, 2004; revised manuscript received Jul. 4, 2005; accepted for publication Jul. 21, 2005; published online Mar. 20, 2006.

1 Introduction

Following developments in computer technology that have made digital imaging techniques widely available, a variety of multimedia contents have been digitized. Digital multimedia have many advantages over analog media: they can be easily accessed, copied, and distributed multiple times without degradation of quality. However, the widespread use of digital multimedia has also brought with it problems regarding the preservation of copyright. Digital watermarking is an efficient solution for copyright protection, which inserts copyright information, the watermark, into the contents themselves. Ownership of the contents can be established by retrieving the inserted watermark.

Various attacks have been reported to be effective against watermarking methods. 1 Among them, geometric distortion is known as one of the most difficult attacks to resist. Geometric distortion desynchronizes the location of the watermark and hence causes incorrect watermark detection. In such cases, a watermark synchronization process is required to calculate the watermark location before watermark insertion and detection, and this process is crucial for the robustness of the watermarking system.

A significant amount of research related to watermark synchronization has been conducted. The use of a periodic sequence, 2 the use of templates, 3 and the use of an invariant transform 4,5 have been reported. The use of media contents is another solution for watermark synchronization, 6 and our method belongs to this approach.
Media contents represent an invariant reference for geometric distortions, so that referring to the contents can solve the problem of watermark synchronization; i.e., the location of the watermark is not related to image coordinates, but to image semantics. 7 In what follows, we refer to the location for watermark insertion and detection as the patch.

Bas et al. 6 proposed a content-based synchronization method, in which they first extract salient feature points and then decompose the image into a set of disjoint triangles through Delaunay tessellation. The sets of triangles (the patches) are used to insert and detect the watermark in the spatial domain. The drawback of this method is that the feature points extracted from the original image and from distorted images do not match. Therefore, the sets of triangles generated during watermark insertion and detection are different, and the resulting patches do not match.

Nikolaidis and Pitas 8 proposed an image-segmentation-based synchronization method. By applying an adaptive k-means clustering technique, they segment images, select the several largest regions, and fit those regions as ellipsoids. The bounding rectangles of the ellipsoids are used as the patches. The problem with this method is that the image segmentation depends on the image contents, so that image distortions severely affect the segmentation results.

Tang and Hang 9 introduced a synchronization method

that uses intensity-based feature extraction and image normalization. They extract feature points by Mexican-hat wavelet scale interaction, and disks of fixed radius centered at each feature point are normalized. These normalized disks are used as the patches. However, the normalization is sensitive to the image contents, so the robustness of these patches decreases when the image is distorted.

The selection of features is important for robust watermarking in content-based synchronization methods. We believe that local image characteristics are more useful than global ones. The scale-invariant feature transform (SIFT) extracts features by considering local image properties and is invariant to rotation, scaling, translation, and partial illumination changes. 10 In this paper, we propose a watermarking method, using the SIFT, that is robust to geometric distortions. Using the SIFT, we generate circular patches that are invariant to translation and scaling distortions. The watermark is inserted into the circular patches in an additive way in the spatial domain. Rotation invariance is achieved using the translation property of the polar-mapped circular patches. We have performed an intensive simulation to show the robustness of the proposed method with 100 test images. The simulation results confirm that our method is robust against geometric distortion attacks as well as signal-processing attacks.

This paper is organized as follows. In Sec. 2, we explain the SIFT and how to extract circular patches using it. In Sec. 3, our watermarking method is described along with an analysis of the detector performance. Simulation results are shown in Sec. 4. Section 5 concludes.

2 Local Invariant Features

Affine-invariant local features have recently been studied for object recognition and image retrieval applications. These local invariant features are highly distinctive and can be matched with high probability even under large image distortions. In content-based watermark synchronization, the robust extraction of patches determines the robustness of the watermarking system, and the consideration of local image characteristics aids the reliable extraction of patches. We propose a new synchronization method based on the SIFT.

2.1 The SIFT

The SIFT was proposed by Lowe 10 and has proved to be invariant to image rotation, scaling, translation, partial illumination changes, and projective transformations. Considering local image characteristics, the SIFT descriptor extracts features and their properties, such as the location (t1, t2), the scale s, and the orientation. The basic idea of the SIFT is to extract features through a staged filtering that identifies stable points in the scale space: (1) select candidates for features by searching for peaks in the scale space of the difference-of-Gaussians (DoG) function, (2) localize each feature using measures of its stability, and (3) assign orientations based on local image gradient directions.

Fig. 1 Scale space computed with the difference-of-Gaussians function, and the neighbors of a pixel.

In order to extract candidate locations for features, the scale space D(x, y, σ) is computed using a DoG function. As shown in Fig. 1, the original image is successively smoothed with a variable-scale (σ1, σ2, σ3) Gaussian function, and the scale-space images are calculated by subtracting two successive smoothed images. The parameter σ is the variance, called the scale, of the Gaussian function. The scale of a scale-space image is determined by the nearby scale σ1, σ2, or σ3 of the Gaussian-smoothed image.
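For illustration only, the following minimal Python sketch (not taken from the authors' implementation) builds such a DoG scale space with NumPy/SciPy, using the scale range stated above (1.0 to 32.0, multiplicative step 2):

# Illustrative DoG scale space: smooth the image with Gaussians of increasing
# scale and subtract successive results. Assumes a grayscale float image.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_scale_space(image, sigma0=1.0, sigma_max=32.0, step=2.0):
    """Return a list of (sigma, DoG image) pairs."""
    sigmas = []
    s = sigma0
    while s <= sigma_max:
        sigmas.append(s)
        s *= step
    smoothed = [gaussian_filter(image.astype(np.float64), sigma=s) for s in sigmas]
    # D(x, y, sigma_i) = L(x, y, sigma_{i+1}) - L(x, y, sigma_i)
    return [(sigmas[i], smoothed[i + 1] - smoothed[i]) for i in range(len(sigmas) - 1)]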
In these scale-space images, they retrieve all local maxima and minima by checking the closest eight neighbors in the same scale and the nine neighbors in each of the scales above and below. These extrema determine the location (t1, t2) and the scale s of the SIFT features, which are invariant to scale and orientation changes of images. In our experiment, to generate the scale-space images, we apply scales of the Gaussian function from 1.0 to 32.0 and increase the scale by multiplying by a constant factor of 2.

After candidate locations have been found, a detailed model is fitted with a 3-D quadratic function to determine accurately the location (t1, t2) and scale s of each feature. In addition, candidate locations that have low contrast or are poorly localized along edges are removed by measuring the stability of each feature using a 2-by-2 Hessian matrix H as follows:

stability = (Dxx + Dyy)² / (Dxx Dyy − Dxy²) < (e + 1)² / e,  (1)

where

H = [Dxx  Dxy; Dxy  Dyy].

Here e is the ratio of the largest to the smallest eigenvalue and is used to control stability; they use e = 10. The quantities Dxx, Dxy, and Dyy are derivatives of the scale-space images.

In order to achieve invariance to image rotation, they assign a consistent orientation to each feature. In the Gaussian-smoothed image whose scale matches that of the extracted SIFT feature, they calculate the gradient orientations of all sample points within a circular window about the feature location and form an orientation histogram. The peak in this histogram corresponds to the dominant direction of that feature.
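The extremum search and the edge-response test of Eq. (1) can be sketched as follows; this is an assumption-laden illustration (finite-difference derivatives, list-of-arrays scale space), not the original code:

# Sketch of scale-space extremum detection and the Hessian stability test (e = 10).
import numpy as np

def is_extremum(dog, i, x, y):
    # dog is a list of 2-D DoG images; requires 1 <= i <= len(dog) - 2 and interior (x, y).
    patch = np.stack([dog[i - 1][x - 1:x + 2, y - 1:y + 2],
                      dog[i][x - 1:x + 2, y - 1:y + 2],
                      dog[i + 1][x - 1:x + 2, y - 1:y + 2]])
    v = dog[i][x, y]
    return v >= patch.max() or v <= patch.min()

def passes_edge_test(dog_img, x, y, e=10.0):
    # Second derivatives approximated by finite differences.
    dxx = dog_img[x + 1, y] - 2 * dog_img[x, y] + dog_img[x - 1, y]
    dyy = dog_img[x, y + 1] - 2 * dog_img[x, y] + dog_img[x, y - 1]
    dxy = (dog_img[x + 1, y + 1] - dog_img[x + 1, y - 1]
           - dog_img[x - 1, y + 1] + dog_img[x - 1, y - 1]) / 4.0
    det = dxx * dyy - dxy ** 2
    if det <= 0:
        return False
    stability = (dxx + dyy) ** 2 / det
    return stability < (e + 1) ** 2 / e   # keep only well-localized features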

2.2 Modifications for Watermarking

In this section, we describe how to formulate circular patches for watermark insertion and detection using this descriptor. The local features from the SIFT descriptor are not directly applicable to watermarking, because the number and distribution of the features depend on image contents and textures. Moreover, the SIFT descriptor was originally devised for image-matching applications, so it extracts many features that are densely distributed over the whole image. Therefore, we adjust the number, distribution, and scale of the features and remove those features that are susceptible to watermark attacks.

The SIFT descriptor extracts features with properties such as the location (t1, t2), the scale s, and the orientation. In practice, the orientation properties extracted from the original image and from distorted images do not match precisely. Hence, we form a circular patch by using only the location (t1, t2) and scale s of the extracted SIFT features, as follows:

(x − t1)² + (y − t2)² = (ks)²,  (2)

where k is a magnification factor that controls the radius of the circular patches. The way in which this factor is determined is explained in the fourth paragraph of this subsection. These patches are invariant to image scaling and translation as well as spatial modifications. By applying a prefilter, such as a Gaussian filter, before feature extraction, we can reduce the interference of noise and increase the robustness of the extracted circular patches.

The scale of features derived from the SIFT descriptor is related to the scaling factor of the Gaussian function in the scale space. In our analysis, features whose scale is small have a low probability of being redetected, because they disappear easily when image contents are modified. Features whose scale is large also have a low probability of being redetected in distorted images, because they move easily to other locations. Moreover, using large-scale features means that patches will overlap each other, which degrades the perceptual quality of watermarked images. Therefore, we remove features whose scale is below a threshold a or above a threshold b. In our experiments, we set a and b at 2.0 and 10.0, respectively.

The SIFT descriptor considers image contents, so the extracted SIFT features have different scales s depending on the image contents. Watermark insertion and detection necessarily require interpolation to transform the rectangular watermark to match the shape of the patches, and vice versa. In order to minimize the distortion of the watermark through interpolation, the size (radius) of the patches must be similar to, or larger than, the size of the watermark. The scale s of the extracted SIFT features varies from 2.0 to 10.0. Therefore, we divide the scale of the features into two ranges and apply different magnification factors k1 and k2, which are determined empirically on the assumption that the size of the watermarked images will not be changed excessively. Although features whose size is near the boundary between the two ranges may be susceptible to scaling attacks, there are a number of circular patches in an image, so the effect of these features on the watermarking is small.
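A minimal sketch of this patch-generation rule, assuming SIFT keypoints are available as (t1, t2, s) tuples from any SIFT front end; the values of k1, k2 and the split point between the two scale ranges are hypothetical, since the paper only states that they were chosen empirically:

# Turn SIFT keypoints into circular patches of radius k*s (Eq. 2),
# keeping only features whose scale lies in [2.0, 10.0].
def circular_patches(keypoints, k1=12.0, k2=6.0, s_min=2.0, s_max=10.0, s_split=6.0):
    patches = []
    for (t1, t2, s) in keypoints:
        if not (s_min <= s <= s_max):
            continue                      # unstable: scale too small or too large
        k = k1 if s < s_split else k2     # two empirically chosen magnification factors
        patches.append({"center": (t1, t2), "radius": k * s})
    return patches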
The distribution of local features is related to the performance of watermarking systems. In other words, the distance between adjacent features must be determined carefully. If the distance is small, patches will overlap over large areas, and if the distance is large, the number of patches will not be sufficient for effective insertion of the watermark. To control the distribution of the extracted features, we apply a circular neighborhood constraint similar to that used by Bas et al., 6 in which the features with the largest strength are used to generate circular patches. The value of the DoG function is used to measure the strength of each feature. The distance D between adjacent features depends on the dimensions of the image and is quantized by the value r as follows:

D = (w + h) / r,  (3)

where the width and height of the image are denoted by w and h, respectively. The value r is a constant that controls the distance between adjacent features and is set at 16 in the insertion process and 32 in the detection process.
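A sketch of this circular neighborhood constraint, assuming each feature is a (t1, t2, s, strength) tuple with the DoG value as its strength; the greedy order is an implementation choice consistent with keeping the strongest features:

# Keep a feature only if it is at least D = (w + h) / r pixels away
# from every stronger feature already kept (r = 16 at insertion, 32 at detection).
import math

def enforce_min_distance(features, width, height, r=16):
    D = (width + height) / r
    kept = []
    for f in sorted(features, key=lambda f: f[3], reverse=True):
        if all(math.hypot(f[0] - g[0], f[1] - g[1]) >= D for g in kept):
            kept.append(f)
    return kept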

Figure 2 shows a circular patch obtained with our proposed synchronization method under spatial filtering, additive uniform noise, rotation, and scaling of the image. For convenience of identification, we show only one patch; we find that the patch is formulated robustly even when the image is distorted.

Fig. 2 Circular patch from our proposed method in (a) the original image, (b) the mean-filtered image, (c) the median-filtered image, (d) the image with additive uniform noise, (e) the 10-deg-rotated image, and (f) the image scaled by 1.2.

3 Watermarking Scheme

This section describes our watermarking scheme. We extract circular patches by the synchronization method described in Sec. 2. The 2-D watermark is generated and transformed into circular form for each patch. The circular watermarks are added to the patches. We first describe the watermark generation procedure and then explain watermark insertion and detection.

3.1 Watermark Generation

We generate a 2-D rectangular watermark that follows a Gaussian distribution, using a random number generator. To be inserted into circular patches, this watermark should be transformed so that its shape is circular. We consider the rectangular watermark to be a polar-mapped watermark and inversely polar-map it to assign the insertion location within the circular patches. In this way, a rotation attack is mapped as a translation of the rectangular watermark, and the watermark can still be detected using the correlation detector. Note that the sizes of the circular patches differ, so we generate a separate circular watermark for each patch.

Let the x and y dimensions of the rectangular watermark be denoted by M and N, respectively, and let r_M be the radius of a circular patch. As shown in Fig. 3, we divide a circular patch into homocentric regions. To generate the circular watermark, the x- and y-axes of the rectangular watermark are inversely polar-mapped into the radius and angle directions of the patch. The relation between the coordinates of the rectangular watermark and those of the circular watermark is represented as follows:

x = [(r_i − r_0) / (r_M − r_0)] M,  y = (θ / π) N  if 0 ≤ θ ≤ π,
x = [(r_i − r_0) / (r_M − r_0)] M,  y = [(2π − θ) / π] N  if π < θ ≤ 2π,  (4)

where x and y are the rectangular watermark coordinates, r_i and θ are the coordinates of the circular watermark, r_M is equal to the radius of the patch, and r_0 is a fixed fraction of r_M. In our experiments, we set r_0 to r_M/4. For effective transformation, r_0 should be larger than M/π, and the difference between r_M and r_0 should be larger than N. If these constraints are not satisfied, the rectangular watermark must be subsampled, and as a result it is difficult to transform it efficiently.

Fig. 3 Polar mapping between the rectangular watermark and the circular watermark.

To increase the robustness and invisibility of the inserted watermark, we transform the rectangular watermark so that it is mapped to only the upper half of the patch; i.e., the y-axis of the rectangular watermark is scaled by the angle of a half circle (π), not the angle of a full circle (2π). The lower half of the patch is set symmetrically with respect to the upper half (see Fig. 3).

From the standpoint of the image, the watermark constitutes a kind of noise. When noise of similar strength gathers together, we can perceive it. In our scheme, a pixel in the rectangular watermark is mapped to several adjacent pixels in the circular watermark during polar mapping. In other words, the same noise is inserted into a homocentric region of the circular patch. Therefore, if the size of the homocentric region is large, the inserted watermark is visible as an embossing effect. Through symmetrical mapping, we can make the size of the homocentric region small and thus render the watermark invisible. Moreover, we can increase the likelihood that the watermark will survive attacks such as cropping.
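A minimal sketch of the inverse polar mapping of Eq. (4), assuming nearest-neighbor index rounding (an implementation choice, not specified in the paper): the M x N rectangular watermark is wrapped onto the annulus between r_0 = r_M/4 and r_M, and the upper half is mirrored onto the lower half.

# Wrap a rectangular watermark onto a circular patch of radius r_M.
import numpy as np

def circular_watermark(rect_wm, r_M):
    M, N = rect_wm.shape
    r_0 = r_M / 4.0
    size = int(2 * r_M) + 1
    circ = np.zeros((size, size))
    cy = cx = r_M
    for u in range(size):
        for v in range(size):
            dy, dx = u - cy, v - cx
            r_i = np.hypot(dx, dy)
            if not (r_0 <= r_i <= r_M):
                continue
            theta = np.arctan2(dy, dx) % (2 * np.pi)
            x = int(round((r_i - r_0) / (r_M - r_0) * (M - 1)))
            if theta <= np.pi:
                y = int(round(theta / np.pi * (N - 1)))
            else:                      # lower half mirrors the upper half
                y = int(round((2 * np.pi - theta) / np.pi * (N - 1)))
            circ[u, v] = rect_wm[x, y]
    return circ

# Example: a 32 x 32 Gaussian watermark, as used in the experiments.
# circ = circular_watermark(np.random.randn(32, 32), r_M=60.0)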
3.2 Watermark Insertion

The first step in watermark insertion is to analyze the image contents to extract the patches. Then, the watermark is inserted repeatedly into all patches. Our watermark insertion process is shown in Fig. 4(a).

Step a. To extract circular patches, we use the SIFT descriptor, as explained in Sec. 2. A single image may contain a number of patches. We insert the watermark into all patches to increase the robustness of our scheme.

Step b.1. We generate a circular watermark whose size depends on the radius of each patch, using the method described in Sec. 3.1. We have endeavored to construct the patches so that their radius is similar to, or larger than, the x and y sizes of the rectangular watermark; thus, a pixel w in the rectangular watermark is mapped to several pixels wc in the circular watermark. This compensates for errors in the alignment of the circular patches, regarding location and scale, during watermark detection.

Step b.2. The insertion of the watermark must not affect the perceptual quality of images. This constraint has a bearing on the insertion strength of the watermark, inasmuch as it must be imperceptible to the human eye. We apply the perceptual mask as follows 13 :

Λ = (1 − NVF) α + NVF β,  (5)

where β is the lower bound on visibility in flat and smooth regions and α is the upper bound in edged and textured regions. The noise visibility function (NVF) is calculated as follows:

NVF(i, j) = 1 / [1 + θ σx²(i, j)],  θ = D / σx,max²,  (6)

where σx²(i, j) and σx,max² denote the local variance of the neighboring pixels within five pixels and its maximum, respectively, and D is a scaling constant.

Step b.3. Finally, we insert this circular watermark additively in the spatial domain. The insertion of the watermark is represented as the spatial addition of the pixels of the image and the pixels of the circular watermark as follows:

vi′ = vi + Λi wci,  where wci ~ N(0, 1).  (7)

Here vi and wci denote the pixels of the image and of the circular watermark, respectively, and Λ denotes the perceptual mask that controls the insertion strength of the watermark.
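A sketch of Eqs. (5)-(7), assuming SciPy and a 5 x 5 variance window; the default weights 5.0 and 1.0 follow the values reported in Sec. 4.2 (under the symbol assignment used here), while D = 100 is a commonly used NVF scaling constant, not a value quoted in this paper:

# Perceptual mask from the noise visibility function, and additive embedding.
import numpy as np
from scipy.ndimage import uniform_filter

def nvf_mask(image, alpha=5.0, beta=1.0, D=100.0):
    img = image.astype(np.float64)
    mean = uniform_filter(img, size=5)
    var = np.maximum(uniform_filter(img ** 2, size=5) - mean ** 2, 0.0)
    theta = D / var.max()
    nvf = 1.0 / (1.0 + theta * var)
    return (1.0 - nvf) * alpha + nvf * beta          # Lambda(i, j): strong in texture, weak in flat areas

def embed(image, circ_wm, top_left, mask):
    # circ_wm is zero outside the annulus, so plain addition over its bounding box suffices.
    out = image.astype(np.float64).copy()
    r0, c0 = top_left
    h, w = circ_wm.shape
    region = (slice(r0, r0 + h), slice(c0, c0 + w))
    out[region] += mask[region] * circ_wm            # v' = v + Lambda * wc
    return out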

3.3 Watermark Detection

Similarly to watermark insertion, the first step of watermark detection is to analyze the image contents to extract patches. The watermark is then detected from the patches. If the watermark is detected correctly from at least one patch, we can prove ownership successfully. Our watermark detection process is shown in Fig. 4(b).

Fig. 4 Frameworks for watermark insertion and detection.

Step a. To extract circular patches, we use the SIFT descriptor, as described in Sec. 2. There are several patches in an image, and we try to detect the watermark from all of them.

Step b.1. The additive watermarking method in the spatial domain inserts the watermark into the image contents as noise. Therefore, we first apply a Wiener filter to extract this noise by calculating the difference between the watermarked image and its Wiener-filtered version, and then regard that difference as the retrieved watermark. 6 As in the watermark insertion process, we compensate for the modification by perceptual masks, but this compensation does not greatly affect the performance of watermark detection.

Step b.2. To measure the similarity between the reference watermark generated during watermark insertion and the retrieved watermark, the retrieved circular watermark is converted into a rectangular watermark by applying the polar mapping introduced in Sec. 3.1. Considering the fact that the watermark is inserted symmetrically, we

take the mean value of the two semicircular areas. By this mapping, a rotation of the circular patch is represented as a translation, and hence we achieve rotation invariance for our watermarking scheme.

Step b.3. We apply circular convolution to the reference watermark and the retrieved watermark. The degree of similarity between the two, called the response of the watermark detector, is given by the maximum value of the circular convolution as follows:

similarity = max_r { Σ_{m,n} w(m, n) w*(m, (n − r) mod N) / [Σ_{m,n} w(m, n)² Σ_{m,n} w*(m, n)²]^{1/2} }  for r = 0, …, N − 1,  (8)

where w is the reference watermark and w* is the retrieved watermark. The range of similarity values is from −1.0 to 1.0. We can identify the rotation angle of the patch by finding the shift r with the maximum value. If the similarity exceeds a predefined threshold, we conclude that the reference watermark has been inserted. The method of determining the threshold is described in the following section.

Step c. As mentioned, there are several circular patches in an image. Therefore, if the watermark is detected from at least one patch, ownership is proved, and not otherwise. The fact that we insert the watermark into several circular patches, rather than just one, makes it highly likely that the proposed scheme will detect the watermark, even after image distortions.

Our watermarking scheme is robust against geometric distortion attacks as well as signal-processing attacks. Scaling and translation invariance is achieved by extracting circular patches with the SIFT descriptor. Rotation invariance is achieved by using the translation property of the polar-mapped circular patches.
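A sketch of the detector response of Eq. (8), assuming the reference and retrieved watermarks are already in rectangular form with the angle direction along the second array axis:

# Maximum normalized correlation over circular shifts along the angle axis,
# which is where a rotation of the patch appears after polar mapping.
import numpy as np

def detector_response(ref_wm, retrieved_wm):
    best, best_shift = -1.0, 0
    denom_ref = np.sqrt((ref_wm ** 2).sum())
    for r in range(retrieved_wm.shape[1]):
        shifted = np.roll(retrieved_wm, r, axis=1)
        denom = denom_ref * np.sqrt((shifted ** 2).sum())
        sim = (ref_wm * shifted).sum() / denom if denom > 0 else 0.0
        if sim > best:
            best, best_shift = sim, r
    return best, best_shift   # similarity and the shift identifying the rotation angle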
3.4 Probability of Error

Since ownership is verified by deciding whether or not the similarity exceeds a predefined threshold, the probability that our watermark detector generates errors depends on the threshold selected. We should consider both false-positive and false-negative error rates. The false-positive error rate is the probability that the watermark is detected in images that are not watermarked or are watermarked with other watermarks. The false-negative error rate is the probability that the method fails to retrieve the inserted watermark from watermarked images. In practice, it is difficult to analyze false-negative errors because of the wide variety of possible attacks. It is thus common to select the threshold based on the false-positive error rate.

In order to estimate the false-positive error probability of our watermark detector, we attempted to retrieve 100 random watermarks from 100 randomly collected images that included natural scenes and portraits. Altogether 168,500 circular patches were processed to detect the watermark, because each image contained several patches. The size of the rectangular watermark was 32 by 32 pixels. The histogram of similarity values (normalized correlation) and its gamma distribution are shown in Fig. 5.

Fig. 5 Histogram of the similarity values and its gamma distribution.

In most cases, the simplest way to estimate the error probability is to assume that the distribution of detection values follows a Gaussian, or approximately Gaussian, distribution model. 14 However, the distribution of the response of our watermark detector more nearly follows a gamma distribution model, since we take the maximum value of a circular convolution. The gamma distribution is defined as follows:

f(x; α, β) = [1 / (Γ(α) β^α)] x^(α−1) exp(−x/β)  for x ≥ 0,  and 0 otherwise,  (9)

where x is a continuous random variable. The parameters α and β satisfy α > 0 and β > 0 and are calculated using the mean and variance of the random variable x as follows:

E[x] = μx = αβ,  V[x] = σx² = αβ².  (10)

From these results, we computed the mean and variance of the detector response x and obtained the corresponding values of α and β from Eq. (10). Table 1 shows the probability of our watermark detector generating a false-positive error under this gamma distribution model, together with the corresponding threshold of the watermark detector.
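A sketch of this threshold selection, assuming SciPy: the gamma parameters are fitted by the method of moments from detector responses measured on unwatermarked content, and the threshold is the value whose upper tail probability equals a target false-positive rate.

# Gamma-model threshold selection for a target false-positive probability.
import numpy as np
from scipy.stats import gamma

def gamma_threshold(responses, p_fp=1e-9):
    mu = np.mean(responses)
    var = np.var(responses)
    alpha = mu ** 2 / var        # from E[x] = alpha * beta
    beta = var / mu              # from V[x] = alpha * beta**2
    return gamma.isf(p_fp, a=alpha, scale=beta)

def false_positive_prob(threshold, alpha, beta):
    return gamma.sf(threshold, a=alpha, scale=beta)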

Table 1 Error probability of our watermark detector and its threshold.

Table 2 Fraction of correctly redetected patches and number of detection-failure images under various attacks, for Bas et al.'s approach 6 and the proposed approach.

4 Simulation Results

This section reports on the performance of the proposed watermarking scheme. Two experiments were carried out, to measure (1) the performance of the watermark synchronization based on the SIFT descriptor and (2) the robustness of our watermarking scheme. We used 100 images randomly collected from the Internet, including such images as Lena, Baboon, Lake, and Bridge.

4.1 Performance of the SIFT Descriptor

We show the performance of our synchronization method based on the SIFT descriptor by comparing it with the method of Bas et al. 6 During watermark detection, the redetection of patches that were extracted during watermark insertion is important for robustness. In order to measure the redetection, we first extracted circular patches from both the original image and the attacked images, and then compared the locations and radii of the patches from the original image with those of the patches from the attacked images. Prior to the comparison, we transformed the locations and radii of the circular patches from geometrically attacked images back to the coordinates of the original image. If the difference between a circular patch from the original image and one from an attacked image was below 2 pixels, we regarded the patch as having been redetected correctly.

We applied various attacks: median filtering (2×2, 3×3, and 4×4), JPEG compression (quality factors 40, 50, 60, 70, 80, and 90), Gaussian filtering (3×3), additive uniform noise, centered cropping (5%, 10%, 15%, 20%, and 25%), rotation (0.25, 0.5, 1, 5, 10, and 30 deg), and scaling (0.75, 0.9, 1.1, 1.3, and 1.5). The average number of extracted circular patches from the 100 original images was measured for Bas et al.'s approach and for our approach. Comparison results for these attacks are shown in Table 2. The detection ratio refers to the ratio of the number of correctly redetected patches from attacked images to the number of extracted patches from the original images; it increases when watermark synchronization performs more robustly. Detection failure refers to the number of images, among the 100, in which no patch is redetected.

As shown in the table, the detection ratio fell in proportion to the strength of the attacks for both approaches, but our synchronization method performs better than Bas et al.'s approach for all distortions. In particular, as the strength of the attacks becomes greater, the feature

points of Bas et al.'s approach are removed or newly created. The detection ratio of patches generated by considering the relations among the feature points decreased drastically, and the number of failure images increased. 6 Our method, however, performs reliably, because we consider only local image characteristics. These results support the contention that the SIFT descriptor is a reliable technique for extracting feature patches and is useful for robust watermarking against signal-processing attacks and geometric distortion attacks.

Table 3 Fraction of correctly detected watermarked patches, number of detection-failure images, and similarity under common signal-processing attacks.

4.2 Performance of the Watermarking Scheme

We tested the performance of our watermarking scheme. The size of the rectangular watermark was 32 by 32 pixels, and the weighting factors α and β of the noise visibility function were set at 5.0 and 1.0, respectively. We targeted a 10⁻⁹ false-positive error probability for the proposed scheme by setting the threshold accordingly, and we tried to detect the inserted watermark by adjusting the radius of the circular patches in the range r − 2 to r + 2 to compensate for alignment errors induced by our synchronization method. There are several overlap areas between circular patches, but the inserted watermark in these areas is invisible to the naked eye, because we refine the modification of each pixel by perceptual masking.

The overall PSNR values between the original images and the watermarked images were greater than 40 dB. We inserted the watermark into circular patches, so images were modified only in parts; as a result, our scheme achieved high PSNR values. In highly textured images, such as the well-known Baboon image, PSNR values were relatively low, because we inserted the watermark strongly where the noise was imperceptible.

We applied most of the attacks listed in Stirmark 3.1: median filtering, JPEG compression, Gaussian filtering, additive uniform noise, linear geometric transformation, random bending, row-column removal, shearing, centered cropping, rotation, scaling, and rotation plus scaling. These attacks attempt to remove or attenuate the inserted watermark and to desynchronize its location. Simulation results under these attacks are shown in Tables 3 and 4. The detection ratio refers to the ratio of the number of correctly detected watermarked patches to the number of watermarked patches inserted. Among the 100 images, detection failure refers to the number of images in which the inserted watermark is not detected from any patch and hence we fail to prove ownership. Similarity is the average of the similarity values from correctly detected watermarked patches. In most of the attacks, our watermarking scheme could detect the inserted watermark from a considerable number of circular patches, and the similarity between the inserted and the detected watermark was high enough to prove ownership.
When we transform the rectangular watermark into a circular watermark, a pixel of the rectangular watermark is mapped to a homocentric region of a circle, which compensates for small alignment errors of the circular patches. Consequently, our watermarking scheme was able to survive even linear geometric transformation, random bending, and shearing attacks. Since we consider only the local properties of features, our scheme could also survive cropping attacks. In rotation and rotation-plus-scaling attacks, the strength of cropping and scaling increased in proportion to the degree of rotation, and hence the detection ratio fell.

Table 4 Fraction of correctly detected watermarked patches, number of detection-failure images, and similarity under geometric distortion attacks.

Although watermark synchronization performs well, when images are attacked strongly, by such methods as JPEG compression with quality factor 40, scaling by 0.7, and centered cropping of 50%, the additive watermarking method in the spatial domain fails to survive in several images. In these attacks, the inserted watermark is almost removed, attenuated, or partly cropped by the distortions. The performance in these circumstances is, nevertheless, more robust than that of other representative content-based schemes such as those of

Bas et al. 6 and Tang and Hang. 9 As described in Sec. 4.1, the synchronization method of Bas et al. shows lower performance than our synchronization method; hence, their watermarking scheme will not perform as well. According to the results of Tang and Hang, their approach can survive several signal-processing attacks, such as JPEG compression, Gaussian filtering, and additive uniform noise. However, when images are geometrically distorted or pixels are removed by median filtering, row-column removal, or cropping, their detection ratio falls considerably and the probability of detection failure is high. These simulation results support the contention that our proposed watermarking scheme is resilient to various image attacks. A particular merit of our scheme is that we insert the watermark into an image multiple times, which helps the watermark survive attacks and hence supports proof of ownership.

5 Discussion and Conclusions

The difference between the corresponding entries in Tables 2-4 represents the number of circular patches that are well synchronized but for which the additive watermarking method fails to detect the inserted watermark. As mentioned before, our watermark synchronization, based on the SIFT descriptor, can extract patches correctly even after image attacks and in images with complex textures. However, the additive watermarking method in the spatial domain cannot always detect the inserted watermark successfully in such cases.

Figure 6 shows several original images, watermarked images, and the residuals between the original and watermarked images. We have adjusted the histogram of the residual images for visibility. As may be seen from Figs. 6(a) and 6(b), the watermark is inserted into the images so as not to be visible to the naked eye. As shown in Fig. 6(c), the residuals show the locations and radii of the circular patches, how the rectangular watermark is shaped into the homocentric circle, and how the additive watermarking method inserts the watermark in the spatial domain. Our scheme achieves high PSNR values because images are only partly modified. When the image is well textured, the patches are scattered all over it, which allows the watermark to be inserted over the whole image. However, if the texture of the image is simple, for example the water area in Milk or the sky area in Lake, the watermark is concentrated in the area near the objects.

Fig. 6 (a) Original image, (b) watermarked image, and (c) residual, for the Lake, Baboon, Milk, and Bridge images.

Drawbacks of the proposed watermarking scheme are related to its vulnerability to large distortions of the aspect ratio. In addition, due to the computation time required for the SIFT descriptor and for the compensation of alignment errors, our scheme cannot be used effectively in real-time applications. Future work will focus on eliminating these drawbacks.

Our major contribution is that we have proposed a robust watermarking scheme that uses local invariant features. In order to resist geometric distortions, we extracted circular patches using the SIFT descriptor, which is invariant to translation and scaling distortions. These patches were watermarked additively in the spatial domain. Rotation invariance was achieved using the translation property of the polar-mapped circular patches. We performed an intensive simulation, and the results showed that our method is robust against geometric distortion attacks as well as signal-processing attacks.
We believe that the consideration of local features is important for the design of robust watermarking schemes, and our method is one solution that uses such features.

Acknowledgments

This work was supported by the Korea Science and Engineering Foundation (KOSEF) through the Advanced Information Technology Research Center (AITrc) and by the Ministry of Education and Human Resources Development (MOE), the Ministry of Commerce, Industry, and Energy (MOCIE), and the Ministry of Labor (MOLAB) through the fostering project of the Lab of Excellency.

References

1. F. A. P. Petitcolas, R. J. Anderson, and M. G. Kuhn, Attacks on copyright marking systems, in Proc. Int. Workshop on Information Hiding, Springer-Verlag.
2. M. Kutter, Watermarking resisting to translation, rotation and scaling, Proc. SPIE 3528.
3. S. Pereira and T. Pun, Robust template matching for affine resistant image watermark, IEEE Trans. Image Process. 9(6).
4. C.-Y. Lin and I. J. Cox, Rotation, scale and translation resilient watermarking for images, IEEE Trans. Image Process. 10(5).
5. J. J. K. O'Ruanaidh and T. Pun, Rotation, scale and translation invariant spread spectrum digital image watermarking, Signal Process. 66(3).
6. P. Bas, J.-M. Chassery, and B. Macq, Geometrically invariant watermarking using feature points, IEEE Trans. Image Process. 11(9).

7. M. Kutter, S. K. Bhattacharjee, and T. Ebrahimi, Toward second generation watermarking schemes, in IEEE Int. Conf. on Image Processing, Vol. 1.
8. A. Nikolaidis and I. Pitas, Region-based image watermarking, IEEE Trans. Image Process.
9. C.-W. Tang and H.-M. Hang, A feature-based robust digital image watermarking scheme, IEEE Trans. Signal Process. 51(4).
10. D. G. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vis. 60(2).
11. K. Mikolajczyk and C. Schmid, Scale and affine invariant interest point detectors, Int. J. Comput. Vis. 60(1).
12. T. Tuytelaars and L. V. Gool, Matching widely separated views based on affine invariant regions, Int. J. Comput. Vis. 59(1).
13. S. Voloshynovskiy, A. Herrigel, N. Baumgartner, and T. Pun, A stochastic approach to content adaptive digital image watermarking, in Proc. Int. Workshop on Information Hiding, Springer-Verlag.
14. I. J. Cox, M. L. Miller, and J. A. Bloom, Digital Watermarking, Chap. 5, Morgan Kaufmann, San Francisco.

Hae-Yeoun Lee received his BS degree in information engineering from Sung Kyun Kwan University, Seoul, Korea, in 1993, and his MS degree in computer science from Korea Advanced Institute of Science and Technology (KAIST), Korea. From 1997 to 2001, he was with the Satellite Technology Research Center (SaTReC), KAIST, as a research student. From 2001 to 2005, he was with Satrec Initiative, a company in Korea, as a senior researcher. He is currently pursuing a PhD degree in computer science at KAIST. His research interests include digital watermarking, multimedia processing, image processing, remote sensing, and digital rights management.

Hyungshin Kim received his BS degree in computer science from Korea Advanced Institute of Science and Technology (KAIST) in 1990, and his MS degree in satellite communication engineering from the University of Surrey, UK. From 1992 to 2001, he was with the Satellite Technology Research Center (SaTReC), KAIST, as a senior researcher. Since joining the real-time computing laboratory at KAIST in 1994, his research has focused on problems in multimedia signal processing and digital rights management. He is currently a professor at Chungnam National University, Korea. His research interests include digital watermarking, multimedia signal processing, embedded systems, and digital rights management.

Heung-Kyu Lee received a BS degree in electronics engineering from Seoul National University, Seoul, Korea, in 1978, and MS and PhD degrees in computer science from Korea Advanced Institute of Science and Technology (KAIST), Korea, in 1981 and 1984, respectively. Since 1986 he has been a professor in the Department of Computer Science, KAIST. He has authored or coauthored more than 100 international journal and conference papers. He has been a reviewer for many international journals, including the Journal of Electronic Imaging, Real-Time Imaging, and IEEE Transactions on Circuits and Systems for Video Technology. He was also a program chairman of many international conferences, including the International Workshop on Digital Watermarking (IWDW). He is now a director of the Korean DRM Forum. His major interests are digital watermarking, digital fingerprinting, and digital rights management.


More information

Analysis of feature-based geometry invariant watermarking

Analysis of feature-based geometry invariant watermarking Header for SPIE use Analysis of feature-based geometry invariant watermarking Mehmet U. Celik *a, Eli Saber b, Gaurav Sharma b, A.Murat Tekalp a a University of Rochester, Rochester, NY, 14627-0126 b Xerox

More information

SURF. Lecture6: SURF and HOG. Integral Image. Feature Evaluation with Integral Image

SURF. Lecture6: SURF and HOG. Integral Image. Feature Evaluation with Integral Image SURF CSED441:Introduction to Computer Vision (2015S) Lecture6: SURF and HOG Bohyung Han CSE, POSTECH bhhan@postech.ac.kr Speed Up Robust Features (SURF) Simplified version of SIFT Faster computation but

More information

Patch-based Object Recognition. Basic Idea

Patch-based Object Recognition. Basic Idea Patch-based Object Recognition 1! Basic Idea Determine interest points in image Determine local image properties around interest points Use local image properties for object classification Example: Interest

More information

A NEW DCT-BASED WATERMARKING METHOD FOR COPYRIGHT PROTECTION OF DIGITAL AUDIO

A NEW DCT-BASED WATERMARKING METHOD FOR COPYRIGHT PROTECTION OF DIGITAL AUDIO International journal of computer science & information Technology (IJCSIT) Vol., No.5, October A NEW DCT-BASED WATERMARKING METHOD FOR COPYRIGHT PROTECTION OF DIGITAL AUDIO Pranab Kumar Dhar *, Mohammad

More information

Local invariant features

Local invariant features Local invariant features Tuesday, Oct 28 Kristen Grauman UT-Austin Today Some more Pset 2 results Pset 2 returned, pick up solutions Pset 3 is posted, due 11/11 Local invariant features Detection of interest

More information

School of Computing University of Utah

School of Computing University of Utah School of Computing University of Utah Presentation Outline 1 2 3 4 Main paper to be discussed David G. Lowe, Distinctive Image Features from Scale-Invariant Keypoints, IJCV, 2004. How to find useful keypoints?

More information

HISTOGRAMS OF ORIENTATIO N GRADIENTS

HISTOGRAMS OF ORIENTATIO N GRADIENTS HISTOGRAMS OF ORIENTATIO N GRADIENTS Histograms of Orientation Gradients Objective: object recognition Basic idea Local shape information often well described by the distribution of intensity gradients

More information

Obtaining Feature Correspondences

Obtaining Feature Correspondences Obtaining Feature Correspondences Neill Campbell May 9, 2008 A state-of-the-art system for finding objects in images has recently been developed by David Lowe. The algorithm is termed the Scale-Invariant

More information

Object Recognition Algorithms for Computer Vision System: A Survey

Object Recognition Algorithms for Computer Vision System: A Survey Volume 117 No. 21 2017, 69-74 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu Object Recognition Algorithms for Computer Vision System: A Survey Anu

More information

A NEW FEATURE BASED IMAGE REGISTRATION ALGORITHM INTRODUCTION

A NEW FEATURE BASED IMAGE REGISTRATION ALGORITHM INTRODUCTION A NEW FEATURE BASED IMAGE REGISTRATION ALGORITHM Karthik Krish Stuart Heinrich Wesley E. Snyder Halil Cakir Siamak Khorram North Carolina State University Raleigh, 27695 kkrish@ncsu.edu sbheinri@ncsu.edu

More information

Lecture 10 Detectors and descriptors

Lecture 10 Detectors and descriptors Lecture 10 Detectors and descriptors Properties of detectors Edge detectors Harris DoG Properties of detectors SIFT Shape context Silvio Savarese Lecture 10-26-Feb-14 From the 3D to 2D & vice versa P =

More information

Appearance-Based Place Recognition Using Whole-Image BRISK for Collaborative MultiRobot Localization

Appearance-Based Place Recognition Using Whole-Image BRISK for Collaborative MultiRobot Localization Appearance-Based Place Recognition Using Whole-Image BRISK for Collaborative MultiRobot Localization Jung H. Oh, Gyuho Eoh, and Beom H. Lee Electrical and Computer Engineering, Seoul National University,

More information

Filtering. -If we denote the original image as f(x,y), then the noisy image can be denoted as f(x,y)+n(x,y) where n(x,y) is a cosine function.

Filtering. -If we denote the original image as f(x,y), then the noisy image can be denoted as f(x,y)+n(x,y) where n(x,y) is a cosine function. Filtering -The image shown below has been generated by adding some noise in the form of a cosine function. -If we denote the original image as f(x,y), then the noisy image can be denoted as f(x,y)+n(x,y)

More information

Prof. Feng Liu. Spring /26/2017

Prof. Feng Liu. Spring /26/2017 Prof. Feng Liu Spring 2017 http://www.cs.pdx.edu/~fliu/courses/cs510/ 04/26/2017 Last Time Re-lighting HDR 2 Today Panorama Overview Feature detection Mid-term project presentation Not real mid-term 6

More information

A Comparison and Matching Point Extraction of SIFT and ISIFT

A Comparison and Matching Point Extraction of SIFT and ISIFT A Comparison and Matching Point Extraction of SIFT and ISIFT A. Swapna A. Geetha Devi M.Tech Scholar, PVPSIT, Vijayawada Associate Professor, PVPSIT, Vijayawada bswapna.naveen@gmail.com geetha.agd@gmail.com

More information

Local Image Features

Local Image Features Local Image Features Computer Vision CS 143, Brown Read Szeliski 4.1 James Hays Acknowledgment: Many slides from Derek Hoiem and Grauman&Leibe 2008 AAAI Tutorial This section: correspondence and alignment

More information

Feature descriptors. Alain Pagani Prof. Didier Stricker. Computer Vision: Object and People Tracking

Feature descriptors. Alain Pagani Prof. Didier Stricker. Computer Vision: Object and People Tracking Feature descriptors Alain Pagani Prof. Didier Stricker Computer Vision: Object and People Tracking 1 Overview Previous lectures: Feature extraction Today: Gradiant/edge Points (Kanade-Tomasi + Harris)

More information

Object Detection by Point Feature Matching using Matlab

Object Detection by Point Feature Matching using Matlab Object Detection by Point Feature Matching using Matlab 1 Faishal Badsha, 2 Rafiqul Islam, 3,* Mohammad Farhad Bulbul 1 Department of Mathematics and Statistics, Bangladesh University of Business and Technology,

More information

A ROBUST WATERMARKING SCHEME BASED ON EDGE DETECTION AND CONTRAST SENSITIVITY FUNCTION

A ROBUST WATERMARKING SCHEME BASED ON EDGE DETECTION AND CONTRAST SENSITIVITY FUNCTION A ROBUST WATERMARKING SCHEME BASED ON EDGE DETECTION AND CONTRAST SENSITIVITY FUNCTION John N. Ellinas Department of Electronic Computer Systems,Technological Education Institute of Piraeus, 12244 Egaleo,

More information

A NEW APPROACH OF DIGITAL IMAGE COPYRIGHT PROTECTION USING MULTI-LEVEL DWT ALGORITHM

A NEW APPROACH OF DIGITAL IMAGE COPYRIGHT PROTECTION USING MULTI-LEVEL DWT ALGORITHM A NEW APPROACH OF DIGITAL IMAGE COPYRIGHT PROTECTION USING MULTI-LEVEL DWT ALGORITHM Siva Prasad K, Ganesh Kumar N PG Student [DECS], Assistant professor, Dept of ECE, Thandrapaparaya Institute of science

More information

Local Image Features

Local Image Features Local Image Features Computer Vision Read Szeliski 4.1 James Hays Acknowledgment: Many slides from Derek Hoiem and Grauman&Leibe 2008 AAAI Tutorial Flashed Face Distortion 2nd Place in the 8th Annual Best

More information

A NEW ROBUST IMAGE WATERMARKING SCHEME BASED ON DWT WITH SVD

A NEW ROBUST IMAGE WATERMARKING SCHEME BASED ON DWT WITH SVD A NEW ROBUST IMAGE WATERMARKING SCHEME BASED ON WITH S.Shanmugaprabha PG Scholar, Dept of Computer Science & Engineering VMKV Engineering College, Salem India N.Malmurugan Director Sri Ranganathar Institute

More information

Watermarking of Image Using Priority Based On Algorithms

Watermarking of Image Using Priority Based On Algorithms This work by IJARBEST is licensed under Creative Commons Attribution 4.0 International License. Available at https://www.ijarbest.com Watermarking of Image Using Priority Based On Algorithms B.Aarthi,

More information

WATERMARKING FOR LIGHT FIELD RENDERING 1

WATERMARKING FOR LIGHT FIELD RENDERING 1 ATERMARKING FOR LIGHT FIELD RENDERING 1 Alper Koz, Cevahir Çığla and A. Aydın Alatan Department of Electrical and Electronics Engineering, METU Balgat, 06531, Ankara, TURKEY. e-mail: koz@metu.edu.tr, cevahir@eee.metu.edu.tr,

More information

A Novel Real-Time Feature Matching Scheme

A Novel Real-Time Feature Matching Scheme Sensors & Transducers, Vol. 165, Issue, February 01, pp. 17-11 Sensors & Transducers 01 by IFSA Publishing, S. L. http://www.sensorsportal.com A Novel Real-Time Feature Matching Scheme Ying Liu, * Hongbo

More information

Xinbo Gao, Senior Member, IEEE, Cheng Deng, Xuelong Li, Senior Member, IEEE, and Dacheng Tao, Member, IEEE /$26.

Xinbo Gao, Senior Member, IEEE, Cheng Deng, Xuelong Li, Senior Member, IEEE, and Dacheng Tao, Member, IEEE /$26. 278 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART C: APPLICATIONS AND REVIEWS, VOL. 40, NO. 3, MAY 2010 Geometric Distortion Insensitive Image Watermarking in Affine Covariant Regions Xinbo Gao,

More information

Object Recognition with Invariant Features

Object Recognition with Invariant Features Object Recognition with Invariant Features Definition: Identify objects or scenes and determine their pose and model parameters Applications Industrial automation and inspection Mobile robots, toys, user

More information

Block Mean Value Based Image Perceptual Hashing for Content Identification

Block Mean Value Based Image Perceptual Hashing for Content Identification Block Mean Value Based Image Perceptual Hashing for Content Identification Abstract. Image perceptual hashing has been proposed to identify or authenticate image contents in a robust way against distortions

More information

A Nonoblivious Image Watermarking System Based on Singular Value Decomposition and Texture Segmentation

A Nonoblivious Image Watermarking System Based on Singular Value Decomposition and Texture Segmentation A Nonoblivious Image Watermarking System Based on Singular Value Decomposition and exture Segmentation Soroosh Rezazadeh, and Mehran Yazdi Abstract In this paper, a robust digital image watermarking scheme

More information

Robust Zero Watermarking for Still and Similar Images Using a Learning Based Contour Detection

Robust Zero Watermarking for Still and Similar Images Using a Learning Based Contour Detection Robust Zero Watermarking for Still and Similar Images Using a Learning Based Contour Detection Shahryar Ehsaee and Mansour Jamzad (&) Department of Computer Engineering, Sharif University of Technology,

More information

A Comparison of SIFT and SURF

A Comparison of SIFT and SURF A Comparison of SIFT and SURF P M Panchal 1, S R Panchal 2, S K Shah 3 PG Student, Department of Electronics & Communication Engineering, SVIT, Vasad-388306, India 1 Research Scholar, Department of Electronics

More information

Multibit Digital Watermarking Robust against Local Nonlinear Geometrical Distortions

Multibit Digital Watermarking Robust against Local Nonlinear Geometrical Distortions Multibit Digital Watermarking Robust against Local Nonlinear Geometrical Distortions Sviatoslav Voloshynovskiy Frédéric Deguillaume, and Thierry Pun CUI - University of Geneva 24, rue du Général Dufour

More information

CORRELATION BASED CAR NUMBER PLATE EXTRACTION SYSTEM

CORRELATION BASED CAR NUMBER PLATE EXTRACTION SYSTEM CORRELATION BASED CAR NUMBER PLATE EXTRACTION SYSTEM 1 PHYO THET KHIN, 2 LAI LAI WIN KYI 1,2 Department of Information Technology, Mandalay Technological University The Republic of the Union of Myanmar

More information

Performance Evaluation of Scale-Interpolated Hessian-Laplace and Haar Descriptors for Feature Matching

Performance Evaluation of Scale-Interpolated Hessian-Laplace and Haar Descriptors for Feature Matching Performance Evaluation of Scale-Interpolated Hessian-Laplace and Haar Descriptors for Feature Matching Akshay Bhatia, Robert Laganière School of Information Technology and Engineering University of Ottawa

More information

Advanced Video Content Analysis and Video Compression (5LSH0), Module 4

Advanced Video Content Analysis and Video Compression (5LSH0), Module 4 Advanced Video Content Analysis and Video Compression (5LSH0), Module 4 Visual feature extraction Part I: Color and texture analysis Sveta Zinger Video Coding and Architectures Research group, TU/e ( s.zinger@tue.nl

More information

FRAGILE WATERMARKING USING SUBBAND CODING

FRAGILE WATERMARKING USING SUBBAND CODING ICCVG 2002 Zakopane, 25-29 Sept. 2002 Roger ŚWIERCZYŃSKI Institute of Electronics and Telecommunication Poznań University of Technology roger@et.put.poznan.pl FRAGILE WATERMARKING USING SUBBAND CODING

More information

Non-Linear Masking based Contrast Enhancement via Illumination Estimation

Non-Linear Masking based Contrast Enhancement via Illumination Estimation https://doi.org/10.2352/issn.2470-1173.2018.13.ipas-389 2018, Society for Imaging Science and Technology Non-Linear Masking based Contrast Enhancement via Illumination Estimation Soonyoung Hong, Minsub

More information

DIGITAL IMAGE WATERMARKING BASED ON A RELATION BETWEEN SPATIAL AND FREQUENCY DOMAINS

DIGITAL IMAGE WATERMARKING BASED ON A RELATION BETWEEN SPATIAL AND FREQUENCY DOMAINS DIGITAL IMAGE WATERMARKING BASED ON A RELATION BETWEEN SPATIAL AND FREQUENCY DOMAINS Murat Furat Mustafa Oral e-mail: mfurat@cu.edu.tr e-mail: moral@mku.edu.tr Cukurova University, Faculty of Engineering,

More information

Distinctive Image Features from Scale-Invariant Keypoints

Distinctive Image Features from Scale-Invariant Keypoints Distinctive Image Features from Scale-Invariant Keypoints David G. Lowe Computer Science Department University of British Columbia Vancouver, B.C., Canada Draft: Submitted for publication. This version:

More information

A Keypoint Descriptor Inspired by Retinal Computation

A Keypoint Descriptor Inspired by Retinal Computation A Keypoint Descriptor Inspired by Retinal Computation Bongsoo Suh, Sungjoon Choi, Han Lee Stanford University {bssuh,sungjoonchoi,hanlee}@stanford.edu Abstract. The main goal of our project is to implement

More information

Adaptive Fuzzy Watermarking for 3D Models

Adaptive Fuzzy Watermarking for 3D Models International Conference on Computational Intelligence and Multimedia Applications 2007 Adaptive Fuzzy Watermarking for 3D Models Mukesh Motwani.*, Nikhil Beke +, Abhijit Bhoite +, Pushkar Apte +, Frederick

More information

A Robust Digital Watermarking Scheme using BTC-PF in Wavelet Domain

A Robust Digital Watermarking Scheme using BTC-PF in Wavelet Domain A Robust Digital Watermarking Scheme using BTC-PF in Wavelet Domain Chinmay Maiti a *, Bibhas Chandra Dhara b a Department of Computer Science & Engineering, College of Engineering & Management, Kolaghat,

More information

An Improved Blind Watermarking Scheme in Wavelet Domain

An Improved Blind Watermarking Scheme in Wavelet Domain An Improved Blind Watermarking Scheme in Wavelet Domain Hai Tao 1, Jasni Mohamad Zain 1, Ahmed N. Abd Alla 2, Wang Jing 1 1 Faculty of Computer Systems and Software Engineering University Malaysia Pahang

More information

Building a Panorama. Matching features. Matching with Features. How do we build a panorama? Computational Photography, 6.882

Building a Panorama. Matching features. Matching with Features. How do we build a panorama? Computational Photography, 6.882 Matching features Building a Panorama Computational Photography, 6.88 Prof. Bill Freeman April 11, 006 Image and shape descriptors: Harris corner detectors and SIFT features. Suggested readings: Mikolajczyk

More information

Analysis of Robustness of Digital Watermarking Techniques under Various Attacks

Analysis of Robustness of Digital Watermarking Techniques under Various Attacks International Journal of Engineering Research and Development e-issn : 2278-067X, p-issn : 2278-800X, www.ijerd.com Volume 2, Issue 6 (August 2012), PP. 28-34 Analysis of Robustness of Digital Watermarking

More information

Improved Geometric Warping-Based Watermarking

Improved Geometric Warping-Based Watermarking Improved Geometric Warping-Based Watermarking Dima Pröfrock, Mathias Schlauweg, Erika Müller Institute of Communication Engineering, Faculty of Computer Science and Electrical Engineering, University of

More information

Implementation and Comparison of Feature Detection Methods in Image Mosaicing

Implementation and Comparison of Feature Detection Methods in Image Mosaicing IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) e-issn: 2278-2834,p-ISSN: 2278-8735 PP 07-11 www.iosrjournals.org Implementation and Comparison of Feature Detection Methods in Image

More information

2D Image Processing Feature Descriptors

2D Image Processing Feature Descriptors 2D Image Processing Feature Descriptors Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Overview

More information

Object Recognition Tools for Educational Robots

Object Recognition Tools for Educational Robots Object Recognition Tools for Educational Robots Xinghao Pan Advised by Professor David S. Touretzky Senior Research Thesis School of Computer Science Carnegie Mellon University May 2, 2008 Abstract SIFT

More information