An Adaptive Digital Image Watermarking Based on Image Features in Discrete Wavelet Transform Domain and General Regression Neural Network

Ayoub Taheri
Group of IT Engineering, Payam Noor University, Broujen, Iran

ABSTRACT: In this paper, a new intelligent, robust and adaptive digital watermarking technique for gray-scale images is proposed, based on a combination of the discrete wavelet transform (DWT), a human visual system (HVS) model, and a general regression neural network (GRNN). Wavelet coefficients are analyzed by an HVS model to select suitable coefficients for embedding the watermark. The watermark bits are extracted from the watermarked image by training a GRNN. Some statistical characteristics of the wavelet coefficients are used in the extraction process to improve the performance and accuracy of the watermarking algorithm. The implementation results show that the watermarking algorithm has very good robustness against a wide range of attacks.

KEYWORDS: Digital Image Watermarking, Discrete Wavelet Transform, General Regression Neural Network, Human Visual System.

1. INTRODUCTION

Digital watermarking is the technique of embedding information (a watermark) into a carrier signal (video, image, audio, text) such that the watermark can be extracted or detected later for copyright protection, content authentication, identification, fingerprinting, copy control, and broadcast monitoring [1]. The important requirements for watermarking systems are robustness, transparency, capacity, and security [2], [11]. These requirements can vary across applications. Digital watermarking can be categorized into two classes, depending on the domain in which the watermark is embedded: (i) spatial-domain watermarking and (ii) transform-domain watermarking [1]. Spatial-domain watermarking is easy and fast to implement, and it usually requires no original image for watermark extraction. However, it is susceptible to tampering and to signal-processing attacks such as compression, added noise, and filtering. Transform-domain watermarking offers more robustness under most casual signal-processing attacks.
Transform-domain watermarking algorithms usually require the original image for watermark detection. Suhail et al. [3] have proposed a robust method of embedding watermark coefficients in the DCT domain of the JPEG compression process. The watermark is extracted by comparing the watermarked image with the original image. The discrete wavelet transform (DWT) has been introduced by Hsu et al. [4] for digital image watermarking. In their work, both the host image and the watermark are wavelet-transformed and then combined. This method also strictly requires the original image to detect the watermark. Such methods are not feasible for applications with a large image database or for communication applications. Recently, some researchers have applied neural networks to design transparent and robust watermarking systems which can detect the watermark without requiring the original image. Given a network architecture, a set of training inputs, and the expected outputs, the network can learn from the training set and then classify or predict unseen data [10], [5], [12]. It is widely accepted that robust image watermarking systems should largely exploit the characteristics of the HVS in order to hide a robust watermark more effectively [6], [7]. Lewis and Knowles [8] proposed a mathematical model of the HVS that allows the estimation of noise sensitivity for any part of the transformed image. Their study showed that the wavelet decomposition closely mimics the HVS, which is very helpful for creating a visual mask in highly-perceptual compression applications [8]. The HVS has also been studied extensively by Jayant et al. [9] for signal compression applications. In that work, the Just-Noticeable Difference (JND) profile is introduced as a visual mask based on HVS characteristics for perceptual coding of signals.
2. SCHEME DESIGN

The scheme design is organized as follows. Watermark scrambling is given in section A. The HVS weighting function relative to the DWT is discussed in section B. The watermark embedding process is presented in section C. The training of the GRNN is described in section D. The watermark extraction process is shown in section E.

A. Watermark Scrambling

The original watermark is the logo of a company or institute: a black-and-white image of size 64×64 whose entries are zero and one. Scrambling can be implemented both in the spatial domain (e.g., color space, position space) and in the frequency domain of a digital image. It is regarded as a cryptographic operation on the image that allows rightful users to choose a proper algorithm and parameters easily. As a result, illegal decryption becomes more difficult, and the security of the watermark is strengthened. Scrambling an image in the spatial domain changes the correlation between pixels, leaving the image beyond recognition while maintaining the same histogram. In practical applications, a scrambling algorithm with a small computation cost and a high scrambling degree is needed. This paper applies the well-known toral automorphism mapping, the Arnold transformation [13], which was put forward by V. I. Arnold in his research on ring endomorphisms and is a special case of a toral automorphism. The Arnold transformation is described by the following formula:

    [x']   [1 1] [x]
    [y'] = [1 2] [y]  mod 64        (1)

where (x, y) are the coordinates of a point in the original image and (x', y') are its coordinates after the transformation. The constant 64 corresponds to the size of the original watermark image. The Arnold transformation changes the layout of an image by changing the coordinates of its pixels, and so scrambles the image. Furthermore, the transformation has a periodicity T: the watermark image returns to its original state after T transformations. In the recovery process, the transformation can scatter damaged pixel bits to reduce their visual impact and improve the visual effect, which is why it is often used to scramble the watermark image.
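The Arnold scrambling step can be sketched in pure Python as follows. This is an illustration, not the paper's implementation; the watermark is assumed to be stored as a list of lists, and `arnold_period` is an illustrative helper that finds the periodicity T for a given size by brute force.

```python
def arnold_once(img, n=64):
    """Apply one Arnold transformation (relation (1)) to an n-by-n image."""
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            # (x', y') = (x + y, x + 2y) mod n, i.e. the matrix [[1, 1], [1, 2]]
            out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
    return out

def arnold(img, times, n=64):
    """Apply the Arnold transformation `times` times."""
    for _ in range(times):
        img = arnold_once(img, n)
    return img

def arnold_period(n=64):
    """Smallest T such that T transformations restore any n-by-n image."""
    ident = [[(x, y) for y in range(n)] for x in range(n)]
    img, t = ident, 0
    while True:
        img = arnold_once(img, n)
        t += 1
        if img == ident:
            return t
```

With T = k1 + k2 as above, scrambling with `arnold(w, k1)` and later applying `arnold(s, T - k1)` to the extracted scrambled watermark recovers the original.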
In this paper, the scrambling process is displayed in Figure 1(a)–(d), which show the original watermark image and the effect of 6, 12, and 24 Arnold transformations; after T transformations the image is equivalent to the original, i.e., recovered. Let T = k1 + k2: the watermark image is scrambled k1 times before embedding; then, after extracting the scrambled watermark from the watermarked image, k2 further transformations recover the original extracted watermark, where k1 and k2 are secret keys. After scrambling, the watermark image is arranged into a one-dimensional array W(k), where k = 1, 2, …, 64×64.

Figure 1: Image effect after being Arnold transformed

B. DWT coefficients and HVS model

The wavelet transform is based on small waves. In 1987, wavelets became the basis of multiresolution analysis. In the two-dimensional DWT, each level of decomposition produces four bands of data: one corresponding to the low-pass band (LL) and three corresponding to the horizontal (HL), vertical (LH), and diagonal (HH) high-pass bands. The decomposed image shows an approximation image in the lowest-resolution low-pass band and three detail images in the higher bands. The low-pass band can be further decomposed to obtain another level of decomposition. The proposed method uses the wavelet domain among the frequency-domain techniques because, compared to the DCT and DFT, the wavelet transform performs a multiresolution analysis, has good localization in the frequency domain, and offers higher flexibility. Figure 2 shows three levels of decomposition. Studies in [7]-[9] have shown that the human eye is: less sensitive to noise in high-resolution bands; less sensitive to noise in those areas of the image where brightness is high or low; less sensitive to noise in highly textured areas but, among these, more sensitive near the edges.

Figure 2: Three-level decomposition of 2-D DWT coefficients
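The four-band split described above can be illustrated with a one-level 2-D Haar DWT. The paper does not name its wavelet, so Haar is assumed here purely for illustration; a full 3-level decomposition would repeat this step on the LL band.

```python
def haar_dwt2(img):
    """One-level 2-D Haar DWT of an even-sized grayscale image (list of lists).
    Returns the four sub-bands LL, HL, LH, HH, each half the size of img."""
    n, m = len(img), len(img[0])
    # Row transform: low-pass (average) half followed by high-pass (difference) half.
    rows = [[(r[2 * j] + r[2 * j + 1]) / 2 for j in range(m // 2)] +
            [(r[2 * j] - r[2 * j + 1]) / 2 for j in range(m // 2)]
            for r in img]
    # Column transform applied to the row-transformed image.
    cols = list(zip(*rows))
    t = [[(c[2 * i] + c[2 * i + 1]) / 2 for i in range(n // 2)] +
         [(c[2 * i] - c[2 * i + 1]) / 2 for i in range(n // 2)]
         for c in cols]
    full = [list(r) for r in zip(*t)]  # back to row-major order
    h, w = n // 2, m // 2
    LL = [row[:w] for row in full[:h]]  # approximation
    HL = [row[w:] for row in full[:h]]  # horizontal detail
    LH = [row[:w] for row in full[h:]]  # vertical detail
    HH = [row[w:] for row in full[h:]]  # diagonal detail
    return LL, HL, LH, HH
```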
In the proposed method, the image is subdivided into non-overlapping blocks of size 8×8; each block is transformed by a 3-level DWT, yielding an 8×8 block of DWT coefficients. For each transformed image block, the sub-band LL1 is selected for watermark embedding. In the three-level DWT, the sub-band LL1 is decomposed into the sub-bands LL3, HL3, LH3, HH3, HL2, LH2, and HH2. The LL1 sub-band has size 4×4 and is divided into four overlapping 3×3 sub-blocks of coefficients. Figure 3 shows the organization of the sub-blocks in the DWT coefficient block; the circles show the center coefficient of each sub-block. Each sub-block is denoted B(i,j) (i = 0, 1; j = 0, 1), and the coordinates of each coefficient in it are denoted B(i,j,x,y) (x = 0, 1, 2; y = 0, 1, 2). The watermark bit is inserted into the center coefficient of each sub-block B(i,j) that has the desired HVS weighting function value.

Figure 3: Organization of the four 3×3 coefficient sub-blocks in the LL1 sub-band; the circles show the center coefficient of each sub-block.

To adapt the watermarking system to the local properties of the image, we use the quantization model based on the HVS in [8], [7] to calculate a weighting function Wf(i,j) for each sub-block [11]:

    Wf(i,j) = (1/256) [ Σ_{x=0..2} Σ_{y=0..2} B(i,j,x,y) ] · Var{ B(i,j,x,y) : x, y = 0, 1, 2 }        (2)

The sub-blocks B(i,j) suitable for embedding are chosen by comparing the center coefficient value B(i,j,1,1) with the corresponding weighting function value Wf(i,j), as given by relation (3):

    B(i,j) is selected for embedding if   B(i,j,1,1) ≥ (1/4) · Wf(i,j)        (3)

C. Watermark embedding algorithm

The scrambled watermark bit W(k) is embedded in the center coefficient of a selected sub-block B(i,j) by the following relation:

    B'(i,j,1,1) = B(i,j,1,1) + α · (2W(k) − 1) · Wf(i,j)        (4)

where B(i,j,1,1) is the center coefficient of the selected sub-block, W(k) is the scrambled watermark bit, and Wf(i,j) is the corresponding HVS weighting function value for this sub-block. The constant α is the watermarking factor, which trades off the robustness and transparency of the algorithm. A suitable value for α is obtained through practical implementation and experience.
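The per-sub-block selection and embedding step can be sketched as follows. Note that relations (2)–(4) are reconstructed from a damaged source, so the exact form of Wf below (sum of coefficients times their variance, scaled by 1/256) should be read as an assumption rather than the paper's definitive formula.

```python
def weighting(sub):
    """HVS weighting function Wf for a 3x3 coefficient sub-block (relation (2))."""
    vals = [v for row in sub for v in row]
    mean = sum(vals) / 9
    var = sum((v - mean) ** 2 for v in vals) / 9
    return sum(vals) * var / 256.0

def embed_bit(sub, bit, alpha=0.1):
    """Embed one watermark bit into the center coefficient (relation (4)).
    Returns a new sub-block, or None if the block fails the selection
    test of relation (3). alpha is the watermarking factor."""
    wf = weighting(sub)
    if sub[1][1] < wf / 4:  # selection rule, relation (3)
        return None
    out = [row[:] for row in sub]
    out[1][1] = sub[1][1] + alpha * (2 * bit - 1) * wf
    return out
```

A bit of 1 pushes the center coefficient up and a bit of 0 pushes it down, each by an amount proportional to the local HVS weight, so stronger embedding lands where the eye is less sensitive.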
There is no need to save the positions of watermark embedding: if the watermarked image in the extraction process has an acceptable PSNR (peak signal-to-noise ratio), the HVS weighting function values are recovered successfully [11], which is another positive point of the algorithm. After embedding all the watermark bits in the image blocks, the IDWT is performed on each image block, and the watermarked image is obtained. Figure 4 shows the diagram of the embedding algorithm.
D. General regression neural network

Figure 4: Watermark embedding diagram

The GRNN, proposed by Donald F. Specht in [10], is a special network in the category of probabilistic neural networks (PNN). The GRNN is a one-pass learning algorithm with a highly parallel structure. This makes the GRNN a powerful tool for prediction and comparison on large data sets. A block diagram of the GRNN is illustrated in Figure 5.

Figure 5: GRNN structure

The input units are distribution units. There is no calculation at this layer; it just distributes the entire measurement vector X to all of the neurons in the pattern-unit layer. The pattern units first calculate the cluster center of the input
vector X. When a new vector X is entered into the network, it is subtracted from the corresponding stored cluster center. The squared differences are summed and fed into the activation function; the distance for pattern unit p is given by

    D_p = (X − X_p)^T · (X − X_p)        (5)

and its activation is

    f_p(X) = exp( −D_p / (2σ²) )        (6)

The signal of each pattern neuron going to the numerator neuron is weighted with the corresponding observed (target) value Y_p, so the output of the numerator neuron is

    Y_N(X) = Σ_p Y_p · f_p(X)        (7)

The weights on the signals going to the denominator neuron are one, and the output of the denominator neuron is

    Y_D(X) = Σ_p f_p(X)        (8)

The output of the GRNN is given by relation (9):

    Y(X) = Y_N(X) / Y_D(X)        (9)

In the GRNN, only the standard deviation or smoothing parameter σ, the kernel width of the Gaussian function, is subject to a search [10]. In our work, the GRNN has 9 input neurons, 9 pattern neurons, two summation neurons (a numerator neuron and a denominator neuron), and 1 output neuron. The details of how this GRNN works are described in the next section.

E. Watermark Extraction Process

The diagram of the process is shown in Figure 6. The key of the extraction procedure is feature extraction from the relationships among neighboring wavelet coefficients. The relationships among the wavelet coefficients within the selected 3×3 sub-blocks at the watermark embedding positions are treated as the training sets. Similarly to the embedding algorithm, the watermarked image is divided into 8×8 non-overlapping blocks; a three-level DWT is performed on each image block; in each DWT coefficient block, the four overlapping 3×3 sub-blocks are reorganized and, based on the HVS weighting function value, the sub-blocks with the desired features are detected. The reason for using the GRNN is its training speed: since the watermarked image is processed block by block, the speed of the algorithm can be improved.
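Relations (5)–(9) amount to a Gaussian-kernel weighted average of the stored targets. A minimal sketch of such a GRNN, with each training pair becoming one pattern unit, is:

```python
import math

class GRNN:
    """Minimal general regression neural network (relations (5)-(9))."""

    def __init__(self, sigma=0.5):
        self.sigma = sigma    # smoothing parameter (Gaussian kernel width)
        self.patterns = []    # stored training vectors X_p (pattern units)
        self.targets = []     # observed target values Y_p

    def train(self, xs, ys):
        # One-pass learning: each training pair becomes a pattern unit.
        self.patterns.extend(xs)
        self.targets.extend(ys)

    def predict(self, x):
        num = den = 0.0
        for xp, yp in zip(self.patterns, self.targets):
            d = sum((a - b) ** 2 for a, b in zip(x, xp))   # relation (5)
            f = math.exp(-d / (2 * self.sigma ** 2))       # relation (6)
            num += yp * f                                  # relation (7)
            den += f                                       # relation (8)
        return num / den                                   # relation (9)
```

For inputs near a stored pattern the output approaches that pattern's target, which is what lets the extractor threshold the output into a watermark bit.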
The input vector of the GRNN is constructed as follows:

    X = { δ_p(i,j) : p = 1, 2, …, 9 }        (10)

where

    δ_1(i,j) = B(i,j,1,1) − Avg{B(i,j)}        (11)

    δ_p(i,j) = B(i,j,1,1) − B(i,j,x,y),   (x, y) ≠ (1, 1),  p = 2, …, 9        (12)

    Avg{B(i,j)} = (1/8) [ Σ_{x=0..2} Σ_{y=0..2} B(i,j,x,y) − B(i,j,1,1) ]        (13)

The desired output of the GRNN is

    Y = { +( B(i,j,1,1) − δ(i,j) )   if W(k) = 1
        { −( B(i,j,1,1) − δ(i,j) )   if W(k) = 0        (14)

and the extracted watermark bit is

    W'(k) = { 1   if Y ≥ 0
            { 0   otherwise        (15)
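The nine-element GRNN input of relations (10)–(13) — the center-minus-neighborhood-average difference plus the eight center-minus-neighbor differences — can be sketched as:

```python
def features(sub):
    """Build the 9-element GRNN input vector from a 3x3 coefficient sub-block:
    the center minus the average of its 8 neighbors (relations (11), (13)),
    followed by the 8 center-minus-neighbor differences (relation (12))."""
    center = sub[1][1]
    total = sum(v for row in sub for v in row)
    avg = (total - center) / 8.0          # relation (13): mean of the 8 neighbors
    x = [center - avg]                    # relation (11)
    for i in range(3):
        for j in range(3):
            if (i, j) != (1, 1):
                x.append(center - sub[i][j])  # relation (12)
    return x
```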
In relation (14), W(k) is the original scrambled watermark bit; in other words, recovering the watermark bits in the extraction process requires the original scrambled watermark. The extracted watermark bit is calculated by relation (15), in which Y is the final GRNN output for the detected DWT coefficient sub-block B(i,j). After recovering all the watermark bits, W'(k) is the extracted watermark sequence, which is descrambled by the Arnold transform and the key k2, and the extracted watermark logo image is obtained.

Figure 6: Watermark extraction diagram

3. IMPLEMENTATION RESULTS

The original and watermarked images, of size 512×512, are shown in Figure 7 and Figure 8. The Gold Hill image has been used to implement the watermarking algorithm. The original watermark is a binary image of size 64×64, shown in Figure 9. The watermarks extracted after several kinds of attack on the watermarked image are shown in Figure 10. The attacks performed on the watermarked images are as follows: Gaussian noise; 3×3 median filtering; low-pass filtering; resizing the image by 1/5; JPEG compression with quality factors of 10, 25, 50, and 90; and finally JPEG 2000 compression with bit rate 3.

Figure 7: Original Gold Hill image.
Figure 8: Watermarked Gold Hill image.
The similarity between the extracted watermark image and the original watermark image, according to relation (16), along with the peak signal-to-noise ratio (PSNR) between the watermarked image and the original image, according to relation (17), was calculated after performing each of the mentioned attacks on the watermarked image; the results are collected in Table 1.

    SIM(W, W') = (W · W') / (W · W)        (16)

    PSNR = 10 log10( 255² / ( (1/(M×N)) Σ_{i,j} ( I(i,j) − I_w(i,j) )² ) )        (17)

In relation (16), W is the original watermark and W' is the extracted logo watermark image. The dot operation in this relation denotes the sum of the products of the corresponding entries of the matrices W and W'; the square operation denotes the sum of the products of each entry of W with itself. In relation (17), I and I_w are the original and watermarked images of size M×N.

Figure 9: Original watermark
Figure 10: Extracted logo watermarks after several kinds of watermarking attack on the watermarked image.

TABLE 1: IMPLEMENTATION RESULTS AND COMPARISONS

    Kind of attack            Our method           Method in [14]
                              SIM     PSNR         SIM     PSNR
    Gaussian noise            97.1    31.0         93.5    31.44
    Low-pass filter           95.0    29.5         -       -
    Median filter             90.5    31.5         85.95   32.7
    Scaling 1/5               83.6    24.2         88.1    28.5
    JPEG 90%                  98.5    42.7         91.2    37.7
    JPEG 50%                  96.7    41.3         89.4    33.5
    JPEG 25%                  92.4    36.1         86.3    29.8
    JPEG 10%                  89.3    30.0         81.2    24.1
    JPEG 2000, bit rate 3     84.2    32.9         -       -
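The two quality measures can be sketched directly from relations (16) and (17). The SIM normalization below (dividing by W·W rather than a square-root norm) follows the reconstruction of relation (16) above and should be read as an assumption.

```python
import math

def sim(w, w_ext):
    """Similarity of relation (16): (W . W') / (W . W) for flattened watermarks."""
    dot = sum(a * b for a, b in zip(w, w_ext))
    norm = sum(a * a for a in w)
    return dot / norm

def psnr(orig, marked):
    """PSNR of relation (17) between two equal-sized 8-bit grayscale images."""
    diffs = [(a - b) ** 2
             for ro, rm in zip(orig, marked)
             for a, b in zip(ro, rm)]
    mse = sum(diffs) / len(diffs)      # mean squared error over M*N pixels
    if mse == 0:
        return float("inf")            # identical images
    return 10 * math.log10(255.0 ** 2 / mse)
```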
REFERENCES

Periodicals:

[1]. Min Wu and Bede Liu, "Data hiding in image and video: Part I—Fundamental issues and solutions," IEEE Trans. Image Processing, vol. 12, no. 6, pp. 685-695, June 2003.
[2]. Benoit Macq, Jana Dittmann, and Edward J. Delp, "Benchmarking of image watermarking algorithms for digital rights management," Proc. IEEE, vol. 92, no. 6, June 2004.
[3]. Mohamed A. Suhail and Mohammad S. Obaidat, "Digital watermarking-based DCT and JPEG model," IEEE Trans. Instrum. Meas., vol. 52, no. 5, pp. 1640-1647, October 2003.
[4]. Chiou-Ting Hsu and Ja-Ling Wu, "Multiresolution watermarking for digital images," IEEE Trans. Circuits and Systems, vol. 45, no. 8, pp. 1097-1101, August 1998.
[5]. Pao-Ta Yu, Hung-Hsu Tsai, and Jyh-Shyan Lin, "Digital watermarking based on neural networks for color images," Signal Processing (Elsevier), vol. 81, pp. 663-671, 2001.
[6]. Ahmed H. Tewfik and Mitchell Swanson, "Data hiding for multimedia personalization, interaction, and protection," IEEE Signal Processing Magazine, vol. 14, pp. 41-44, July 1997.
[7]. Mauro Barni, Franco Bartolini, and Alessandro Piva, "Improved wavelet-based watermarking through pixel-wise masking," IEEE Trans. Image Processing, vol. 10, no. 5, pp. 783-791, May 2001.
[8]. A. S. Lewis and G. Knowles, "Image compression using the 2-D wavelet transform," IEEE Trans. Image Processing, vol. 1, no. 2, pp. 244-250, April 1992.
[9]. Nikil Jayant, James Johnston, and Robert Safranek, "Signal compression based on models of human perception," Proc. IEEE, vol. 81, no. 10, October 1993.
[10]. Donald F. Specht, "A general regression neural network," IEEE Trans. Neural Networks, vol. 2, no. 6, November 1991.

Books:

[11]. Jeng-Shyang Pan, Hsiang-Cheh Huang, and Lakhmi C. Jain, Intelligent Watermarking Techniques. New Jersey: World Scientific, 2004, p. 85.
[12]. Simon Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed. Pearson, 1999, p. 83.
[13]. D. K. Arrowsmith and C. M. Place, An Introduction to Dynamical Systems. Cambridge Univ. Press, 1990.

Papers from Conference Proceedings (Published):

[14].
Qun-ting Yang, Tie-gang Gao, and Li Fan, "A Novel Robust Watermarking Based on Neural Network," International Conference on Intelligent Computing and Integrated Systems, pp. 71-75, 2010.