Deep learning is a good steganalysis tool when embedding key is reused for different images, even if there is a cover source-mismatch

Deep learning is a good steganalysis tool when embedding key is reused for different images, even if there is a cover source-mismatch

Lionel PIBRE (2,3), Jérôme PASQUET (2,3), Dino IENCO (2,3), Marc CHAUMONT (1,2,3)
(1) University of Nîmes, France, (2) University of Montpellier, France, (3) CNRS, Montpellier, France

Media Watermarking, Security, and Forensics, IS&T Int. Symp. on Electronic Imaging, San Francisco, California, USA, 14-18 Feb. 2016. Presented March 3, 2016.

The big promise of CNN... Superlatives: lots of enthusiasm, fresh ideas, amazing results, ...

            RM+EC    CNN     FNN
  Max       24.93%   7.94%   8.92%
  Min       24.21%   7.01%   8.44%
  Variance  0.14     0.12    0.16
  Average   24.67%   7.40%   8.66%

Table: Steganalysis results (P_E) with S-UNIWARD, 0.4 bpp, clairvoyant scenario, for RM+EC, CNN, and FNN.

But... the experimental setup was artificial...

Outline: 1. CNN, 2. Story, 3. Experiments, 4. Conclusion

An example of Convolutional Neural Network

[Figure: Qian et al. Convolutional Neural Network, inspired from the Krizhevsky et al. 2012 network. A 256x256 image is high-pass filtered by F(0) into a 252x252 image; five convolutional layers of 16 kernels each produce 16 feature maps of sizes 124x124, 61x61, 29x29, 13x13, and 4x4; two fully connected layers of 128 neurons and a softmax deliver the label 0/1.]

Detection percentage only 3% to 4% lower than EC + RM.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks," in Advances in Neural Information Processing Systems 25, NIPS 2012.
Yinlong Qian, Jing Dong, Wei Wang, and Tieniu Tan, "Deep Learning for Steganalysis via Convolutional Neural Networks," in Proceedings of SPIE Media Watermarking, Security, and Forensics 2015.

Convolutional Neural Network: Preliminary filter

$$F^{(0)} = \frac{1}{12}\begin{pmatrix} -1 & 2 & -2 & 2 & -1 \\ 2 & -6 & 8 & -6 & 2 \\ -2 & 8 & -12 & 8 & -2 \\ 2 & -6 & 8 & -6 & 2 \\ -1 & 2 & -2 & 2 & -1 \end{pmatrix}$$

CNNs converge much more slowly without this preliminary high-pass filtering.
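As a concrete illustration, a minimal numpy/scipy sketch of this filtering step, assuming the F(0) kernel shown above (this is not the authors' code):

```python
import numpy as np
from scipy.signal import convolve2d

# High-pass "KV" kernel F(0) from the slide, 1/12 factor included.
F0 = np.array([[-1,  2,  -2,  2, -1],
               [ 2, -6,   8, -6,  2],
               [-2,  8, -12,  8, -2],
               [ 2, -6,   8, -6,  2],
               [-1,  2,  -2,  2, -1]], dtype=np.float64) / 12.0

def highpass(image):
    """Turn a 256x256 cover/stego image into the 252x252 residual fed to the CNN."""
    return convolve2d(image, F0, mode='valid')  # 'valid': 256 -> 252
```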

Convolutional Neural Network: Layers

[Figure: Qian et al. Convolutional Neural Network.]

Inside one layer, successive steps: a convolution step, the application of an activation function, a pooling step, and a normalization step.

Convolutional Neural Network: Convolutions

[Figure: Qian et al. Convolutional Neural Network.]

First layer:
$$\tilde{I}^{(1)}_k = I^{(0)} \star F^{(1)}_k \quad (1)$$

Other layers:
$$\tilde{I}^{(l)}_k = \sum_{i=1}^{K^{(l-1)}} I^{(l-1)}_i \star F^{(l)}_{k,i} \quad (2)$$
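For readers who prefer code, a naive numpy sketch of equation (2); function and variable names are ours, and no stride or padding options are modeled:

```python
import numpy as np
from scipy.signal import convolve2d

def conv_layer(inputs, kernels):
    """inputs:  list of K^(l-1) feature maps I_i^(l-1), each a 2-D array.
    kernels: array of shape (K^(l), K^(l-1), h, w) holding the F_{k,i}^(l).
    Returns the K^(l) maps  I~_k^(l) = sum_i I_i^(l-1) * F_{k,i}^(l)  of eq. (2)."""
    out = []
    for k in range(kernels.shape[0]):
        acc = sum(convolve2d(inputs[i], kernels[k, i], mode='valid')
                  for i in range(len(inputs)))
        out.append(acc)
    return out
```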

Convolutional Neural Network: Activation

[Figure: Qian et al. Convolutional Neural Network.]

Possible activation functions:
- absolute value: f(x) = |x|,
- sine: f(x) = sin(x),
- Gaussian, as in the Qian et al. network: f(x) = e^{-x^2/\sigma^2},
- ReLU (Rectified Linear Unit), as in our work: f(x) = max(0, x),
- hyperbolic tangent: f(x) = tanh(x).
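The same functions written out in Python (σ in the Gaussian is a hyper-parameter; the default below is a placeholder):

```python
import numpy as np

absolute = lambda x: np.abs(x)                             # f(x) = |x|
sine     = lambda x: np.sin(x)                             # f(x) = sin(x)
gaussian = lambda x, sigma=1.0: np.exp(-x**2 / sigma**2)   # Qian et al.
relu     = lambda x: np.maximum(0.0, x)                    # used in our work
tanh     = lambda x: np.tanh(x)                            # hyperbolic tangent
```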

Convolutional Neural Network: Pooling

[Figure: Qian et al. Convolutional Neural Network.]

Pooling is a local operation computed on a neighborhood: a local average (preserves the signal) or a local maximum (translation invariance), plus a sub-sampling operation. For our artificial experiments, the pooling was not necessary.
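A minimal sketch of both pooling variants over non-overlapping p x p neighborhoods (a simplified model; real CNN frameworks also allow overlapping windows):

```python
import numpy as np

def pool(feature_map, p=2, mode='avg'):
    """Local pooling + sub-sampling over non-overlapping p x p blocks."""
    h, w = feature_map.shape
    blocks = feature_map[:h - h % p, :w - w % p].reshape(h // p, p, w // p, p)
    if mode == 'avg':                    # local average: preserves the signal
        return blocks.mean(axis=(1, 3))
    return blocks.max(axis=(1, 3))       # local maximum: translation invariance
```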

Convolutional Neural Network: Normalization

[Figure: Qian et al. Convolutional Neural Network.]

Case where normalization is done across the maps:
$$\mathrm{norm}\left(I^{(1)}_k(x, y)\right) = \frac{I^{(1)}_k(x, y)}{\left(1 + \frac{\alpha}{\mathrm{size}} \sum_{k'=\max(0,\, k - \mathrm{size}/2)}^{\min(K,\, k - \mathrm{size}/2 + \mathrm{size})} \left(I^{(1)}_{k'}(x, y)\right)^2\right)^{\beta}}$$
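A direct, unoptimized transcription of this formula (α, β, and size are hyper-parameters; the defaults below are the usual AlexNet-style values, not necessarily those of Qian et al.):

```python
import numpy as np

def lrn_across_maps(maps, alpha=1e-4, beta=0.75, size=5):
    """maps: array of shape (K, H, W). Normalizes each map by its neighbors."""
    K = maps.shape[0]
    out = np.empty_like(maps)
    for k in range(K):
        lo = max(0, k - size // 2)          # k' = max(0, k - size/2)
        hi = min(K, k - size // 2 + size)   # k' = min(K, k - size/2 + size)
        denom = (1.0 + (alpha / size) * (maps[lo:hi] ** 2).sum(axis=0)) ** beta
        out[k] = maps[k] / denom
    return out
```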

Convolutional Neural Network: Fully Connected Network

[Figure: Qian et al. Convolutional Neural Network.]

Three layers. A softmax function normalizes the values to [0, 1]. The network delivers a value for cover (resp. for stego).
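For completeness, a numerically stable softmax sketch for the two output scores:

```python
import numpy as np

def softmax(scores):
    """scores: the 2 raw outputs (cover, stego) -> probabilities summing to 1."""
    e = np.exp(scores - np.max(scores))  # shift for numerical stability
    return e / e.sum()
```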

Our CNN

[Figure: Qian et al. Convolutional Neural Network.]

[Figure: Our Convolutional Neural Network. A 256x256 image is high-pass filtered by F(0) into a 252x252 image; Layer 1: 64 kernels of 7x7 with stride 2, giving 64 feature maps of 127x127; Layer 2: 16 kernels, giving 16 feature maps of 127x127; two fully connected layers of 1000 neurons and a softmax deliver the label 0/1.]
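A hedged PyTorch sketch of this two-layer network as described in the figure; the layer-2 kernel size and the padding are not given on the slide, so the values below are assumptions made only so the sketch runs:

```python
import torch
import torch.nn as nn

class OurCNN(nn.Module):
    """Sketch of the 2-layer CNN from the figure: 64 kernels 7x7 (stride 2),
    then 16 kernels, then two fully connected layers of 1000 neurons."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),  # ~127x127 maps
            nn.ReLU(),
            nn.Conv2d(64, 16, kernel_size=3, padding=1),  # 3x3 assumed, same size
            nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(1000), nn.ReLU(),   # input size inferred at first call
            nn.Linear(1000, 1000), nn.ReLU(),
            nn.Linear(1000, 2),               # softmax is applied in the loss
        )

    def forward(self, x):  # x: high-pass filtered image, shape (N, 1, 252, 252)
        return self.classifier(self.features(x))
```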

Outline: 1. CNN, 2. Story, 3. Experiments, 4. Conclusion

The story of that paper...

Outline: 1. CNN, 2. Story, 3. Experiments, 4. Conclusion

Experiments

- 40,000 images of size 256x256 from BOSSBase,
- S-UNIWARD at 0.4 bits per pixel,
- same embedding key and use of the simulator,
- learning on 60,000 images.

Why use the same key? We did not want to do that... A documentation error in the C++ S-UNIWARD software; Qian et al. 2015 were also misled. We discovered this key problem on the 23rd of December 2015...

About the simulator

Probabilities for modifying a pixel $x_i$, with $i \in \{1...n\}$, are:
$$p_i^{(-)} = \frac{\exp(-\lambda \rho_i^{(-)})}{Z_i}, \text{ for a } -1 \text{ modification},$$
$$p_i^{(0)} = \frac{\exp(-\lambda \rho_i^{(0)})}{Z_i}, \text{ for no modification},$$
$$p_i^{(+)} = \frac{\exp(-\lambda \rho_i^{(+)})}{Z_i}, \text{ for a } +1 \text{ modification},$$
where $\{\rho_i^{(-)}\}$, $\{\rho_i^{(0)}\}$, and $\{\rho_i^{(+)}\}$ are the changing costs, $\lambda$ is obtained in order to respect the payload constraint, and $Z_i = \exp(-\lambda \rho_i^{(-)}) + \exp(-\lambda \rho_i^{(0)}) + \exp(-\lambda \rho_i^{(+)})$.
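A sketch of how such a simulator can be realized: λ is found by binary search so that the expected payload (the sum of per-pixel ternary entropies) matches the constraint. This follows the standard construction; the names and search bounds below are ours, not the authors':

```python
import numpy as np

def embedding_probs(rho_minus, rho_zero, rho_plus, lam):
    """Per-pixel probabilities p(-), p(0), p(+) for a given lambda."""
    e = np.stack([np.exp(-lam * rho_minus),
                  np.exp(-lam * rho_zero),
                  np.exp(-lam * rho_plus)])
    return e / e.sum(axis=0)  # divide by Z_i

def find_lambda(rho_minus, rho_zero, rho_plus, payload_bits, lo=1e-3, hi=1e3):
    """Binary search on lambda: expected payload is decreasing in lambda."""
    for _ in range(60):
        mid = (lo + hi) / 2
        p = embedding_probs(rho_minus, rho_zero, rho_plus, mid)
        entropy = -(p * np.log2(p + 1e-30)).sum()  # expected embedded bits
        lo, hi = (lo, mid) if entropy < payload_bits else (mid, hi)
    return (lo + hi) / 2
```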

Using the same key...

Probabilities for modifying pixel $x_i$:
$$p_i^{(-)} = \frac{\exp(-\lambda \rho_i^{(-)})}{Z_i}, \quad p_i^{(0)} = \frac{\exp(-\lambda \rho_i^{(0)})}{Z_i}, \quad p_i^{(+)} = \frac{\exp(-\lambda \rho_i^{(+)})}{Z_i}.$$

What happens when using the same key:
- The embedding key initializes the pseudo-random-number-sequence generator,
- Whatever the image, the pseudo-random number sequence in $[0, 1]^n$ is the same,
- The sequence is used to sample the distribution (see the probabilities above),
- Whatever the image, some positions will most of the time be modified, and always with the same polarity (-1 or +1)...

This situation is artificial!!!
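A toy numpy illustration of the effect (the seeding scheme and the cost-to-probability mapping here are deliberately simplistic stand-ins, not the real simulator): with a fixed key the uniform sequence is identical for every image, so the sets of modified positions overlap far more than they would with fresh keys:

```python
import numpy as np

def modified_positions(costs, key):
    """Toy stand-in for the simulator: the key seeds the PRNG, and a pixel is
    modified when its uniform draw falls below its change probability."""
    rng = np.random.RandomState(key)    # same key => same uniform sequence
    u = rng.uniform(size=costs.shape)
    p_change = np.exp(-10.0 * costs)    # toy mapping from cost to p(-)+p(+)
    return u < p_change

# Two different "images" (different cost maps):
c1, c2 = np.random.rand(64, 64), np.random.rand(64, 64)
same = modified_positions(c1, key=42) & modified_positions(c2, key=42)
diff = modified_positions(c1, key=42) & modified_positions(c2, key=43)
print(same.sum(), ">", diff.sum())  # same key: far more shared modified sites
```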

Illustration on the cropped BOSSBase database

[Figure: Probability of change. In white the most probable sites, in black the least probable ones.]

Our best CNN in that artificial scenario

- 40,000 images of size 256x256 from BOSSBase,
- S-UNIWARD at 0.4 bits per pixel,
- same embedding key and use of the simulator,
- learning on 60,000 images.

            RM+EC    CNN     FNN
  Max       24.93%   7.94%   8.92%
  Min       24.21%   7.01%   8.44%
  Variance  0.14     0.12    0.16
  Average   24.67%   7.40%   8.66%

Table: Steganalysis results (P_E) with S-UNIWARD, 0.4 bpp, clairvoyant scenario, for RM+EC, CNN, and FNN.

But... the experimental setup was artificial... Note that with different embedding keys, the same CNN structure in 2 layers, and with more neurons, the probability of error is 38.1%... There is still hope!

Conclusion on that story

- Be careful with the software implementations!
- Be careful to use different keys for embedding!
- Be careful: the simulator only does a simulation (different from STC),
- Rich Models are under-efficient at detecting these spatial phenomena.

You will also find in the paper:
- an explanation/discussion of CNNs and the behavior of a CNN,
- a discussion of embedding keys,
- the presentation of the LIRMMBase.

End of talk. CNN is not dead... ... there are still things to do...

About LIRMMBase

LIRMMBase: a database built from a mix of the Columbia, Dresden, Photex, and Raise databases, and whose images do not come from the same cameras as the BOSSBase database.
L. Pibre, J. Pasquet, D. Ienco, and M. Chaumont, LIRMM Laboratory, Montpellier, France, June 2015.
Website: www.lirmm.fr/~chaumont/LIRMMBase.html