Advances in Natural and Applied Sciences: Efficient Illumination Correction for Camera Captured Image Documents

AENSI Journals
Advances in Natural and Applied Sciences
ISSN: 1995-0772, EISSN: 1998-1090
Journal home page: www.aensiweb.com/anas

Efficient Illumination Correction for Camera Captured Image Documents

1 Saranraj and 2 Venkateswaran
1,2 Department of Electronics and Communication Engineering, SSN College of Engineering, Chennai, India

ARTICLE INFO
Article history: Received 12 October 2014; Received in revised form 26 December 2014; Accepted 1 January 2015; Available online 25 February 2015.
Keywords: Document image processing, Illumination correction, Image reconstruction.

ABSTRACT
Background: With the advent of modern devices such as camera phones and PDAs, document images obtained by digitizing paper documents have gained increasing interest. Moreover, the availability of inexpensive capture devices that are compact, easy to use, fast and portable has popularized the use of document images. However, camera capture poses new challenges that can affect the performance of any document image processing task. In this paper we propose an illumination correction technique that uses phase congruency features and an adaptive thresholding scheme to correct illumination variation while preserving the foreground text information. Extensive simulation results on various types of document images show that the proposed method performs well, especially on camera captured documents.

© 2015 AENSI Publisher. All rights reserved.

To Cite This Article: Saranraj and Venkateswaran, Efficient Illumination Correction for Camera Captured Image Documents. Adv. in Nat. Appl. Sci., 9(6): 391-396, 2015.

INTRODUCTION

The quantity of paper documents is doubling each year and has risen into the trillions. Industries therefore aim to produce and reproduce paper documents by digitizing and storing them.
Document imaging aims at reducing piles of paper by digitizing them: it provides easy access to electronic replicas of documents, reduces storage cost and speeds retrieval. In document image processing, paper documents are first scanned and stored on a hard disk or any other required location for further processing. The final outcome has to be a compatible electronic format that makes the documents easier and quicker to access. Document image processing comprises a set of techniques and procedures for working on images of documents and converting them from pixel information into a format a computer can read. Many research studies have been carried out to solve the problems that arise in the binarization of document images characterized by degradations.

Document imaging also includes the management of paper documents, records and forms; the text information present in them can be retrieved and distributed electronically. The processed document image is an exact digitized replica of the original and allows document preservation. Document images can be economically stored, efficiently searched and browsed, quickly transmitted, and coherently linked together, and are hence superior to paper documents. They can be retrieved and manipulated from remote locations by multiple users with existing information technology, and an unlimited number of hard copy printouts can be made for the user's convenience.

Camera captured images can suffer several degradations due to uneven illumination, blur, poor resolution, poor contrast, shadows, perspective distortion and 3D deformation. These degradations reduce the performance of any optical character recognition (OCR) based system.
Choosing a higher threshold value can remove such degradation, but most of the degraded parts are then set to black because of the strong variation between foreground and background regions. The proposed method therefore overcomes this by separately improving the text quality and removing the background noise, and then combining the two results to obtain a correctly illuminated image.

Corresponding Author: Saranraj, Department of Electronics and Communication Engineering, SSN College of Engineering, Chennai, India.

1. Previous Work:

Selected previous works on the binarization of document images are summarized below. The common procedure is to divide the document image pixels into three sets, namely foreground pixels, background pixels and uncertain pixels; a classifier is then applied to the uncertain pixels to assign them to foreground or background, based on the pre-selected foreground and background sets.

Nafchi et al. proposed a phase-based binarization of ancient documents (Nafchi, Moghaddam and Cheriet, 2014), which uses phase congruency features (denoising to preserve the phase information, and a phase congruency map to detect the edges of the text as well as remove background noise); in the post-processing step, adaptive Gaussian and median filters are used to improve the quality of the text. Peng et al. proposed an MRF based binarization algorithm (Peng, Setlur, Govindaraju and Sitaram, 2010) in which a non-linear function is used to enhance the quality of the document image, edge potentials are then used to preserve the strokes of the foreground text while removing noise, and intensity features are used to smooth the entire document image. (Bukhari, Shafait and Breuel, 2009) uses two different sets of parameters for foreground and background regions: a rough estimate of the foreground regions is obtained by ridge detection, and this information is used to calculate an appropriate threshold using a different set of free parameters. (Gatos, Pratikakis and Perantonis, 2008) combines several state-of-the-art binarization methodologies with the efficient incorporation of edge information from the gray scale source image; an enhancement step based on mathematical morphology operations is also involved in order to produce a high quality result while preserving stroke information.
(Gatos, Pratikakis and Perantonis, 2006) proposed a combination of global and local adaptive binarization that is limited to binarizing handwritten document images only. (Moghaddam and Cheriet, 2010) proposed a multi-scale binarization framework in which the input document is binarized several times at different scales and the results are combined to form the final output image. (Howe, 2011) presents a Markov Random Field (MRF) model for the binarization of seriously degraded manuscripts; depending on the available information, the model parameters (clique potentials) are learned from training data, and an expectation maximization (EM) algorithm is used to extract text and background features. However, many state-of-the-art methods are suited only to document images that suffer from certain specific types of degradation or have certain specific characteristics; the above methods cannot give satisfactory performance on all types of documents and degradations.

2. Proposed Method:

The proposed system applies the following operations to the captured images: 1. adaptive thresholding, 2. phase congruency map and 3. post-processing.

Adaptive Thresholding:

In this step, the foreground text information is preserved using Bradley's adaptive thresholding method (Bradley and Roth, 2007), which has been found to perform well on images with non-uniform illumination. It compares each image pixel with the average of its surrounding pixels: a pixel whose value is lower than the average is set to black, and the remaining pixels are set to white. The main problem with such methods is that the neighborhood samples are not evenly distributed in all directions. To overcome poor contrast and non-uniform illumination, Bradley computes the integral image prior to binarization, as it reduces the effect of illumination changes and adjusts the contrast.
The M × M average of the integral image is then computed, and thresholding is applied to binarize the document image. The integral image is calculated from the input image using

I(x, y) = f(x, y) + I(x−1, y) + I(x, y−1) − I(x−1, y−1)

where f(x, y) is the input image and I(x, y) is the integral image, obtained as the sum of all f(x, y) terms to the left of and above the pixel (x, y), as shown in Fig. 1. Once the integral image is computed, the sum of all the pixels in an M × M window is calculated using

SUM = I(x2, y2) − I(x2, y1−1) − I(x1−1, y2) + I(x1−1, y1−1)

where (x1, y1) is the upper left corner and (x2, y2) is the lower right corner of the window; the value of M is chosen as 1/8 of the image width. Thresholding is then applied based on the value of T calculated as

T = SUM × (1 − t/100), with 2 < t < 20

The resulting images are shown in Fig. 1.
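The integral-image thresholding above can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation: the function name, the t = 15 default and the per-pixel loop are assumptions for clarity (a practical version would vectorize the window sums).

```python
import numpy as np

def bradley_threshold(img, t=15):
    """Binarize a grayscale image with the integral-image method.

    img: 2-D uint8 array; t: percentage in (2, 20). The window size M
    is 1/8 of the image width, as in the paper.
    """
    h, w = img.shape
    M = max(1, w // 8)
    half = M // 2
    # Integral image: I(x,y) = f(x,y) + I(x-1,y) + I(x,y-1) - I(x-1,y-1)
    integral = img.astype(np.float64).cumsum(axis=0).cumsum(axis=1)
    # Pad with a zero row/column so border window sums stay valid
    I = np.pad(integral, ((1, 0), (1, 0)))
    out = np.zeros_like(img, dtype=np.uint8)
    for y in range(h):
        y1, y2 = max(0, y - half), min(h - 1, y + half)
        for x in range(w):
            x1, x2 = max(0, x - half), min(w - 1, x + half)
            count = (y2 - y1 + 1) * (x2 - x1 + 1)
            # SUM = I(x2,y2) - I(x2,y1-1) - I(x1-1,y2) + I(x1-1,y1-1)
            s = I[y2 + 1, x2 + 1] - I[y1, x2 + 1] - I[y2 + 1, x1] + I[y1, x1]
            # Foreground (black) if the pixel is t% darker than the window mean
            out[y, x] = 0 if img[y, x] * count <= s * (1 - t / 100.0) else 255
    return out
```

Comparing `pixel * count` with `SUM * (1 − t/100)` avoids a division per pixel while being equivalent to comparing the pixel against the windowed mean.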

Fig. 1: Results of adaptive thresholding: a) input image, b) integral image, c) M × M average image, d) after thresholding.

Phase Congruency Map:

In this step, the background noise in the document image is removed using an edge map obtained from the input image. Gradient based methods for edge and corner detection do not perform well on images with non-uniform illumination and poor contrast, and their threshold selection is very sensitive to image illumination and contrast. To minimize these problems, a feature is chosen that is maximally invariant to illumination and scale. In this work we use phase congruency features based on Peter Kovesi's model (Kovesi, 1999), a robust method for detecting edges and corners. Phase congruency's robustness to image variations stems from its multi-scale, multi-orientation approach: it uses a bank of 2D log-Gabor filters at different scales. A binary image is then formed by choosing a global threshold, obtained with Otsu's global thresholding method (Otsu, 1979).

The phase congruency map is obtained using Kovesi's model as follows. Let E_n and O_n denote the even and odd symmetric log-Gabor wavelets at scale n. Convolving them with the image gives

R_n(x) = (f * E_n)(x)
I_n(x) = (f * O_n)(x)

where R_n(x) and I_n(x) are the real and imaginary parts of the complex-valued log-Gabor wavelet response. The local phase φ_n(x) and local amplitude A_n(x) are computed from the wavelet responses as

φ_n(x) = arctan2(I_n(x), R_n(x))
A_n(x) = sqrt(R_n(x)² + I_n(x)²)

The weighting function W(x) is defined as

W(x) = 1 / (1 + exp(γ(C − S(x))))

S(x) = (1/N) Σ_n A_n(x) / A_max(x)

where S(x) is the fractional measure of spread, N is the total number of filter scales, A_max(x) is the amplitude of the filter pair with maximum response, C is the cut-off value and γ is the gain factor.
The phase deviation function is given by

ΔΦ_n(x) = cos(φ_n(x) − φ̄(x)) − |sin(φ_n(x) − φ̄(x))|

where φ̄(x) is the mean phase angle. The local energy E(x) is given by

E(x) = Σ_n A_n(x) ΔΦ_n(x)

and the phase congruency PC(x) is given by

PC(x) = W(x) E(x) / Σ_n A_n(x)

Phase congruency is very sensitive to noise, so the noise is modeled with a Rayleigh distribution; with μ_R and σ_R the mean and standard deviation of that distribution, the noise threshold is given by

T = μ_R + k σ_R

The phase congruency equation is modified with a small constant ε, to avoid division by zero, as

PC(x) = W(x) max(E(x) − T, 0) / (Σ_n A_n(x) + ε)

The phase congruency map is shown in Fig. 2.

Post-Processing:

In this step, the resulting images from the previous steps are combined to obtain the illumination corrected document image, and its quality is further improved using a morphological opening operation, as shown in Fig. 2d. Opening is performed on the binary image with a disk shaped structuring element whose radius is chosen with respect to the document image: if the resulting image has many gaps, a radius of 3 or 4 is used; for a moderate quality image, a radius of less than 3 may be chosen. The morphological opening operation is an erosion followed by a dilation using the same structuring element.

Fig. 2: Results from phase congruency: a) phase congruency map, b) its binarization, c) its inverse, d) output image after opening.

3. Results:

The proposed method was tested on different machine printed and handwritten document images captured with a mobile camera. The captured images are generally non-uniformly illuminated, with shadows and poor contrast, as shown in Fig. 3. The performance of the proposed method is evaluated against several state-of-the-art methods on a camera captured document image, as shown in Fig. 4.

A. Otsu's method:

The result obtained using Otsu's method (Otsu, 1979) is shown in Fig. 4b. Because a single threshold is selected for the whole image, most of the degraded parts are set to zero and the text information in those places is lost; the method is also unsuitable for some types of degradation.
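For reference, Otsu's global threshold used in this comparison can be sketched as below; the function name and this histogram-based formulation are illustrative, and the threshold maximizes the between-class variance over all candidate gray levels.

```python
import numpy as np

def otsu_threshold(img):
    """Return Otsu's global threshold for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()               # gray-level probabilities
    omega = np.cumsum(p)                # class-0 probability omega(t)
    mu = np.cumsum(p * np.arange(256))  # cumulative mean mu(t)
    mu_t = mu[-1]                       # global mean
    # Between-class variance: (mu_T*omega - mu)^2 / (omega*(1 - omega))
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan          # skip degenerate splits
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b2))
```

Pixels above the returned threshold are classified as one class and the rest as the other; on a non-uniformly illuminated document a single such global split is exactly what loses the text in the degraded regions.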
B. Niblack's method:

This method assumes that pixels whose values are greater than the threshold defined below belong to the object, and the rest are background (Niblack, 1986):

T = µ + K σ

where µ is the local mean, σ is the local standard deviation and K is a constant, assumed here to be −0.1. The result obtained with this method is shown in Fig. 4c.
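A minimal sketch of Niblack's rule, assuming a square sliding window for the local statistics; the window size of 25 is a hypothetical default, not a value given in the paper.

```python
import numpy as np
from scipy import ndimage

def niblack_threshold(img, window=25, k=-0.1):
    """Niblack binarization with local threshold T = mu + k*sigma.

    k = -0.1 as assumed in the paper; `window` is an illustrative default.
    """
    img = img.astype(np.float64)
    # Local mean and standard deviation via box filters
    mu = ndimage.uniform_filter(img, size=window)
    mu2 = ndimage.uniform_filter(img ** 2, size=window)
    sigma = np.sqrt(np.maximum(mu2 - mu ** 2, 0.0))
    T = mu + k * sigma
    # Pixels above the local threshold are object (white), rest background
    return (img > T).astype(np.uint8) * 255
```

The `E[X²] − (E[X])²` identity used for `sigma` keeps the computation to two box filters instead of an explicit per-window loop.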

Fig. 3: Results of the proposed method: a) machine printed document image and its result; b) handwritten document image and its result.

C. Sauvola's method:

The threshold used in this method (Sauvola and Pietikäinen, 2000) is defined by

T = µ (1 + k (σ/R − 1))

where µ is the local mean, σ is the local standard deviation, R is the dynamic range of the standard deviation and k is a constant, assumed here to be −0.005. The result obtained is shown in Fig. 4e.

D. Bradley's method:

This method is similar to Sauvola's but computes only the local mean; where needed, the variance can be obtained from the identity

Var(X) = E[X²] − (E[X])²

It saves almost 50% of the time compared to the other methods listed above and provides better results, as shown in Fig. 4d. However, none of these methods can remove all types of degradation; the proposed method provides better results even when the document image is affected by degradations such as non-uniform illumination, shadows, poor contrast and blur.
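Sauvola's threshold above can be sketched in the same sliding-window style. The window size and the R = 128 dynamic range are illustrative assumptions; note that the paper assumes k = −0.005, while Sauvola's original formulation uses a positive k (0.5 is used as the default here).

```python
import numpy as np
from scipy import ndimage

def sauvola_threshold(img, window=25, k=0.5, R=128.0):
    """Sauvola binarization with T = mu * (1 + k*(sigma/R - 1)).

    R is the dynamic range of the standard deviation (128 for 8-bit
    images); window and k are illustrative defaults.
    """
    img = img.astype(np.float64)
    # Local mean and standard deviation via box filters
    mu = ndimage.uniform_filter(img, size=window)
    mu2 = ndimage.uniform_filter(img ** 2, size=window)
    sigma = np.sqrt(np.maximum(mu2 - mu ** 2, 0.0))
    T = mu * (1 + k * (sigma / R - 1))
    return (img > T).astype(np.uint8) * 255
```

Scaling the mean down where the local contrast (σ/R) is low makes Sauvola more conservative than Niblack in flat background regions, which is why it tends to leave less speckle noise there.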

Fig. 4: Results of different state-of-the-art methods: a) input image; b) Otsu's method; c) Niblack's method; d) Bradley's method; e) Sauvola's method; f) proposed method.

4. Conclusion:

In this paper we presented a new method for illumination correction in camera captured document images. The foreground text information is preserved by adaptive thresholding using the integral image, and the background noise is removed using the binarized phase congruency map. A morphological opening operation is then used to improve the quality of the resulting image. By individually enhancing the text regions and removing the background noise, the method performs well in removing illumination variation from document images.

REFERENCES

Bradley, D. and G. Roth, 2007. Adaptive thresholding using the integral image, J. Graph. Tools, 12(2): 13-21.
Bukhari, S.S., F. Shafait and T.M. Breuel, 2009. Adaptive binarization of unconstrained hand-held camera-captured document images, Journal of Universal Computer Science, 15(18): 3343-3363.
Gatos, B., I. Pratikakis and S. Perantonis, 2006. Adaptive degraded document image binarization, Pattern Recognition, 39(3): 317-327.
Gatos, B., I. Pratikakis and S. Perantonis, 2008. Improved document image binarization by using a combination of multiple binarization techniques and adapted edge information, in Proc. ICPR, pp: 1-4.
Howe, N., 2011. Document image binarization using Markov field model, in Proc. ICDAR, pp: 6-10.
Kovesi, P., 1999. Image features from phase congruency, Videre: J. Comput. Vis. Res., 1(3): 1-26.
Moghaddam, R.F. and M. Cheriet, 2010. A multi-scale framework for adaptive binarization of degraded document images, Pattern Recognition, 43(6): 2186-2198.
Moghaddam, R.F. and M. Cheriet, 2012. AdOtsu: An adaptive and parameterless generalization of Otsu's method for document image binarization, Pattern Recognition, 45(6): 2419-2431.
Nafchi, H.Z., R.F. Moghaddam and M. Cheriet, 2014. Phase-based binarization of ancient document images: Model and applications, IEEE Trans. Image Process., 23(7): 2916-2930.
Niblack, W., 1986. An Introduction to Digital Image Processing, Englewood Cliffs, NJ, USA: Prentice-Hall.
Ntirogiannis, K., B. Gatos and I. Pratikakis, 2014. A combined approach for the binarization of handwritten document images, Pattern Recognition, 35: 3-15.
Otsu, N., 1979. A threshold selection method from gray-level histograms, IEEE Trans. Syst., Man, Cybern., 9(1): 62-66.
Peng, X., S. Setlur, V. Govindaraju and R. Sitaram, 2010. Markov random field based binarization for hand-held devices captured document images, in Proc. ICVGIP, pp: 71-76.
Saranraj, S. and N. Venkateswaran, 2014. Enhancement of mobile camera captured document image with phase preservation, in Proc. ICCNT, pp: 68-73.
Sauvola, J. and M. Pietikäinen, 2000. Adaptive document image binarization, Pattern Recognition, 33(2): 225-236.