An Integrated Tool for Virtual Restoration of Byzantine Icons


Anastasios Maronidis, Chrysanthos Voutounos and Andreas Lanitis
Visual Media Computing Lab, Dept. of Multimedia and Graphic Arts
Cyprus University of Technology, P.O. Box 50329, 3036 Lemesos, Cyprus
{anastasios.maronidis, c.voutounos, andreas.lanitis}@cut.ac.cy

Abstract

An integrated tool that can be used for damage detection, shape restoration and texture restoration of faces appearing in Byzantine icons is presented. The damage detection process involves the estimation of residuals obtained after the coding and reconstruction of face image regions using trained Principal Component Analysis (PCA) texture models. Shape restoration is accomplished using a model-based approach that employs a 3D shape model generated by taking into account a set of geometrical rules adopted by Byzantine-style iconographers. Texture restoration is performed using a customized version of the recursive PCA technique. For this purpose, dedicated PCA texture models representing different categories of faces appearing in icons are used. All methods developed as part of the project are incorporated into a user-friendly application, which can be utilized by both amateurs and professionals. Indicative visual and quantitative results show the potential of the developed application.

Keywords: Occlusion Detection; Shape Restoration; Texture Restoration; Icon Restoration; Cultural Heritage Preservation

I. INTRODUCTION

Byzantine art refers to the artistic style associated with the Byzantine Empire. A large number of Byzantine icons and frescoes showing different Saints, dating back to the 15th century, can be found in churches and monasteries in Eastern Europe. However, on many occasions Byzantine icons of high historical importance are obscured by several types of distortion, caused either by deliberate human actions or by physical causes such as humidity, high temperatures, earthquakes, etc. (see Fig. 1).

Figure 1. Examples of damaged icons.
In this paper we present an integrated application that can be used for the digital restoration of faces appearing in icons. The proposed methodology focuses on the restoration of facial regions in icons; restoration of the remaining parts can be performed using conventional image retouching methods. The proposed work involves the development of digital image processing techniques that can be used for predicting the overall appearance of faces appearing in icons, even in cases where such faces are highly damaged. The anticipated results of our work will enable the generation of digitally restored icons, leaving the original icons unaffected. Our work in this area contributes towards the virtual preservation of Byzantine cultural heritage.

The proposed application consists of several modules: landmark annotation, shape restoration, damage detection and texture restoration. The landmark annotation module enables the location of a number of visible landmarks on the raw face using a semi-automatic method. The remaining landmarks, which correspond to the damaged facial regions, are recovered by the shape restoration module. The damage detection module takes the shape-recovered face as input and locates the position and extent of the damage. Finally, the damage-detected icon constitutes the input to the texture restoration module, where the texture of the damaged regions is recovered so that the final restored icon is synthesized. The main modules described above are depicted in bold in the block diagram provided in Fig. 2. The underlying processes related to each of them are also shown in Fig. 2 and are described in the remainder of this paper. In our previous work [1], [2], different aspects of each module were presented and tested. Based on the results of our previous experimentation in the area, in this paper we present improved versions of the algorithms involved.
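The module pipeline described above (annotation, shape restoration, damage detection, texture restoration) can be sketched as a simple orchestration in which each stage consumes the previous stage's output. All function names and the icon representation here are hypothetical placeholders, not the authors' API; the real modules are described in Sections III-V.

```python
# Hypothetical sketch of the four-stage restoration pipeline.
# An "icon" is modeled as a plain dict for illustration only.

def annotate_landmarks(icon):
    # Semi-automatic annotation: collect the landmarks still visible.
    return {"visible_landmarks": list(icon.get("visible", []))}

def restore_shape(icon, annotation):
    # Recover the landmarks hidden by damage (Section III).
    recovered = annotation["visible_landmarks"] + list(icon.get("hidden", []))
    return {"landmarks": recovered}

def detect_damage(icon, shape):
    # Locate the position and extent of the damage (Section IV).
    return {"damage_mask": set(icon.get("damage", set()))}

def restore_texture(icon, shape, damage):
    # Recover the texture of the damaged regions (Section V).
    restored = dict(icon)
    restored["damage"] = set()  # all damaged pixels are filled in
    return restored

def restore_icon(icon):
    annotation = annotate_landmarks(icon)
    shape = restore_shape(icon, annotation)
    damage = detect_damage(icon, shape)
    return restore_texture(icon, shape, damage)
```

Each stage is independent, which mirrors the block diagram of Fig. 2: a stage can be swapped out (e.g. manual annotation instead of semi-automatic) without changing the rest of the chain.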
In addition, we present the integration of all modules into a single tool that can be used either by trained conservators or by other individuals interested in icon restoration, such as painters, art historians and museum curators. Our work in this area was influenced by the practice of professional Byzantine icon conservators, which was observed and studied during the process of designing the restoration tool. In addition, our work was influenced by previous research efforts in the area of detecting and eliminating occlusions in human face images [3], [4]. However, because faces in Byzantine icons are governed by unique geometrical and chromatic rules, human face image processing algorithms had to be customized to deal with the unique case of the Byzantine style. To the best of our knowledge, this is the first time that a customized integrated tool for Byzantine icon restoration is presented.

The remainder of this paper is organized as follows: in Section II a review of the related bibliography is presented. The shape restoration, damage detection and texture restoration modules are described in Sections III, IV and V, respectively. The results of validation experiments are presented in Section VI. In Section VII the integrated tool that combines all the modules is described, and in Section VIII conclusions and plans for future work are outlined.

Figure 2. Block diagram of the main modules along with the underlying processes related to them.

II. LITERATURE REVIEW

In this section, a brief literature review of previous work on facial occlusion detection and facial texture restoration is presented. Park et al. [3] and Wang et al. [4] describe a method for removing glasses based on the recursive Principal Component Analysis (PCA) method. The occluded regions are enhanced by processing the difference between a PCA-reconstructed and the original image. In an iterative procedure, the occluded regions detected in the difference image are replaced by the corresponding pixels of the mean image of the training set, until the eyeglasses are removed. Colombo et al. [5] address the problem of occlusion detection and appearance restoration of 3D faces. Occlusions are coarsely detected by examining the local curvature of the 3D face model, and a variation of the recursive PCA method is then employed for restoring the appearance of the occluded facial regions. A method based on a 2D morphable face model is proposed in [6] for reconstructing a partially occluded face. Using the undamaged region, the PCA coefficients that minimize the shape and texture difference between an out-of-sample face image and the basis images of the undamaged region are estimated. The optimal coefficients are applied to the occluded regions of the basis images for restoring the occluded facial regions. Occlusion detection methods have also been used for face recognition. In a probabilistic framework, Smet et al. [7] attempt to fit a 3D morphable model to the occluded 2D face image while minimizing the interference of occlusions by using a visibility map.
In [8], a separate eigenspace is constructed for different parts of the face. The subspace that represents the localization error of the face is modeled using a mixture of Gaussians. The Mahalanobis distance associated with this mixture of Gaussians is then used to find the best local match. In [9], Support Vector Machines are used for classifying skin color and non-skin color in a face, allowing in that way the detection of occluded regions. Kim et al. [10] also utilize an occlusion detection method based on skin-color classification. In a Bayesian framework, Venkat et al. [11] decompose the face image into sub-regions. For each sub-region, they learn its conditional probability of influencing the face recognition task, allowing in that way the recognition of occluded faces. In [12], a Gabor local feature representation combined with sparse-representation-based classification is used for face recognition. Moreover, an associated Gabor occlusion dictionary computing technique is proposed for dealing with the occluded face recognition problem.

III. SHAPE RESTORATION

The 2D shape of an icon is defined by a set of 68 landmark points. When an icon is damaged, its shape is represented by the coordinates of the landmarks corresponding to the non-damaged regions. Restoring the shape of an icon involves the recovery of the positions of the landmarks that correspond to the damaged regions. A key aspect of our work is the use of a 3D deformable face shape model suitable for representing the geometry of Byzantine faces, the so-called Byzantine Style Specific Model (BSSM). The use of this model enables the shape restoration of a raw icon in a way that guarantees the preservation of the Byzantine style. The main problem in generating a BSSM is that Byzantine faces are depicted only in 2D icons. Hence, it is not possible to train a BSSM using a set of real 3D samples.
Instead, a generic 3D deformable human face model is tuned toward the representation of Byzantine faces. The approach adopted for training a BSSM is outlined in the following subsections.

A. Generic Statistical 3D Face Model

A training set of 60 laser-scanned 3D human face instances is used for training a generic 3D deformable model [13]. The trained model serves as the basis for a reversible face shape coding scheme, where it is possible to code 3D face shapes into a small number of model parameters (about 50) and to generate novel 3D face shapes by setting values for the model parameters.

B. Byzantine Specific Statistical 3D Model

A BSSM is generated by enforcing rule-based constraints on the model trained using human faces. In this study, a set of 23 geometric Byzantine iconography rules acquired from the related literature [14] forms the set of rules used for applying constraints to the human face model. The 23 rules were chosen because they provide an adequate description of the geometric form adopted in Byzantine icons. Some indicative rules are given below:

Rule 1: The distance between the lower and the upper edge of the eye is equal to one third of the nose length.
Rule 2: The length of the mouth is in the range between one third and one half of the nose length.
Rule 3: The nose width from wing to wing, the distance between the two upper eyelashes and the distance between the inner edges of the eyes are equal.

For each of the 23 geometric rules, a discrepancy metric is defined based on distances between vertices associated with the rule, so that, given a 3D face, it is possible to estimate the deviation of the face from the rule considered.
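A per-rule discrepancy of the kind described above can be illustrated with Rule 1 (eye height equals one third of the nose length). The relative-error form and the four-landmark parameterization are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

def distance(p, q):
    """Euclidean distance between two 3D vertices."""
    return float(np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)))

def rule1_discrepancy(eye_top, eye_bottom, nose_top, nose_bottom):
    """Deviation from Rule 1: eye height should equal nose length / 3.

    Returned as a relative error, so 0.0 means the rule holds exactly.
    (Illustrative form; the paper does not give the exact metric.)
    """
    eye_height = distance(eye_top, eye_bottom)
    target = distance(nose_top, nose_bottom) / 3.0
    return abs(eye_height - target) / target

def total_discrepancy(rule_values, weights):
    """Weighted sum of the per-rule discrepancies (23 rules in the paper)."""
    return float(np.dot(rule_values, weights))
```

A 3D face sample can then be scored by evaluating all 23 rule discrepancies on its vertices and combining them with `total_discrepancy`, as described in the next paragraphs.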

Along these lines, the total discrepancy is estimated as the weighted sum of the discrepancies of each of the 23 rules considered. The weights used during the calculation of the total discrepancy were estimated by taking into account the specificity of each rule to the Byzantine icon style. During the process of weight estimation, a dataset of 39 faces extracted from Byzantine icons and a dataset of 76 human faces were used. The images of human faces used in the experiment were selected so that they display facial expressions and poses similar to the ones encountered in Byzantine icons. All faces from both datasets were annotated with 68 landmark points and 3D reconstructed using the generic statistical 3D model. For each 3D Byzantine face we calculated its discrepancy for every rule. We then collected these discrepancies and estimated their distributions across the Byzantine icon and human face training sets. For each rule, we calculated the Mahalanobis distance between the distributions of discrepancies of the Byzantine and human samples. The estimated distances between the datasets were used for establishing the weight of each rule, so that rules for which the discrepancy between human and Byzantine faces was maximized received more attention during the estimation of the total discrepancy.

In order to generate a BSSM, a number of random synthetic samples are generated by providing random parameter values to the generic 3D shape model. The total weighted discrepancy of each of these samples is calculated based on the Byzantine geometric rules, and the distribution of the weighted discrepancies is estimated. By setting a discrepancy threshold at the 3% left-side tail of the distribution, the samples are separated into those that comply and those that do not comply with the Byzantine rules. The Byzantine-compliant samples constitute the training set for training a BSSM.

C. Shape Restoration Using BSSM

The synthetic instance of the BSSM whose 3D landmarks, when projected to the 2D plane, best fit the visible 2D landmarks of the raw face comprises the 3D reconstruction of the face. The missing 2D landmarks are restored by projecting the corresponding 3D instance back to the 2D plane. According to the results of a quantitative evaluation [2], the use of a BSSM during the process of shape restoration ensures that the Byzantine style is preserved. An example of the shape restoration approach is illustrated in Fig. 7(b).

IV. DAMAGE DETECTION

Recovering the exact positions of the 68 2D landmarks via the shape restoration process enables the detection of the position and extent of the damage in the icon. In [15], a set of non-occluded and a set of artificially occluded faces are coded into PCA texture model parameters and reconstructed. The residual values between the original and the reconstructed images are calculated at each pixel for the two sets. Using the distributions of these residuals, each pixel is classified as occluded or non-occluded. The methodology presented in [15] has been extended and customized for detecting damage in Byzantine icons. Variations of the main methodology with respect to the size of the texture models, the sampling method and the classification approach have been investigated [1], so that the occlusion detection methods that perform best are used in the proposed icon restoration tool. In this section we provide a brief overview of the basic occlusion detection methodology and outline the results of a comparative experimental evaluation.

A. Basic Methodology

Given a dataset of non-damaged Byzantine faces, a texture PCA model is trained [16]. This model serves as a space within which a raw face can be represented as a vector that contains a set of parameters. Conversely, a vector that belongs to this space can be mapped to a face in the initial face space.
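The code-and-reconstruct scheme underlying both damage detection and texture restoration can be sketched with a plain PCA on vectorized, shape-normalized textures. This is a minimal numpy version for illustration, not the authors' implementation; the SVD route and the component count are assumptions:

```python
import numpy as np

def train_pca(textures, n_components):
    """Train a PCA texture model from row-vectorized training textures."""
    X = np.asarray(textures, dtype=float)
    mean = X.mean(axis=0)
    # Principal directions via SVD of the mean-centered data matrix.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]          # (mean texture, component rows)

def code(texture, mean, components):
    """Project a texture into the model parameter space."""
    return components @ (np.asarray(texture, dtype=float) - mean)

def reconstruct(params, mean, components):
    """Map model parameters back to the texture space."""
    return mean + components.T @ params
```

A texture similar to the training set survives the `code`/`reconstruct` round trip almost unchanged, while damaged regions, whose variation is absent from the training set, produce large per-pixel residuals; this is the evidence the detection phase exploits.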
The model can compactly code and reconstruct only non-damaged faces, since it has been trained using only such faces. When attempting to code and reconstruct a damaged face, where the damaged regions display textural variation not encountered in the training set, the reconstructed face should differ from the original. Therefore, the differences between the intensities of the original pixels and the corresponding reconstructed ones (i.e., the residuals) are expected to be larger in the damaged regions. This evidence is exploited in order to efficiently detect the damage on an icon. The basic method consists of the training and detection phases, outlined below.

1) Training Phase

The training phase requires a dataset of non-damaged Byzantine icons. For each icon, a set of 68 landmarks is first located in order to outline the shape of the faces in the training set. Using these landmarks, the face in each icon is warped into the mean shape, producing a corresponding shape-free face. Using the set of warped non-damaged faces, a PCA texture model is trained. In a next step, the non-damaged data samples are corrupted by overlaying occlusions in the form of uniform noise, Gaussian noise or a constant RGB value, producing a second dataset containing artificially occluded icons. For each dataset separately, the samples are coded into the texture PCA model parameters and subsequently reconstructed, and the residuals at each pixel are calculated. The above procedure results in two residual distributions per pixel: the non-damaged and the damaged pixel residual distribution.

2) Detection Phase

Given a damaged raw face along with the set of 68 2D landmarks, as recovered during the shape restoration process, the raw face is warped to the mean reference face extracted from the training phase.
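The per-pixel classification in the detection phase can be sketched as comparing each residual against the two residual distributions gathered during training. The Gaussian form of the per-pixel distributions and the likelihood-ratio decision are simplifying assumptions for illustration; the paper does not specify the exact pixel-level classifier:

```python
import numpy as np

def gaussian_logpdf(x, mean, std):
    """Elementwise log-density of a normal distribution."""
    return -0.5 * ((x - mean) / std) ** 2 - np.log(std * np.sqrt(2.0 * np.pi))

def classify_pixels(residuals, clean_stats, damaged_stats):
    """Label each pixel as damaged (True) or non-damaged (False).

    clean_stats / damaged_stats: per-pixel (mean, std) arrays describing
    the residual distributions estimated in the training phase.
    """
    r = np.asarray(residuals, dtype=float)
    ll_clean = gaussian_logpdf(r, *clean_stats)
    ll_damaged = gaussian_logpdf(r, *damaged_stats)
    # A pixel is flagged as damaged when its residual is more likely
    # under the damaged-pixel distribution.
    return ll_damaged > ll_clean
```

The resulting boolean mask marks the position and extent of the damage and becomes the input to the texture restoration stage.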
Based on the residual values obtained from the face code-reconstruction scheme, each pixel is classified to the damaged or non-damaged distribution, allowing in that way the detection of possible occlusions.

B. Modified Methodology

Several variations of the basic method presented above were evaluated experimentally in [1]. In particular, variations of the basic method related to the size of the local facial regions considered, the sampling method and the classification algorithm were considered. Variations related to the local region include test cases where the shape-free facial region is treated as a single region (Holistic approach), cases where the shape-free region is divided into multiple non-overlapping local patches, and cases where the shape-free region is divided into multiple overlapping local patches. Moreover, two different sampling schemes are investigated: the canonical and the interlaced scheme. Canonical sampling involves the extraction of all intensity values within the region of interest (ROI). The interlaced scheme decomposes the ROI into two pixel vectors, the odd and the even vector, and substitutes the odd pixels with the corresponding pixels of the mean local region derived from the training set. For classifying the pixels as damaged or non-damaged, a voting rule and a Support Vector Machine (SVM) are used. When the voting rule is used, a patch is classified as damaged or non-damaged according to the majority of the pixel labels. When an SVM is used, each patch is considered as a vector, and an SVM is trained using these vectors for all training samples. An out-of-sample patch is then assigned as damaged or non-damaged based on the corresponding trained SVM classifier.

The most efficient method for damage detection in Byzantine icons was determined through a comprehensive experimental evaluation. Within this framework, icons were artificially occluded with regions of different sizes and different intensities. The damage detection accuracy of each method was determined as the ratio of the number of correctly classified pixels over the total number of pixels in the facial region. According to the results, the Holistic approach utilizing the interlaced scheme outperforms the rest of the methods, with a damage detection accuracy of 92.6%.
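The interlaced sampling scheme described above can be sketched as follows: the vectorized ROI is split into odd- and even-indexed pixel vectors, and the odd pixels are replaced with the corresponding pixels of the mean training region. The zero-based odd/even indexing convention is an assumption:

```python
import numpy as np

def interlaced_sample(roi, mean_roi):
    """Replace the odd-indexed pixels of a vectorized ROI with the
    corresponding pixels of the mean local region from the training set.

    Only the even-indexed pixels then carry information from the raw
    icon, reducing the influence of localized damage on the PCA coding.
    """
    out = np.asarray(roi, dtype=float).copy()
    mean_roi = np.asarray(mean_roi, dtype=float)
    out[1::2] = mean_roi[1::2]
    return out

def voting_label(pixel_labels):
    """Majority vote over per-pixel damaged/non-damaged labels for a patch."""
    labels = np.asarray(pixel_labels, dtype=bool)
    return int(labels.sum()) * 2 > labels.size
```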
More details on the damage detection methods, the experimental evaluation and the results are presented in [1].

V. TEXTURE RESTORATION

Once the damaged pixels on a raw icon are detected, the texture restoration process is initiated by utilizing the recursive PCA technique [3], [4]. This technique combines the texture information from the non-damaged pixels of the face with the texture information of a trained PCA texture model in order to predict the texture of the damaged pixels. Texture restoration is performed on the 3D reconstructed face so that the proposed methods can be applied to faces of varying orientations. The restored 3D face is finally rendered onto the 2D plane.

A. Recursive PCA

A texture PCA model is trained using 101 3D-reconstructed non-damaged icons. This model forms the basis for the reversible coding of an arbitrary icon into a number of model parameters. During the application of the recursive PCA method, the training sample most similar to the raw icon is located by comparing the texture difference between the non-damaged regions of the raw icon and each data sample. The texture of the damaged regions of the raw icon is then replaced by the corresponding texture of the selected sample. This step guarantees a reasonable initialization of the restoration process. In a second step, the whole texture of the new icon is coded into the model parameters and reconstructed back to the initial face space. The difference between the textures of the non-damaged regions of the reconstructed and the original face is calculated. If this difference is minimized, the process is completed. Otherwise, the texture of the non-damaged regions of the resulting face is replaced by the texture of the original face and the process is repeated.

The basic methodology described in the previous subsection uses the whole set of icons for training a global texture PCA model.
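The recursive PCA loop described above can be sketched in a few lines of numpy, where `mask` marks the damaged pixels and `init` is the initial fill (e.g. the most similar training sample). The convergence test on the change between iterates and the tolerances are illustrative assumptions, not the authors' exact stopping criterion:

```python
import numpy as np

def recursive_pca_restore(texture, mask, mean, components,
                          init, max_iter=100, tol=1e-9):
    """Iteratively restore the pixels of a vectorized texture where mask is True.

    mean / components: a trained PCA texture model (mean vector and
    orthonormal component rows).
    """
    texture = np.asarray(texture, dtype=float)
    mask = np.asarray(mask, dtype=bool)
    current = np.where(mask, np.asarray(init, dtype=float), texture)
    for _ in range(max_iter):
        # Code the whole texture into model parameters and reconstruct.
        params = components @ (current - mean)
        recon = mean + components.T @ params
        # Keep the reconstruction only in the damaged region; the
        # non-damaged region is reset to the original pixels.
        updated = np.where(mask, recon, texture)
        if np.max(np.abs(updated - current)) < tol:
            return updated
        current = updated
    return current
```

With a texture lying in the model subspace, the damaged pixels converge geometrically toward the values consistent with the visible region, which is exactly the behavior the restoration module relies on.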
An alternative method that utilizes subsets of the training set containing the samples most similar to the given image is also implemented. Within this context, subsets showing different Saints are used for training models that are more specific to the given icon.

VI. VALIDATION EXPERIMENTS

We have conducted a series of experiments for validating the performance of the presented methodology for restoring Byzantine icons.

1) BSSM Validation

The suitability of a BSSM for shape restoration relies heavily on the efficient enforcement of Byzantine-style geometric rules that allow the estimation of the discrepancy of a 3D face shape from Byzantine faces. An experiment that aims to assess the specificity of the weighted discrepancy metric to Byzantine faces was staged. For each Byzantine and real face from the image sets described in Section III.B, we estimated both the mean weighted discrepancy (as defined in Section III.B) and the mean non-weighted discrepancy, that is, the sum of the discrepancy values over all 23 rules. In Fig. 3, the mean non-weighted and the mean weighted discrepancy distributions of the two categories of faces (i.e., Byzantine and human) are depicted.

Figure 3. Mean non-weighted discrepancy (left), mean weighted discrepancy (right).

Comparing the two diagrams, it is evident that the non-weighted discrepancy distributions largely overlap. In contrast, when the weighted discrepancy is employed, the two distributions are clearly discriminated. This implies that the trained BSSM shows increased specificity to Byzantine faces. Hence, it is suitable for enforcing the appropriate shape constraints during the shape restoration process.
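The rule-weighting scheme validated above can be sketched for a single rule: the better a rule separates the Byzantine and human discrepancy distributions, the larger its weight. A one-dimensional Mahalanobis-style separation with pooled variance is assumed here; the paper does not give the exact formula:

```python
import numpy as np

def rule_weight(byzantine_vals, human_vals):
    """Weight for one rule, proportional to how well its discrepancy
    values separate Byzantine faces from human faces.

    byzantine_vals / human_vals: per-sample discrepancies of this rule
    on the two training sets (39 Byzantine and 76 human faces in the paper).
    """
    b = np.asarray(byzantine_vals, dtype=float)
    h = np.asarray(human_vals, dtype=float)
    pooled_var = 0.5 * (b.var() + h.var())
    return abs(b.mean() - h.mean()) / np.sqrt(pooled_var)
```

A rule whose discrepancies are near zero on Byzantine faces but large on human faces receives a high weight, so it dominates the total weighted discrepancy, which matches the separation visible in the right panel of Fig. 3.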

2) Texture Restoration

In order to assess the texture restoration process, a training set containing 101 non-damaged Byzantine icons was used. The test set consists of 12 previously unseen Byzantine icons, which have been categorized into the following six groups with respect to the face depicted on them: i) Virgin Mary, ii) Jesus, iii) Female, iv) Old Male, v) Middle-aged Male and vi) Young Male. Each of the icons in the test set was artificially occluded on the following eight predefined regions: chin, left cheek, right cheek, left eye, right eye, moustache, mouth and nose. For restoring the icons, a texture PCA model was trained using the Byzantine icon training set in conjunction with the recursive PCA technique. The restored icons have been compared with several types of plausible restoration solutions, which include the replacement of the intensities in the occluded region with: the mean RGB values calculated from the training set, random RGB values drawn from a uniform distribution, and random RGB values drawn from a Gaussian distribution. In the case of the Gaussian distribution, the mean value and standard deviation of each pixel distribution were calculated from the corresponding pixel values of the training samples. Fig. 4 shows typical samples of the test cases considered.

VII. INTEGRATED APPLICATION

The methods for damage detection and restoration described in the previous sections have been incorporated into an integrated application intended to assist both amateur and professional restorers in their work. A screenshot of the interface is illustrated in Fig. 6. The final application deals in an integrated way with the landmark location, damage detection, shape restoration and texture restoration modules. Examples where all the presented functionalities of the tool have been applied are illustrated in Fig. 7.

Figure 6. Application interface.

Figure 4. Typical test samples of the texture validation experiment, where the left eye is restored.
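The baseline fills used for comparison in this experiment can be sketched as follows, with per-pixel statistics computed over the training set. The array layout (one row per training sample, one column per pixel) and the uniform range are illustrative assumptions:

```python
import numpy as np

def baseline_fill(training_pixels, kind, rng=None):
    """Fill values for an occluded region, per baseline used in the paper.

    training_pixels: array of shape (n_samples, n_pixels) holding the
    corresponding region across the training set.
    kind: "mean", "uniform" or "gaussian".
    """
    if rng is None:
        rng = np.random.default_rng(0)
    X = np.asarray(training_pixels, dtype=float)
    if kind == "mean":
        # Per-pixel mean over the training set.
        return X.mean(axis=0)
    if kind == "uniform":
        # Random values drawn uniformly from the training intensity range.
        return rng.uniform(X.min(), X.max(), size=X.shape[1])
    if kind == "gaussian":
        # Per-pixel Gaussian with training mean and standard deviation.
        return rng.normal(X.mean(axis=0), X.std(axis=0))
    raise ValueError(f"unknown baseline kind: {kind}")
```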
From left to right: occluded, uniform, Gaussian, mean, recursive PCA restored.

For each resulting restored icon, we calculated the average difference between the restored pixels and the original non-damaged ones. This value constitutes an error that indicates the similarity between the restored icon and the ground truth. The results of this experiment are illustrated in Fig. 5. The errors have been calculated separately for each group and for each region, using all types of occlusions. From Fig. 5 it is evident that in all cases recursive PCA returns the best results, while mean, Gaussian and uniform follow in this order. This result validates the superiority of recursive PCA over some naïve but reasonable texture restoration approaches.

Figure 5. Texture restoration errors for different groups (left) and different regions (right).

Figure 7. (a) Damaged icon, (b) Landmarks (red: visible, blue: restored), (c) Damage detection, (d) Restored icon.

Apart from the modules described earlier, the tool incorporates a special functionality for dealing with cases where the overall face in an icon is occluded. In this case, the user specifies the main features of the missing face (i.e., which Saint is shown, e.g. Jesus, Virgin Mary, etc.), so that all faces within the training set compatible with the given constraints are located. The facial region of each compatible face in the training set is extracted and rendered iteratively on the given image, so that the user can select the most viable match. During the process, both the face orientation and the color shading are optimized so that the best match is obtained. Fig. 8 shows an example of the overall face restoration process.

VIII. CONCLUSIONS

An integrated tool that can be used for the virtual restoration of faces portrayed in Byzantine icons is presented.
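The evaluation error described above can be sketched as the average per-pixel difference between the restored pixels and the ground-truth originals, computed over the occluded region only. The use of the absolute intensity difference is an assumption; the paper does not specify the exact distance:

```python
import numpy as np

def restoration_error(restored, ground_truth, mask):
    """Mean absolute per-pixel difference over the restored (masked) pixels.

    mask: boolean array, True for the pixels that were occluded and restored.
    """
    r = np.asarray(restored, dtype=float)
    g = np.asarray(ground_truth, dtype=float)
    m = np.asarray(mask, dtype=bool)
    return float(np.abs(r[m] - g[m]).mean())
```

Computing this error separately per group and per region, as in Fig. 5, simply means applying the function with the corresponding masks and averaging over the matching test icons.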
The proposed method utilizes learned shape and texture constraints associated with Byzantine faces in order to detect and eliminate damage within the facial region of faces in icons. The final implementation of the tool handles icons with both partially and totally damaged facial regions.

Figure 8. (a) Raw icon, (b)-(d) Indicative candidates, (e) Selected candidate, (f) Color adjusted selected candidate.

The experimental evaluation has demonstrated the effectiveness of using a BSSM for preserving the Byzantine style during shape restoration. The superiority of the recursive PCA technique over a number of other viable solutions has also been validated through quantitative analysis. Indicative visual examples of applying the presented tool to genuinely damaged icons show the potential of the proposed application in real cases.

Currently, we are working to extend the tool so that it enables the interactive combination of human and machine expertise in order to optimize the icon restoration and conservation process. Along these lines, the system will interactively provide clues and suggestions to restorers, facilitating the task of icon conservation and/or restoration. We also plan to stage extended quantitative performance evaluations along with usability tests, in order to ensure both that the performance of the system is adequate and that the system is easy and efficient to use. For this purpose, the application will be evaluated by professional Byzantine icon conservators and trained iconographers. Early results of the user evaluation process, which is currently under way, reveal positive feedback from the participants regarding the usability, functionality and usefulness of the system. In the future, we also plan to adapt and apply the techniques employed to other similar problem domains associated with the virtual restoration of 2D or 3D cultural heritage artifacts. For example, we are in the process of applying our methods to the problem of restoring fragmented terracotta figurines excavated from archeological sites [17].
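The color-shading adjustment applied to a selected candidate face (Section VII, Fig. 8(f)) is not specified in detail in this paper. One simple way such an adjustment can be realized is per-channel mean/standard-deviation matching of the candidate face against the surrounding icon region; the sketch below is our own illustrative assumption, not the tool's actual implementation, and assumes 8-bit RGB arrays.

```python
import numpy as np

def match_color_stats(candidate, target):
    """Shift each RGB channel of the candidate face so that its mean and
    standard deviation match those of the target icon region."""
    out = np.empty_like(candidate, dtype=float)
    for c in range(candidate.shape[-1]):
        src = candidate[..., c].astype(float)
        dst = target[..., c].astype(float)
        scale = dst.std() / (src.std() + 1e-8)     # avoid division by zero
        out[..., c] = (src - src.mean()) * scale + dst.mean()
    return np.clip(out, 0.0, 255.0)                # keep valid 8-bit range
```

Because the transform is linear per channel, the adjusted candidate inherits the target region's first- and second-order color statistics, which is often enough to blend a rendered face into the tonal range of the surrounding icon.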
ACKNOWLEDGMENT

This work was supported by the Cyprus Research Promotion Foundation and the European Union Structural Funds (project ΤΠΕ/ΠΛΗΡΟ/0609(ΒΙΕ)/05). We would also like to thank Dr. D. Demosthenous and Mr. C. Karis for their valuable help.

REFERENCES

[1] A. Maronidis and A. Lanitis, An automated methodology for assessing the damage on Byzantine icons, in M. Ioannides et al. (Eds.): International Conference on Cultural Heritage EuroMed 2012, LNCS 7616, pp. 320-329, 2012.
[2] A. Lanitis, G. Stylianou and C. Voutounos, Virtual restoration of faces appearing in Byzantine icons, Journal of Cultural Heritage, Elsevier (doi:10.1016/j.culher.2012.01.001), 2012.
[3] J.-S. Park, Y. Oh, S. Ahn and S.-W. Lee, Glasses removal from facial image using recursive PCA reconstruction, LNCS 2688, pp. 369-376, 2003.
[4] Z. M. Wang and J. H. Tao, Reconstruction of partially occluded face by fast recursive PCA, International Conference on Computational Intelligence and Security Workshops, Harbin, December 15-19, 2007.
[5] A. Colombo, C. Cusano and R. Schettini, Three-dimensional occlusion detection and restoration of partially occluded faces, Journal of Mathematical Imaging and Vision, 40, pp. 105-119, 2011.
[6] B. W. Hwang and S. W. Lee, Reconstruction of partially damaged face images based on a morphable face model, IEEE Transactions on Pattern Analysis and Machine Intelligence, 25, pp. 365-372, 2003.
[7] M. De Smet, R. Fransens and L. Van Gool, A generalized EM approach for 3D model based face recognition under occlusions, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1423-1430, 2006.
[8] A. M. Martinez, Recognizing imprecisely localized, partially occluded, and expression variant faces from a single sample per class, IEEE Transactions on Pattern Analysis and Machine Intelligence, 24, pp. 748-763, 2002.
[9] G. S. Kumar, P. Reddy, M. S. Swamy and S. Gupta, Skin based occlusion detection and face recognition using machine learning techniques, International Journal of Computer Applications, 41, pp. 11-15, 2012.
[10] G. Kim, J. K. Suhr, H. G. Jung and J. Kim, Face occlusion detection by using B-spline active contour and skin color information, 11th International Conference on Control, Automation, Robotics and Vision (ICARCV), pp. 627-632, 2010.
[11] I. Venkat, A. T. Khader, K. G. Subramanian and P. De Wilde, Recognizing occluded faces by exploiting psychophysically inspired similarity maps, Pattern Recognition Letters (doi:10.1016/j.patrec.2012.05.003), 2012.
[12] M. Yang and L. Zhang, Gabor feature based sparse representation for face recognition, in ECCV, pp. 448-461, 2010.
[13] V. Blanz and T. Vetter, Face recognition based on fitting a 3D morphable model, IEEE Transactions on Pattern Analysis and Machine Intelligence, 25, pp. 1063-1074, 2003.
[14] I. C. Vranos, H techniki tis agiographias (The Technique of Icon Painting), P. S. Pournaras (in Greek), Thessaloniki, 2001.
[15] A. Lanitis, Person identification from heavily occluded face images, Proceedings of the 2004 ACM Symposium on Applied Computing, pp. 5-9, 2004.
[16] G. J. Edwards, A. Lanitis, C. J. Taylor and T. F. Cootes, Statistical face models: improving specificity, Image and Vision Computing, 16 (3), pp. 203-211, 1998.
[17] G. Papantoniou, F. Loizides, A. Lanitis and D. Michaelides, Digitization, restoration and visualization of terracotta figurines from the House of Orpheus, Nea Paphos, Cyprus, in M. Ioannides et al. (Eds.): International Conference on Cultural Heritage EuroMed 2012, LNCS 7616, pp. 543-550, 2012.