
Journal of Image and Graphics, Volume 2, No. 2, December 2014
doi: 10.12720/joig.2.2.117-122

Enhanced Still 3D Integral Images Rendering Based on Multiprocessor Ray Tracing System

M. G. Eljdid, Computer Sciences Department, Faculty of Information Technology, Tripoli University, Libya. Email: meljdid@hotmail.com
A. Aggoun, Computer Science and Technology Department, University of Bedfordshire, United Kingdom. Email: amar.aggoun@beds.ac.uk
O. H. Youssef, Faculty of Computer Sciences, October University for Modern Sciences and Arts, Cairo, Egypt. Email: ohassan@msa.eun.eg

Manuscript received July 4, 2014; revised November 5, 2014.

Abstract: The main purpose of this paper is to introduce a 3D integral imaging interpolation method that overcomes the 3D missing-information problem occurring between cylindrical lenses (micro-images), where a new cylindrical lens reveals an area occluded in the previous one, in order to generate a single photo-realistic 3D integral imaging frame. The method is based on a multiprocessor ray tracer containing a 3D integral imaging parser, a 3D integral camera model, a 3D integral imaging renderer, spatial coherence, and 3D scene transformations. It exploits the spatial coherence within the micro-images of an integral image: information from one micro-image is used to produce the adjacent micro-image on the right without fully ray tracing the new micro-image. Both objective and subjective micro-image quality are assessed by comparing a fully ray traced still 3D integral image with one generated using the proposed method. Consequently, an increase in the speed of generating still 3D integral images is achieved. Experiments show that a significant reduction in execution time is obtained with the 3D integral interpolation process, with savings of 57%-66% in ray-tracing time compared to fully ray tracing the missing pixels, depending on the complexity of the scene.

Index Terms: computer graphics, 3D integral image generation, 3D interpolation, spatial coherence, geometric enhancement, 3DTV

I. INTRODUCTION

The method is introduced to overcome the missing-information (visual artefact) problem encountered in the generation of photo-realistic still 3D integral images, and consequently to reduce the execution time required for their generation. An interpolation algorithm is adopted to compute the 3D missing information using information from the neighbouring cylindrical-lens micro-images on each side of the current micro-image. This avoids the need to ray trace all of the missed pixels in the new micro-image, which noticeably reduces the number of computations and hence accelerates execution, while only slightly affecting 3D integral image quality.

Integral imaging is attracting a lot of attention in recent years and has been regarded as a strong candidate for next-generation 3D TV [1]-[7].

Figure 1. Lenticular sheet model in the integral ray tracer.

II. COMPUTER GENERATED 3D INTEGRAL IMAGES

Computer generation of integral images has been reported in several works [8]-[11]. A computer-generated synthetic 3D integral image is presented as a two-dimensional distribution of intensities termed a lenslet-encoded spatial distribution (LeSD), which is ordered directly by the parameters of the decoding array of micro-lenses used to replay the three-dimensional synthetic image. When viewed, the image exhibits continuous parallax within a viewing zone dictated by the field angle of the micro-lens array. The replayed image is a volumetric optical model, which
exists in space at a location independent of the viewing position. This occurs because, unlike stereoscopic techniques, which present planar perspective views to the viewer's eyes, each point within the volume of a 3D integral image is generated by the intersection of ray pencils projected by the individual micro-lenses.

Due to the nature of the recording process of 3D integral imaging, many changes are made to the camera model used in standard computer-generation software. To generate a unidirectional 3D integral image using a lenticular sheet, each lens acts as a cylindrical camera. A strip of pixels is associated with each lens, forming a micro-image; each cylindrical lens records a micro-image of the scene from a different angle, as shown in Fig. 1. For micro-lens arrays, each lens acts as a square or hexagonal camera, depending on the structure of the lenses, as shown in Fig. 2. In the lateral cross-section of the lenticular or micro-lenses, a pinhole model is used. In the case of lenticular sheets, the pinhole forms a straight line parallel to the axis of the cylindrical lens in the vertical direction. For each pixel, a primary ray is spawned. The recording path of the primary ray draws a straight line going forward towards the image plane and backward away from it. Similar primary rays of neighbouring lenses are spawned in similar directions, parallel to each other.

Figure 2. Micro-lens array in integral ray tracing.

Figure 3. Camera model in 3D integral images for computer graphics.

The structure of the lenses and the camera model in 3D integral computer graphics affect the way primary rays are spawned, as well as the spatial coherence among them. The camera model used for each micro-lens is the pinhole approximation, where each micro-lens acts as a separate camera. The result is a set of multiple cameras, each of which records a micro-image of the virtual scene from a different angle (see Fig. 3 and Table I). Primary rays pass through the centre of the micro-lens and the image plane. The scene straddles the micro-lens array; therefore, there are two recording directions, in front of and behind the micro-lens array.
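The paper's renderer is not public, but the lenticular pinhole camera described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: parameter values are taken from Table I, while the function name, the coordinate convention (lens array in the z = 0 plane, pixel strip one focal length behind it), and the choice of placing the ray origin at the pinhole are all assumptions made for this sketch.

```python
import numpy as np

def spawn_primary_rays(num_lenses=64, lens_pixels=8, lens_pitch=2.116667,
                       focal_length=6.8):
    """Spawn one primary ray per pixel of a unidirectional lenticular camera.

    Each cylindrical lens is modelled as a separate pinhole camera: the
    pinhole is a vertical line through the lens centre, and every pixel of
    the strip behind the lens gets one ray through that pinhole.  Because
    the scene straddles the lens array, the same line can be traced both
    forward (+z) and backward (-z) from the origin.
    """
    rays = []  # (origin, unit direction) pairs; distances in mm
    pixel_pitch = lens_pitch / lens_pixels
    for lens in range(num_lenses):
        lens_centre_x = (lens + 0.5) * lens_pitch      # pinhole x position
        for px in range(lens_pixels):
            # pixel centre on the image strip behind this lens
            pixel_x = lens * lens_pitch + (px + 0.5) * pixel_pitch
            origin = np.array([lens_centre_x, 0.0, 0.0])
            # direction from the pixel through the pinhole (strip sits at
            # z = -focal_length, so the direction has z = +focal_length)
            direction = np.array([lens_centre_x - pixel_x, 0.0, focal_length])
            rays.append((origin, direction / np.linalg.norm(direction)))
    return rays

rays = spawn_primary_rays()  # 64 lenses x 8 pixels = 512 primary rays
```

Note that the lateral offset `lens_centre_x - pixel_x` depends only on the in-lens pixel index, so rays with the same pixel index in neighbouring lenses come out parallel, exactly the property the text relies on for spatial coherence.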
The specific characteristics of 3D integral imaging allow us to treat each cylindrical lens separately from the others, and to set the number of pixels behind each lens, the focal length, and the image width. All of these parameters, including the number of lenslets in the virtual cylindrical array, are selected on the basis of the characteristics of the display device.

TABLE I. 3D UNIDIRECTIONAL CAMERA PARAMETERS.

Parameter           Lenticular sheet
Resolution          512 x 512 [pixel]
Lens pitch          2.116667 [mm]
Lens pixels         8 [pixel]
Aperture distance   10.0 [mm]
Number of lenses    64 [lens]
Focal length        6.8 [mm]
Ray depth           2 [integer]

Highly correlated micro-images are therefore produced, which is a property of 3D integral imaging. The pixel intensity values of the micro-image for each lenslet are read, saved, and then mapped to pixel locations on the screen so that all the vertical slots are displayed at the same time, forming the 3D integral image. The location of each vertical elemental image on the computer screen is identical to the location of the corresponding lenslet in the virtual lens array.

III. 3D MISSING PIXELS (BLANK REGIONS)

Figure 4. 3D integral image display using an LCD panel with a lenticular sheet [1], [2].

Pixels with no points mapped onto them are called 3D missed pixels. They appear as holes or blank regions in the cylindrical lens, and their status is set to -1, meaning that no point is projected onto the pixel.
3D missed pixels are the result of: (1) occluded areas in the previous cylindrical lens that should appear in the new cylindrical lens but are missing, as illustrated in Fig. 4 and Fig. 5 (an example of this occurrence is shown in Fig. 4, which shows the upper-left corner of Fig. 5); and (2) the new cylindrical lens showing an area that was not covered by the previous cylindrical lens [3], [6]. To overcome this problem, the missed pixels are interpolated instead of being ray traced in order to generate their intersection points and produce their colours.

Figure 5. Occluded areas in the previous micro-image [4].

Figure 6. Missed information: (a) occluded areas, coloured black; (b) an area not covered in the previous micro-image, coloured black.

IV. 3D INTEGRAL IMAGE INTERPOLATION RENDERING

The main purpose of the 3D integral imaging interpolation algorithm is to avoid ray tracing the missed pixels and instead to produce the 3D missing information and its colours by generating two point arrays. In this process the micro-images are split into odd and even micro-images. The odd micro-images occupy the odd positions in the frame, i.e. the first, third, fifth, etc.; the even micro-images occupy the even positions, i.e. the second, fourth, sixth, etc. The odd micro-images are generated using the integral imaging lens-view algorithm, including fully ray tracing the missing pixels. The even micro-images are also generated using the integral imaging lens-view algorithm, except that their missing pixels are obtained by interpolating information from the neighbouring odd micro-images.

To generate one even micro-image, two point arrays are used to store the point images from the neighbouring odd micro-images. The colour components (R, G, B) of the missing pixels in an even micro-image are obtained by averaging the corresponding colour components of the point-image arrays of the two neighbouring odd micro-images. The newly generated point-image colours are recorded in the point-image array. This is illustrated in Fig. 6 and Fig. 7. The points generated through interpolation are stored in the point image; the associated colours are recorded onto the pixel image and their status flag is set to 0, as illustrated in Fig. 7 and Fig. 8. The operation then continues recursively, with the image points of the previous and following cylindrical lenses both used to generate the current cylindrical lens. In summary, the odd micro-images are generated by ray tracing the image points onto the pixel image, with their missed pixels also produced by ray tracing, whereas the missed pixels of the even micro-images are obtained using the 3D integral interpolation algorithm [3], [6], [7].

Figure 7. Interpolation process: (a) fully ray traced cylindrical lens; (b) cylindrical lens with the missed pixels 3D-interpolated; (c) cylindrical lens with the missing pixels fully ray traced.

Figure 8. Interpolated missed pixels.
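The odd/even interpolation step above can be sketched as follows. This is a simplified illustration, not the paper's code: it assumes the micro-images are held in a numpy array indexed by lens position, with a parallel status array in which -1 marks a missed pixel, as described in Section III; the function name and array layout are this sketch's own.

```python
import numpy as np

def interpolate_even_micro_images(micro_images, status):
    """Fill missed pixels of even-position micro-images by averaging
    the corresponding pixels of the two neighbouring odd micro-images.

    micro_images: float array (num_lenses, h, w, 3); odd positions
    (0-based indices 0, 2, 4, ...) are assumed fully ray traced.
    status: int array (num_lenses, h, w); -1 marks a missed pixel.
    Filled pixels get their status flag set to 0, as in the paper.
    """
    out = micro_images.astype(np.float64)
    st = status.copy()
    # 1-based even positions are 0-based indices 1, 3, 5, ...; their
    # neighbours at lens-1 and lens+1 are odd (fully ray traced) lenses.
    for lens in range(1, len(out) - 1, 2):
        missing = st[lens] == -1
        # per-channel (R, G, B) average of the two odd neighbours
        out[lens][missing] = 0.5 * (out[lens - 1][missing] +
                                    out[lens + 1][missing])
        st[lens][missing] = 0  # mark as filled
    return out, st
```

A lens at the far edge of the array has only one odd neighbour; the sketch simply skips it, whereas a full implementation would fall back to ray tracing there.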
V. EXPERIMENTS AND RESULTS

A reliable quality measure is a much-needed tool for determining the type and amount of image distortion. The method most commonly used for comparing the similarity of two images is to compute the PSNR [12], [13]. The first 3D integral image, f(i,j), is generated using the fully integral ray tracing algorithm, and the second 3D integral image, g(i,j), is obtained using the 3D integral interpolation algorithm; that is, f(i,j) is the original, fully ray traced image, g(i,j) is its interpolated approximation, and M x N is the dimension of the image. RMSE is the root mean square error, defined as:

RMSE = \sqrt{ \frac{1}{MN} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} [f(i,j) - g(i,j)]^2 }    (1)

The average RMSE over the red, green, and blue channels is:

AverageRMSE = \frac{RMSE_{Red} + RMSE_{Green} + RMSE_{Blue}}{3}    (2)

The PSNR, and the average PSNR over the N measured micro-images, are then:

PSNR = 10 \log_{10} \left( \frac{255^2}{AverageRMSE^2} \right)    (3)

PSNR_{AVR} = \frac{1}{N} \sum_{i=0}^{N-1} PSNR_{per\ lens,\ i}    (4)

The performance of the proposed system was measured in terms of the execution time saved by the interpolation and the quality of the interpolated image relative to the fully ray traced original, with PSNR used as the quality measure; its value decreases with increasing distortion [12]-[14].

TABLE II. COMPARISON OF TIMINGS OF A 3D INTEGRAL IMAGING FRAME FOR DIFFERENT RENDERING ALGORITHMS.

Scene        Full ray tracing (s)   Time saved by interpolation (s)   Saving (%)
Teapot        3.000                  1.750                             58
Room          2.000                  1.232                             61
Small-balls   2.000                  1.236                             61
Primitives    2.000                  1.201                             60
Tree         14.000                  9.791                             66
Gears         5.000                  2.875                             57

From Table II, it can be seen that a significant reduction in execution time is achieved by using the 3D integral interpolation algorithm: savings of 57%-66% in ray-tracing time are obtained across the different scenes when compared to fully ray tracing the missing pixels. An even greater saving is achieved when compared to the integral imaging lens-view algorithm, making the generation of a single still 3D integral image three times faster than its original execution speed before applying the 3D integral imaging interpolation algorithm. The adopted approach saved up to 9.791 seconds of computation on the Tree scene.
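Equations (1)-(3) can be transcribed directly, as in the sketch below. It assumes 8-bit RGB images stored as (M, N, 3) numpy arrays; the function name is this sketch's own.

```python
import numpy as np

def psnr(f, g):
    """PSNR between a fully ray traced image f and its interpolated
    version g, following Eqs. (1)-(3): per-channel RMSE over the M x N
    image, averaged over R, G, B, then PSNR = 10*log10(255^2 / avg^2)."""
    f = f.astype(np.float64)
    g = g.astype(np.float64)
    # Eq. (1) applied per colour channel: shape (3,)
    rmse_per_channel = np.sqrt(np.mean((f - g) ** 2, axis=(0, 1)))
    average_rmse = rmse_per_channel.mean()          # Eq. (2)
    return 10.0 * np.log10(255.0 ** 2 / average_rmse ** 2)  # Eq. (3)
```

Averaging over several cylindrical lenses, as in Eq. (4), is then just the mean of this value computed per micro-image.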
Table III shows the results obtained using the 3D integral image quality measurements for the Teapot scene; similar results were obtained for the other 3D integral images.

TABLE III. 3D INTEGRAL IMAGE QUALITY MEASUREMENT FOR THE TEAPOT SCENE.

Teapot scene          RMSE Red   RMSE Green   RMSE Blue   RMSE Average   PSNR (dB)
Cylindrical lens 30   14.3402    11.0637      15.3345     13.5795        36.8020
Cylindrical lens 37   10.7068     7.6300      11.3042      9.8803        38.1831

The average PSNR over the eight measured lenses is:

PSNR_{AVR} = 301.3942 / 8 = 37.674275 dB    (5)

Generally, a PSNR above 30 dB is considered acceptable in the assessment of digital image quality. Furthermore, no degradation in image quality was observed in the subjective assessment.

The scenes used are shown in Fig. 10 to Fig. 12. The resolution of the frame is set to 512 x 512 pixels (Fig. 11), and the resolution per cylindrical lens is set to 512 x 8 pixels (Fig. 9). Table II also shows that the savings in execution time are smaller for the Teapot, Room, Small-balls, and Primitives scenes, owing to their simplicity, while the reduction is considerable for complex scenes such as Tree and Gears.

Figure 9. Fully ray traced cylindrical lenses of the different tested scenes.

VI. COMPOSITION OF CYLINDRICAL LENSES

A photo-realistic 3D integral image frame consists of several cylindrical lenses. The final phase of the algorithm is to combine these micro-images into a still 3D integral image frame (Fig. 10 and Fig. 11). The 3D integral multiprocessor ray tracer software is adapted to generate a two-dimensional array as an image buffer in which the micro-images are saved. The image resolution is set to 8 x 512 pixels per cylindrical lens, and the resolution of the still 3D integral image frame is set to 512 x 512 pixels, with 64 lenses per frame.
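The composition phase amounts to tiling the 64 strips side by side in the frame buffer. A minimal sketch, assuming the micro-images are held in a numpy array of shape (64, 512, 8, 3) (the function name and layout are illustrative, not the paper's):

```python
import numpy as np

def compose_frame(micro_images):
    """Tile micro-image strips side by side into one still 3D integral
    image frame.  Each strip's position in the frame matches the position
    of its lenslet in the virtual lens array, e.g. 64 strips of 512 x 8
    pixels compose a 512 x 512 frame."""
    num_lenses, h, strip_w, channels = micro_images.shape
    frame = np.zeros((h, num_lenses * strip_w, channels),
                     dtype=micro_images.dtype)  # the 2-D image buffer
    for lens in range(num_lenses):
        frame[:, lens * strip_w:(lens + 1) * strip_w] = micro_images[lens]
    return frame

strips = np.zeros((64, 512, 8, 3), dtype=np.uint8)
frame = compose_frame(strips)  # 512 x 512 x 3 frame buffer
```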

Figure 10. Micro-images of the different tested scenes generated after the interpolation process.

Figure 11. Composed interpolated 3D integral image frame of the Teapot scene.

Figure 12. Tested scenes before conversion to 3D integral images.

Figure 13. Interpolated still 3D integral image frames of the different tested scenes after conversion and composition into 3D integral images.

VII. CONCLUSION

This paper presented strategies developed to overcome the 3D missing-information problem based on spatial coherence among cylindrical lenses, with a substantial improvement in execution time. A trade-off between execution time and image quality has to be considered; it was observed that the improvements in execution time were achieved without any visible degradation in 3D integral image quality. The composition of cylindrical lenses into a still 3D integral image frame was also achieved.

REFERENCES

[1] M. G. Eljdid, A. Aggoun, and O. H. Youssef, "Computer generated content for 3D TV," in Proc. 3DTV Conference, 2007.
[2] Contract No. IST-7-248420-STREP, Programme FP7-ICT-2009-4, 1st Newsletter, June 2010; Intermediate Dissemination Report, 20 April 2012. [Online]. Available: http://www.3dvivant.eu/
[3] M. G. Eljdid, A. Aggoun, and O. H. Youssef, "Enhanced techniques 3D integral images video computer generated," in Proc. International Conference on Computing Technology and Information Management, Dubai, UAE, 2014.
[4] O. H. Youssef and A. Aggoun, "Coherent grouping of pixels for faster shadow cache," in Proc. 3rd Holoscopic Computer Graphics, 3DTV Conference, Tampere, Finland, June 8-9, 2010.
[5] A. Aggoun, E. Tsekleves, D. Zarpalas, P. Daras, A. Dimou, L. Soares, and P. Nunes, "Immersive 3D holoscopic system," IEEE Multimedia, Special Issue on 3D Imaging Techniques and Multimedia Applications, vol. 20, no. 1, pp. 28-37, Jan.-Mar. 2013.
[6] O. H. Youssef, "Acceleration techniques for photo-realistic computer generated integral images," Ph.D. thesis, De Montfort University, 2004.
[7] M. G. Eljdid, "3D content computer generation for volumetric displays," Ph.D. thesis, Brunel University West London, 2007.
[8] S. W. Min et al., "Three-dimensional display system based on computer-generated integral photography," in Stereoscopic Displays and Virtual Reality Systems VIII, A. J. Woods, M. T. Bolas, J. O. Merritt, and S. A. Benton, Eds., vol. 4297, SPIE, 2001, pp. 187-195.
[9] T. Naemura et al., "3-D computer graphics based on the integral photography," Optics Express, vol. 8, pp. 255-262, 2001.
[10] J. Ren, A. Aggoun, and M. McCormick, "Computer generation of integral 3D images with maximum effective viewing angle," Journal of Electronic Imaging, vol. 14, 2005.
[11] O. Youssef, A. Aggoun, W. Wolf, and M. McCormick, "Pixels grouping and shadow cache for faster integral 3D ray-tracing," in Proc. Electronic Imaging, SPIE, vol. 4660, 2002.
[12] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. New Jersey: Prentice Hall, 2002.
[13] A. M. Ford et al., "Effect of image compression on image quality," Journal of Photographic Science, vol. 43, no. 2, pp. 52-57, 1995.
[14] S. M. Seitz and C. R. Dyer, "View morphing," in Proc. IEEE Workshop on Representation of Visual Scenes, 1995.

Mahmoud G. Eljdid is a lecturer in information and communication technologies at Tripoli University, Libya. His research interests include computer generation and live capture of 3D integral images using a multiprocessor ray tracing system, 3D integral image content, 3D medical visualization, 3D video coding and animation, computer vision systems, real-time digital image/video processing, 3D software plug-in tools, acceleration techniques for real-time computer generation of 3D integral images, 3D integral image games, and the analysis and design of engineering information systems. Eljdid has a PhD in electronic and computer engineering from Brunel University, West London, UK.

Amar Aggoun is a professor in information and communication technologies at the University of Bedfordshire, UK. His research interests include light-field imaging systems, computer generation and live capture of 3D integral images, depth measurement and volumetric data reconstruction, 3D medical visualization, 3D video coding, computer vision systems, and real-time digital image/video processing. Aggoun has a PhD in electronic engineering from the University of Nottingham, UK.

Osama H. Youssef is a lecturer in information and communication technologies at October University for Modern Sciences and Arts, Egypt. His research interests include computer generation and live capture of 3D integral images, depth measurement and volumetric data reconstruction, 3D video coding, computer vision systems, real-time digital image/video processing, 3D integral image content, 3D software plug-in tools, and acceleration techniques for real-time computer generation of 3D integral images. Youssef has a PhD in computing sciences and engineering from De Montfort University, UK.