Computer Generated Integral Imaging (II) System Using Depth-Camera

Md. Shariful Islam 1, Md. Tariquzzaman 2

1,2 Department of Information and Communication Engineering, Islamic University, Kushtia-7003, Bangladesh

Abstract: This paper presents a computer generated integral imaging (CGII) system that improves the visual quality of three-dimensional (3D) reconstructed images based on the real depth of a 3D object. In a conventional CGII system, a two-dimensional (2D) image is used as the object image, so all pixels of the integrated image lie in the same depth plane. This problem is reduced by extracting each pixel's own depth from the surface of a real 3D object with a depth camera and using that data to generate the integral images. The proposed CGII method is evaluated by comparison with integral images generated by the conventional CGII method. Experimental results indicate that the integrated images have full parallax, since every pixel lies in its own depth plane, and therefore display a more natural 3D image.

Keywords: Integral imaging, computer generated integral imaging.

1. INTRODUCTION

The present era is the three-dimensional (3D) era, yet most displays still present two-dimensional (2D) images. 3D displays are needed for many important applications, such as remote control, virtual surgery, and 3D design, because 3D imaging provides extra information that is helpful to humans. Several 3D technologies have been developed, for example stereoscopy, autostereoscopy, holography, and integral imaging (II). Among these, II [1], [2] has become one of the most attractive autostereoscopic 3D display techniques. The II technique requires no extra viewing aid and provides continuous viewpoints within the viewing angle [3], [4], which are considered its most striking features. It also provides full parallax, a continuous viewing angle, full color, and real-time display [5]. In general, an integral imaging system comprises two parts: pickup and reconstruction.
In the pickup part, the rays coming from a 3D object through a lens-array are recorded as elemental images representing different perspectives of the object. In the reconstruction part, the recorded elemental images are displayed on a display panel and the 3D image is reconstructed and observed through a lens-array [6], [7]. In this configuration, the lens arrays used in the pickup and display parts must have the same specifications. Computer-generated elemental images were discussed by Jung et al. [8]: instead of the pickup process of conventional II, which records elemental images with an optical device, the elemental images of imaginary objects are computer-generated. In recent CGII work, one major problem has been the planar depth of the reconstructed image: the object images used to generate the elemental images usually contain only color values, and all pixels are assigned the same depth by the user. The x and y values of the elemental images are then calculated from this single given depth, which applies to every pixel of an object image, so every pixel of the reconstructed image lies in the same depth plane [9-13]. This is not enough to convey parallax, because there is no depth disparity between pixels belonging to the same object image. Even when two or more object images with different depth values are used, parallax is felt only between the plane images, as shown in Fig. 1. In reality, the depth of a 3D object is not confined to a single depth plane.

Fig.1. Conventional generation of elemental images from an object image

To overcome these drawbacks we propose a new method that generates elemental images from the per-pixel depth data of a real 3D object. We extract the 3D information of the object with a depth camera and calculate a depth map containing each pixel's own depth. Fig. 2 compares the object image pixel positions of the conventional and proposed methods. In Fig. 2(b) the depth z = 20mm is not calculated from actual depth data; it is assigned arbitrarily by the user. If elemental images are generated from this pixel set, all pixels of the integrated image will lie in a single depth plane. Fig. 2(a) shows the pixel positions in the proposed method, where each pixel's depth is calculated from the real depth measured by the depth camera.

(a) (b)
Fig.2. (a) Proposed method of obtaining object pixels (b) Conventional method of obtaining object pixels

Elemental images are generated by the proposed method and then integrated into 3D images using a lens array. Finally, a more natural 3D image can be observed in which every pixel is reconstructed on its own depth plane.

2. CGII BASED ON EACH PIXEL'S REAL DEPTH DATA

The main purpose of CGII is to compute the elemental image pixel coordinates. Elemental image pixels are formed by pixels of the object focused through the lens-array. In conventional CGII, the object image pixels all lie on the same depth plane, so all pixels of the integrated image are also located on the same depth plane. To overcome this problem, a new CGII method is proposed in our system. Table 1 shows the architecture of the proposed elemental image generation from a real 3D object.
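Before walking through the generation steps, the fixed system parameters that the input stage collects (Table 1) can be sketched as a small configuration structure. This is a minimal illustration, not the authors' code: the class and field names are our own invention, and the values are taken from Table 2 of the experiment section.

```python
from dataclasses import dataclass

@dataclass
class LensArray:
    pitch: float          # size of an elemental lens (mm)
    count_h: int          # number of elemental lenses, horizontal
    count_v: int          # number of elemental lenses, vertical
    focal_length: float   # focal length of an elemental lens (mm)

@dataclass
class Display:
    pixel_pitch: float    # pixel pitch of the LCD panel (mm)
    gap: float            # gap between lens array and display (mm)

# Experimental values from Table 2 of this paper.
lens = LensArray(pitch=5.0, count_h=30, count_v=30, focal_length=8.0)
panel = Display(pixel_pitch=0.1245, gap=10.0)
```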

Table 1. Elemental image generation in the proposed architecture

Input:
- Object image: obtain the colour and depth images of the 3D object; extract the region of interest; convert the pixels' real depth data; arrange the pixel coordinates of the object image.
- Information of lens-array: size of elemental lens; number of elemental lenses; focal length of elemental lens.
- Information of display: pixel pitch of LCD; gap between lens and display.

Calculation (mapping object pixels to elemental image pixels):
- Take the coordinates of each object pixel.
- Pass it through the corresponding lens-array centre.
- Locate it on the elemental image pixel plane.
- Arrange the elemental image pixels.

Display:
- Display the elemental image pixels on the elemental image plane.

The proposed method consists of three parts. In the input stage the parameters of the lens-array and the display are entered, along with the information of the 3D object, such as each pixel's color, depth, and index. In the calculation stage the object pixels are mapped to the elemental image plane. Finally, in the display stage the calculated elemental image pixel set is displayed on the LCD.

2.1 Input information for the proposed CGII computation: In the input stage, the parameters of the lens-array and the display are entered. They include the size, number, and focal length of the virtual elemental lenses, the pixel pitch of the display, and the gap between the display and the lens-array. Furthermore, the pixels of the 3D object must be arranged. Fig. 3 shows the color image of the object and its depth map: Fig. 3(a) is the object image and Fig. 3(b) is its depth map.

Fig.3. Color images of the object and their depth maps: (a) color image, (b) its depth map, (c) region of interest, (d) its depth map

To extract the region of interest in the depth map we separate the person from the background, as shown in Fig. 3(c) and Fig. 3(d). The depth of every pixel on the 3D object surface is calculated. To arrange the pixel coordinates of the object image along the x, y, z axes, the depth data must be mapped to each corresponding pixel. It is noticeable from Fig.
4 that the object pixels lie at depths from 842.6mm to 1123.9mm, since every pixel's depth is the real distance between the point on the object surface and the depth camera. However, in our integral imaging system the depth range is

from 39.09mm to 43.9mm. To locate the pixels of the object image within this limited depth range, the real depth data must be converted. From the lens law, the central depth cd is given by Eq. 1, and the converted depth z(i, j) of the (i, j)-th pixel by Eq. 2:

cd = f·g / (g − f)    ...(1)

z(i, j) = cd · (max(rd) + min(rd)) / (2 · rd(i, j))    ...(2)

Fig.4. Real depth data from the 3D object

Here z(i, j) is the converted distance from the (i, j)-th pixel to the lens array, and rd(i, j) is its measured real depth.

2.2 Calculation and generation of the elemental image from the proposed object image: In the calculation stage we create three buffers: the first stores every pixel's depth and color data, the second the centres of the elemental lenses, and the third the calculated elemental image pixel set. The coordinates of the elemental lenses are computed from the pitch size and the indices of the elemental lenses. Then the coordinates of the pixels in the elemental image plane are calculated from the coordinates of the object pixels and the centres of the lens array. Fig. 5 shows the geometry of mapping object pixels to the elemental image plane. A pixel at (x, y, z) passes through the centre of a lens and is then located on the elemental image plane; the coordinates (u, v) of the pixel in the elemental image plane are given by Eqs. 3-5.

Fig.5. Geometry for mapping the elemental image

P_I = f·g·P_D / ((g − f)·g)    ...(3)
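Equations 1-2 are badly garbled in the source; the sketch below follows our best reading of them, in which the central depth comes from the lens law and each measured depth rd(i, j) is rescaled about that plane. The function names are our own, and the exact form of Eq. 2 is an assumption.

```python
def central_depth(f: float, g: float) -> float:
    """Central depth plane distance from the lens law (our reading of Eq. 1)."""
    return f * g / (g - f)

def convert_depth(rd_ij: float, rd_min: float, rd_max: float, cd: float) -> float:
    """Rescale a measured real depth rd(i, j) into the narrow display-side
    range around the central depth plane (our reading of Eq. 2).
    A pixel at the mid real depth maps exactly onto cd."""
    return cd * (rd_max + rd_min) / (2.0 * rd_ij)

# With the Table 2 values f = 8 mm and g = 10 mm the central depth is 40 mm,
# consistent with the 39.09-43.9 mm display-side range quoted in the text.
cd = central_depth(8.0, 10.0)
```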

u(i, j) = P·m − (g / z(i, j)) · (i·P_I − P·m)    ...(4)

v(i, j) = P·n − (g / z(i, j)) · (j·P_I − P·n)    ...(5)

where i, j are the pixel indices of the object image along the x and y axes; m, n are the indices of the lens centre along the x and y axes; P is the size of an elemental lens; P_D the pixel size of the display; f the focal length of the lens; and g the gap between the lens-array and the elemental image plane. As a result, the set of elemental image points can be plotted. The above equations show that every pixel's position in the elemental image plane is calculated from its own depth z(i, j), and the calculation is repeated until the elemental image set for all pixels of the imaginary object has been determined. Fig. 6 shows the elemental image set generated by the proposed method.

Fig.6. Elemental images generated by the proposed method

3. EXPERIMENT

We present experimental results obtained with the proposed method. To establish its advantages, we compared the pixel parallax of the proposed CGII with that of conventional CGII. Fig. 7 shows the configuration of the proposed system: a depth camera, an RGB camera, a high-resolution display, and a lens-array. In our experiment we used a Kinect sensor as the depth camera and RGB camera. The specifications of the experimental setup are given in Table 2.

Fig.7. Configuration of the proposed system
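The per-pixel mapping of Eqs. 4-5 can be sketched as a single function. This follows our reading of the garbled equations, with function and variable names of our own choosing:

```python
def object_to_elemental(i: int, j: int, z_ij: float,
                        m: int, n: int,
                        P: float, P_I: float, g: float):
    """Project the (i, j)-th object pixel, at converted depth z(i, j),
    through the centre of the (m, n)-th elemental lens onto the
    elemental image plane (our reading of Eqs. 4-5)."""
    u = P * m - (g / z_ij) * (i * P_I - P * m)
    v = P * n - (g / z_ij) * (j * P_I - P * n)
    return u, v
```

Because z(i, j) appears in the magnification term g / z(i, j), pixels at different depths land at different elemental-image offsets, which is exactly the per-pixel disparity the proposed method aims for.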

Table 2. Specifications of the experimental setup

Lens array:
- Focal length: 8mm
- Number of elemental lenses: 30(H) × 30(V)
- Pitch of elemental lens: 5mm

Depth camera:
- Model: XBOX 360 KINECT sensor

Object:
- Transverse dimension: 320 pixels × 240 pixels

Display:
- Pixel pitch: 0.1245mm × 0.1245mm
- Number of pixels: 3840 × 2400
- Gap between lens array and display: 10mm

The 3D object was displayed by the proposed CGII and the conventional CGII respectively and then observed from five different viewing positions. As shown in Fig. 8, the horizontal distance Δx and vertical distance Δy between the marked pixels differ, because all pixels are generated from their real depths; since the pixels of the integrated image lie in a variety of depth planes, a parallax image is observed from the different viewing positions. This verifies that the proposed method successfully creates pixel disparity. In Fig. 9, by contrast, the image pixels shift all together while the values of Δx and Δy do not change, indicating that the integral image generated by the conventional method has no pixel disparity because all pixel depths were the same. Comparing Fig. 8 and Fig. 9 confirms that the integral image generated by the proposed method displays a more natural 3D image. The experimental results contain some distortions because the lenses are simple spherical lenses with significant aberration; these could be avoided by improving the quality of the lens array. Furthermore, we evaluate the lateral resolution of the integrated image in the proposed and conventional CGII. The integrated image has two types of resolution, lateral and longitudinal. The lateral resolution signifies the resolution of the integrated image at a given image plane, and the

Fig.8. Experimental result of the proposed method at five positions
Fig.9. Experimental result of the conventional method at five positions

longitudinal (depth) resolution is related to the number of image planes in the depth direction. If we denote by R_i the lateral resolution of the i-th pixel at depth plane z = z_i, it can be obtained by calculating the cutoff frequency of the modulation transfer function (MTF) of the lens. For incoherent illumination the MTF is expressed as follows:

MTF(f) = 1 − sinc(π·P·f·err(z_i))    ...(6)

where

err(z_i) = 1/z_i + 1/g − 1/f    ...(7)

The formula for the lateral resolution R_i of the integral image was derived by Jung et al. [8]:

R_i = g / (z_i·P_D)    ...(8)

Here z_i is the distance between the i-th pixel of the integrated image and the lens array. Eq. 8 shows the relation between a pixel's lateral resolution and its depth. We calculated the lateral resolution of all pixels and plot its distribution in Fig. 10.

Fig.10. Lateral resolution distribution of the integrated image pixels

As shown in Fig. 10, the pixels integrated by the proposed method take various resolution values, whereas in the conventional method the lateral resolution is the same for all pixels. Since pixels at different depths have different lateral resolutions, this confirms again that the integral image generated by the proposed method displays a more natural 3D image.

4. CONCLUSION

In this paper, we proposed a different CGII technique to improve the viewing quality of 3D reconstructed images. We extracted the actual depth data of all pixels of a real 3D object's surface to generate a more natural 3D image, and confirmed the advantage through experiments comparing with the conventional method. Owing to the pixel disparity, a parallax image can be observed from different viewing positions. Experimental results showed that the proposed method efficiently improves the 3D image viewing quality. If the calculation speed is improved, a real-time integral imaging display can be implemented, which might be applicable to real-time 3D display systems.

ACKNOWLEDGEMENTS

This research work was supported by the Information and Communication Technology Division, Government of the People's Republic of Bangladesh [56.00.0000.028.33.007.15.14-312].
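Under our reading of the garbled Eq. 8, the lateral resolution falls off with a pixel's depth because a display pixel of pitch P_D is magnified by z_i / g at the image plane. A hedged sketch (the function name is ours):

```python
def lateral_resolution(z_i: float, g: float, P_D: float) -> float:
    """Lateral resolution of a pixel reconstructed at depth z_i
    (our reading of Eq. 8): the display pixel pitch P_D magnified
    by z_i / g sets the smallest resolvable feature, so the
    resolution is g / (z_i * P_D)."""
    return g / (z_i * P_D)
```

Deeper pixels (larger z_i) thus reconstruct with coarser lateral resolution, which is why the proposed method yields a distribution of resolution values (Fig. 10) rather than the single value of the conventional method.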

REFERENCES
[1] G. Lippmann, "La photographie intégrale," Comptes-Rendus de l'Académie des Sciences 146, 446 (1908).
[2] A. Stern and B. Javidi, "Three-dimensional sensing, visualization, and processing using integral imaging," Proceedings of the IEEE, special issue on 3-D technologies for imaging and display, 94, 591-607 (2006).
[3] B. Lee, J.-H. Park, and S.-W. Min, "Three-dimensional display and information processing based on integral imaging," in Digital Holography and Three-Dimensional Display, T.-C. Poon, ed. (Springer, New York, USA, 2006).
[4] J.-H. Park, G. Baasantseren, N. Kim, G. Park, J.-M. Kang, and B. Lee, "View image generation in perspective and orthographic projection geometry based on integral imaging," Opt. Express 16(12), 8800-8813 (2008).
[5] F. Okano, H. Hoshino, J. Arai, and I. Yuyama, "Real-time pickup method for a three-dimensional image based on integral photography," Appl. Opt. 36(7), 1598-1603 (1997).
[6] D.-H. Shin and H. Yoo, "Image quality enhancement in 3D computational integral imaging by use of interpolation methods," Opt. Express 15(19), 12039-12049 (2007).
[7] J.-H. Park, "Three-dimensional display scheme based on integral imaging with three-dimensional information processing," Opt. Express (2004).
[8] S. Jung, S.-W. Min, J.-H. Park, and B. Lee, "Study of three-dimensional display system based on computer-generated integral photography," J. Opt. Soc. Korea 5(2), 43-48 (2001).
[9] G. Baasantseren, J.-H. Park, K.-C. Kwon, and N. Kim, "View angle enhanced integral imaging display using two elemental image masks," Opt. Express 17(16), 14405 (2009).
[10] J.-H. Park, J. Kim, Y. Kim, and B. Lee, "Resolution-enhanced three-dimension/two-dimension convertible display based on integral imaging."
[11] J.-H. Jung, K. Hong, G. Park, I. Chung, J.-H. Park, and B. Lee, "Reconstruction of three-dimensional occluded object using optical flow and triangular mesh reconstruction in integral imaging," Opt. Express 18(25), 26373-26387 (2010).
[12] J.-H. Park, K. Hong, and B. Lee, "Recent progress in three-dimensional information processing based on integral imaging," Appl. Opt. 48(34), H77-H94 (2009).
[13] K.-C. Kwon, C. Park, and M.-U. Erdenebat, "High speed image space parallel processing for computer-generated integral imaging system," Opt. Express 20(2), 732-740 (2012).

Author's Profile:

Md. Shariful Islam received the B.Sc. and M.Sc. degrees from the Department of Applied Physics, Electronics and Communication Engineering (erstwhile Electronics & Applied Physics), Islamic University, Kushtia-7003, Bangladesh, in 2001 and 2003 respectively. He joined the Department of Information and Communication Engineering, IU, Kushtia-7003, Bangladesh, as a lecturer in September 2004 and is currently working as an Associate Professor in the same department. His research interests are in the areas of integral imaging, three-dimensional (3D) display technology, and image processing.

Md. Tariquzzaman received the B.Sc. and M.Sc. degrees from the Department of Applied Physics, Electronics and Communication Engineering (erstwhile Electronics & Applied Physics), Islamic University, Kushtia-7003, Bangladesh, in 2001 and 2003 respectively. He received the Ph.D. degree in Engineering in 2010 from Chonnam National University, Republic of Korea. From 2010 to 2011 he worked as a postdoctoral fellow in the School of Electronics & Computer Engineering, Chonnam National University, and at Mokpo National University, Republic of Korea. He joined the Department of Information and Communication Engineering, IU, Kushtia-7003, Bangladesh, as a lecturer in September 2004 and is currently working as an Associate Professor in the same department. His research interests are in the areas of audio-visual speech processing, biometrics, computer vision, and machine learning.