Invited Paper

Crosstalk in multiview 3-D images*

Jung-Young Son,* Beom-Ryeol Lee,1 Min-Chul Park,2 and Thibault Leportier2

* Dept. of Biomedical Engineering, Konyang University, Nonsan, Chungnam, Korea
1 Next Generation Visual Computing Research Section, ETRI, Daejeon, Korea
2 Sensor Systems Research Center, Korea Institute of Science and Technology, Seoul, Korea
* jyson@konyang.ac.kr

Abstract

Crosstalk is not an effective parameter for defining the quality of contact-type multiview 3-D images. This is because the viewing zone of a contact-type multiview 3-D display provides, everywhere except along the viewing zone cross section, images that are composed of image pieces taken from a predefined set of consecutive view images. Even along the cross section, individual view images cannot be guaranteed to be viewed separately, because the viewing region of each view image, being diamond shaped, touches each of its neighboring viewing regions at only a single point. Furthermore, each viewing region can become smaller than the viewer's pupil as the pixel size decreases and/or the number of view images increases, as in super-multiview imaging. In that case crosstalk has no meaning.

Keywords: Crosstalk, contact-type multiview 3-D image, viewing region, image pieces, patched image

1. Introduction

In 3-D imaging, crosstalk is defined as the interference between neighboring view images [1]. The term has been used to quantify the quality of 3-D images, but it is not appropriate for contact-type multiview 3-D displays [2], because the viewing zone of this type of display is divided into a number of diamond-shaped viewing regions, which is much larger than the number of view images displayed on the panel, and each region provides an image different from those in the other regions. Crosstalk is based on the assumption that different view images are viewed separately by viewers at the proper positions in the viewing zone. However, many factors, including imperfections in system and component parameters and characteristics, misalignment, viewers' postures, and so on, make complete separation impossible.

Crosstalk in stereoscopic images is caused by a small intensity portion of one eye's image being added to the other eye's image. This added portion enters the other eye simultaneously, blurring that eye's image and reducing the depth sense. In stereoscopic imaging systems, the two images of a stereoscopic pair cannot be completely separated from each other, because the optics used to deliver each view image to its corresponding eye are imperfect and viewers' postures do not allow the optics to work at their full capacity. Hence a small portion of one eye's image is always present in the other eye's image; that is, crosstalk always exists in a stereoscopic image. The issue is how to reduce the crosstalk to a level at which viewers are not aware of it, by making the intensity of the intended eye image much higher than that of the other eye's image.
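The degree of stereoscopic crosstalk is commonly expressed as the ratio of the unintended (leakage) luminance to the intended (signal) luminance reaching an eye. As a minimal illustration, not part of the original paper, the sketch below computes this ratio from luminance values; the variable names and the example numbers are assumptions.

```python
def crosstalk_ratio(leakage_luminance, signal_luminance, black_luminance=0.0):
    """Stereoscopic crosstalk: the fraction of the unintended image's luminance
    reaching an eye, relative to the intended image's luminance.
    All inputs are luminance measurements (e.g. cd/m^2) at the eye position."""
    return (leakage_luminance - black_luminance) / (signal_luminance - black_luminance)

# Hypothetical measurements at the left-eye position (assumed values):
signal = 120.0   # luminance of the left-eye image seen by the left eye
leak = 3.5       # luminance of the right-eye image leaking into the left eye
black = 0.5      # display black level

c = crosstalk_ratio(leak, signal, black)
print(f"crosstalk = {c:.1%}")   # about 2.5% for these assumed numbers
```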

In contact-type multiview 3-D displays, crosstalk between different view images can also be present, because the different view images displayed on the panel appear only at the viewing regions along the viewing zone cross section (VZCS) [3]. Since the number of different view images equals the number of these regions, each view image appears at its corresponding viewing region. In viewing regions other than these, the image is a mixture of the images from the surrounding viewing regions [4]. Since each diamond-shaped viewing region is aligned along the VZCS and shares its side tips with its neighbors, the intensity of one viewing region can smear into its neighbors, its neighbors' neighbors, and so on. Hence the crosstalk is not only between the two closest neighboring viewing regions but among all viewing regions. However, since the viewing regions touch only at their side tips, it is difficult to claim that the crosstalk is caused only by interference between neighboring view images. Hence crosstalk must be redefined for contact-type multiview 3-D displays. For this redefinition, the viewing zone structure of these displays must be understood, because it is unique and differs from that of projection-based 3-D displays.

The viewing zone is a light field formed by the light from each pixel in the display panel. This light propagates along a specific direction defined by the elemental lens in front of the panel and expands continuously. The expansion angle is determined by the focal length of the elemental lenses and the pixel size. The plate on which the elemental lenses are inscribed is called the viewing zone forming optics (VZFO). In these displays, the propagation directions of the light from the pixels composing a view image are designed to converge, i.e., to cross each other, at a point on a plane parallel to the panel. Since the light beams are expanding, the converged light occupies a certain area. This area is determined geometrically by the relationship between the pixel cell and the VZFO parameters, and acts as the common viewing area of those pixels: it is the viewing region of the view image represented by the pixels. This parallel plane is the only plane where all view images appear separately; after passing it, the light from each pixel propagates separately. The plane is called the viewing zone cross section (VZCS), and the distance from the panel to the VZCS is typically defined as the viewing distance. Crosstalk can be induced because the light from the pixels composing a view image is not completely confined to its area in the VZCS, so light from different view images can interfere and induce crosstalk. Away from the VZCS, the separately propagating light is mixed with light from the pixels of other view images, and more view images are mixed as the distance from the VZCS increases. As a result, the image perceived by a viewer located anywhere other than the VZCS is composed of image strips from different view images; it is a patched image of image pieces from different view images. Crosstalk would be meaningful if viewing were confined to the regions along the VZCS. However, if these patched images also give a depth sense, the viewing zone of contact-type multiview 3-D images extends beyond the VZCS, and crosstalk loses its meaning for quantifying the quality of 3-D images. In fact, crosstalk has no meaning at all when the super-multiview condition is met, because at least two different view images are then projected into the pupil of each of the viewer's eyes.
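The geometric relations described above can be made concrete with a small numerical sketch. Assuming paraxial (thin-lens) geometry with the panel approximately at the focal plane of the elemental lenses, the light from a pixel of width p expands with a full angle of roughly 2·arctan(p/2f), and the width of a viewing region at the viewing distance D is about D·p/f. The parameter values below are assumptions, not figures from the paper; the check at the end tests whether a viewing region is narrower than a typical pupil, i.e. whether the super-multiview condition is approached.

```python
import math

# Assumed example parameters (not from the paper)
pixel_pitch = 0.05e-3      # width of one pixel, p = 0.05 mm
focal_length = 2.0e-3      # elemental lens focal length, f = 2 mm (panel ~ focal plane)
viewing_distance = 0.6     # distance from panel to VZCS, D = 0.6 m
pupil_diameter = 4.0e-3    # typical pupil diameter, ~4 mm

# Full expansion angle of the light from one pixel behind its elemental lens
expansion_angle = 2.0 * math.atan(pixel_pitch / (2.0 * focal_length))

# Approximate width of one viewing region at the viewing distance
region_width = viewing_distance * pixel_pitch / focal_length

print(f"expansion angle : {math.degrees(expansion_angle):.2f} deg")
print(f"viewing region  : {region_width * 1e3:.1f} mm wide at the VZCS")
print("super-multiview condition (region narrower than pupil):",
      region_width < pupil_diameter)
```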
In this paper, the viewing zone structure of contact-type multiview 3-D displays is analyzed to show that crosstalk is inappropriate for quantifying the quality of 3-D images. For this purpose, the compositions of the images projected to viewers' eyes within the viewing zone are analyzed.

2. Viewing zones in a contact-type 3-D imaging system

Since the viewing zone is a light field formed by the light from the pixels in the display panel, it is most easily specified in the optical geometry formed by the display panel and the VZFO [5]. The optical geometry of contact-type multiview 3-D imaging systems has two different configurations: one is parallel and the other radial [6]. The configurations are distinguished by the difference between the dimensions of a pixel cell (elemental image) in the panel and of an elemental lens in the VZFO: when there is no difference the configuration is parallel, and when the elemental lens is smaller than the pixel cell it is radial. The dimension difference between the elemental lens and the pixel cell is usually less than 1/100 of the dimension itself, but the resulting viewing zone shapes of the radial and parallel configurations differ greatly: unlike the radial configuration, the parallel configuration barely defines a VZCS. Figs. 1 and 2 show the viewing zone forming geometries of the radial and parallel configurations, respectively.

Fig. 1. Viewing zone forming geometry of a radial-type multiview 3-D display. (Figure: display panel with pixel cells PC, the VZFO, the line/point images at the lens centers, the expanding pixel images, and the viewing zone of 36 diamond-shaped regions around the viewing zone cross section.)
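For the radial configuration, the elemental-lens pitch must be slightly smaller than the pixel-cell pitch so that the pixel-cell images converge at the intended viewing distance. A relation commonly used for this similar-triangle geometry, not given explicitly in the paper and therefore an assumption here, is P_lens = P_cell · D / (D + g), where D is the distance from the lens plate to the intended VZCS and g is the gap between the panel and the lens plate. The short sketch below evaluates it for assumed values and shows that the resulting pitch difference is indeed below 1/100 of the pitch.

```python
# Assumed geometry (not from the paper): similar triangles between a convergence
# point at the viewing distance, the elemental lens centers, and the pixel cells.
pixel_cell_pitch = 0.4      # mm, width of one pixel cell (elemental image)
gap = 2.0                   # mm, distance from the panel to the lens plate (VZFO)
viewing_distance = 600.0    # mm, distance from the lens plate to the intended VZCS

lens_pitch = pixel_cell_pitch * viewing_distance / (viewing_distance + gap)
relative_difference = (pixel_cell_pitch - lens_pitch) / pixel_cell_pitch

print(f"elemental lens pitch : {lens_pitch:.5f} mm")
print(f"pitch difference     : {relative_difference:.4%} of the pixel-cell pitch")
```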

Fig. 2. Viewing zone forming geometry of a parallel-type multiview 3-D display. (Figure: display panel, line/point image array, expanding pixel-cell images, the field of view, and a magnified view of viewing region 7/2 with its 26 sub-regions.)

These figures are drawn with the assumption that the image of each pixel cell (PC in Fig. 1) is crossed at the center of the elemental lens assigned to it and expands with the same angle as the crossing angle to form the viewing zone. The line and point images in the figures represent the centers of the corresponding elemental lenses. The numbers of pixel cells and of pixels in each pixel cell are 10 and 6 for Fig. 1, and 16 and 8 for Fig. 2, respectively. The number of pixels in each pixel cell equals the number of different view images, and the number of pixel cells equals the number of pixels composing each view image perceived in the viewing zone [4]. Hence there are 6 different view images with 10 pixels per view for Fig. 1, and 8 different view images with 16 pixels per view for Fig. 2. The 6 diamond-shaped areas along the VZCS are the viewing regions of the view images specified by the number in each region. Since the viewing zone is formed in the space where the ever-expanding images of the left-most and right-most pixel cells cross, and these images expand equally, the viewing zone has the shape of a diamond for the radial configuration, where the two images cross completely, but of a circular cone for the parallel configuration, where they cross only partly. The VZCS is defined as the cross section where the two images match completely; hence it is a plane parallel to the panel. In the parallel configuration, since the expanding images of the pixel cells propagate in parallel, the VZCS is theoretically at an infinite distance from the panel, i.e., it does not exist.

As shown in Figs. 1 and 2, not only the viewing regions along the VZCS but also the other parts of the viewing zone are divided into many diamond-shaped regions. Each of these regions is formed by the crossing of the pixel images from the left-most and right-most pixel cells, and the boundary lines of the regions correspond to the boundaries between the pixels in each pixel cell. Several observations follow. 1) The number of divided regions in the viewing zone is m², where m is the number of pixels in a pixel cell; in the parallel configuration the number of viewing regions is m(m+1)/2, of which the 2m−1 regions along the VZCS appear only partly. Hence there are 36 viewing regions for Fig. 1 and 36 for Fig. 2 (see the sketch below). 2) These divided regions can be identified by the pixels forming them. For example, when the pixels in each pixel cell are numbered 1 to m from right to left, the regions in Fig. 1 are identified as 1/2, 1/5, 1/6, 4/2, 2/5, 5/2, and so on, where the first and second numbers in each number set indicate the numbered pixels in the left-most and right-most pixel cells, respectively. 3) The image viewed at each of these regions has its first and last pixels given by the first and second numbers, respectively, of the number set identifying the region; this is obvious, because the regions are defined by those two pixel images. Furthermore, all these regions are divided into sub-regions, because they are also crossed, partly or wholly, by the pixel images from the pixel cells other than the left-most and right-most ones. For the viewing regions along the VZCS, each numbered pixel from one pixel cell is completely crossed with the same-numbered pixels from the other pixel cells. Hence the image composition at each sub-region can be found by identifying the pixels in each pixel cell involved in forming the sub-region. Fig. 2 shows a magnified view of region 7/2, and Fig. 3 shows a magnified view of the viewing zone of the radial configuration. These magnified views clearly indicate that each viewing region is divided into many sub-regions.
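As a quick numerical check of the region counts in observation 1) above, the sketch below evaluates m² for the radial configuration and m(m+1)/2 for the parallel configuration, with 2m−1 regions of the latter appearing only partly along the VZCS. The formulas are those stated above; the function names are ours.

```python
def regions_radial(m):
    """Number of diamond-shaped regions in the viewing zone of the radial
    configuration, where m is the number of pixels per pixel cell."""
    return m * m

def regions_parallel(m):
    """Number of regions in the parallel configuration; 2m - 1 of them,
    lying along the VZCS, appear only partly."""
    return m * (m + 1) // 2, 2 * m - 1

print(regions_radial(6))     # Fig. 1: 6 views  -> 36 regions
print(regions_parallel(8))   # Fig. 2: 8 views  -> (36, 15)
```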

Fig. 3. Magnified view of the viewing zone of the radial type in Fig. 1. (Figure: the individual-view regions 1/1 to 6/6 along the VZCS, surrounding mixed regions such as 5/2, 4/2, 2/5, 1/5 and 1/6, the lines R−1, R0, R1, ..., R5, and the normal line of the display panel.)

3. Image compositions

Fig. 4 shows the individual sub-region structures of the viewing regions 1/5, 1/6, 4/2, 2/5 and 5/2 specified by the number sets in Figs. 1 and 3. Region 7/2 in the parallel configuration contains 26 sub-regions, regions 1/5 and 1/6 contain 12 each, region 4/2 contains 17, and regions 2/5 and 5/2 contain 13 each. Fig. 4 also shows that regions 1/5 and 1/6 have the same dividing structure, and that regions 2/5 and 5/2 have the same structure.

Fig. 4. Sub-regions in several viewing regions (4/2, 1/5, 1/6, and 5/2 and 2/5).

Tables 1, 2 and 3 show the compositions of the images viewed at these sub-regions. Each composition consists of 10 pixels, i.e., one pixel from each of the 10 pixel cells in the panel. In Table 1, the compositions of region 4/2 consist of pixels from the 3 different view images 2, 3 and 4: each pixel cell consists of the same-numbered pixels from the different view images, arranged within the cell in the order of the cameras in the 180°-rotated multiview camera array that captured the view images. Hence 2, 3 and 4 in the compositions represent pixels from view images 2, 3 and 4, respectively. In these compositions, consecutively appearing identical numbers represent an image strip from the numbered view image. This means that the images in these sub-regions are composed of 3 image pieces: the 1st piece from view image 4, the 2nd from view image 3, and the 3rd from view image 2. For example, the image composition of sub-region 7 indicates that its 1st, 2nd and 3rd pixels from the left come from the same-order pixels in view image 4, its 4th to 7th pixels from the same-order pixels in view image 3, and its 8th to 10th pixels from the same-order pixels in view image 2. The relative positions of the image pieces in the compositions are the same as their positions in their original view images. In region 4/2, the total numbers of the numbers 2, 3 and 4 in the 17 compositions are 42, 86 and 42, respectively. These numbers imply that region 4/2 is dominated by view image 3, i.e., the view image between 2 and 4. It is also seen in Fig. 3 that regions 3/1, 5/3 and 6/4 consist of 17 viewing sub-regions each. The image compositions in these regions consist of the numbers 1, 2 and 3 for region 3/1, 3, 4 and 5 for region 5/3, and 4, 5 and 6 for region 6/4. Hence their image compositions are the same as those of region 4/2 if the numbers 4, 3 and 2 are replaced by 3 (5, 6), 2 (4, 5) and 1 (3, 4) for region 3/1 (5/3, 6/4), and the totals of the numbers in these regions are the same as in region 4/2. Table 2 gives the image compositions in viewing regions 5/2 and 2/5; the compositions for 2/5 are in parentheses. These compositions are composed of 4 image pieces, one each from view images 2, 3, 4 and 5, and the relative positions of the image pieces in each composition are the same as their relative positions in view images 2 to 5. Table 2 shows that the number orders of the compositions in regions 5/2 and 2/5 are completely reversed at their corresponding sub-regions.

In sub-regions 1, 7 and 13, the number order in region 5/2 is exactly the reverse of that in region 2/5, as the corresponding rows of Table 2 show. For the sub-region pairs 2 and 5, 3 and 6, 4 and 10, 8 and 11, and 9 and 12, the number orders of the two regions are reversed with respect to each other. This shows that if the image compositions of a region i/j (i, j = 1 to m) are known, those of region j/i are found by reversing the number orders of the compositions. The total numbers of 5, 4, 3 and 2 in the compositions are 26, 39, 39 and 26, respectively; the pixels are more evenly distributed over the view images than in region 4/2.

The image compositions in Table 3 are for regions 1/5 and 1/6; the compositions in parentheses are for region 1/6. For region 1/5, the compositions consist of 5 different numbers, 1 to 5, i.e., 5 different image pieces from view images 1 to 5. For region 1/6, the compositions consist of 6 different numbers, 1 to 6: each of the 6 view images contributes an image piece to the composition. For region 1/5, the total numbers of the numbers 1, 2, 3, 4 and 5 are 18, 28, 28, 28 and 18, respectively. For region 1/6, the totals of the numbers 1 to 6 are 19, 20, 21, 21, 20 and 19, respectively. These totals indicate that as a viewing region involves more view images, the dominance of any single image in the compositions diminishes. In fact, for region 1/5, view images 2, 3 and 4 contribute the most, but equally; for region 1/6, the contributions of the 6 view images are almost the same. Hence Tables 1, 2 and 3 allow the image compositions to be estimated for all viewing sub-regions, except those in the viewing regions between the viewing regions along the VZCS.

The viewing regions between the viewing regions along the VZCS are divided into 9 sub-regions, as shown in Fig. 3; this number is one less than the number of pixel cells along the horizontal direction of the multiview display. The compositions of the sub-regions in region 1/2, from the left to the right sub-region, consist only of pixels from view images 1 and 2, and the number of pixels from view image 2 increases by one at a time from right to left. For region 2/1, the compositions have the reversed number order of those in region 1/2. The total numbers of 1 and 2 in these compositions are 45 and 45; they are equal. So the viewing regions between the viewing regions along the VZCS, such as 1/2 (2/1), 2/3 (3/2), ..., 5/6 (6/5), can be considered to show images composed of the two neighboring view images 1 and 2 (2 and 1), 2 and 3 (3 and 2), ..., 5 and 6 (6 and 5) in a 0.5 : 0.5 ratio, i.e., the first half of the view image specified by the first number in each number set is combined with the second half of the view image specified by the second number. These composed images will not be very different from each individual view image, except for some image distortion along the line where they are combined. The distortion can be minimized by minimizing the disparity between the different view images.
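The 0.5 : 0.5 combination described above is easy to emulate. The sketch below, our illustration rather than code from the paper, splices the left half of one view image with the right half of its neighbor using NumPy; the array shapes and the random data are assumptions.

```python
import numpy as np

def patch_two_views(view_a, view_b):
    """Return the patched image seen in a region such as 1/2: the left half of
    view_a joined with the right half of view_b (0.5 : 0.5 ratio)."""
    h, w = view_a.shape[:2]
    patched = view_a.copy()
    patched[:, w // 2:] = view_b[:, w // 2:]
    return patched

# Two hypothetical neighboring view images (grayscale, 480 x 640)
view1 = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
view2 = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

patched = patch_two_views(view1, view2)
print(patched.shape)  # (480, 640): left half from view 1, right half from view 2
```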

Table 1. Image compositions in sub-regions in viewing region 4/2 (columns No., PC1 to PC10; 17 sub-regions).

No.  PC1   PC2   PC3   PC4   PC5   PC6   PC7   PC8   PC9   PC10
1    5(2)  4(2)  4(2)  4(3)  3(3)  3(3)  3(4)  2(4)  2(4)  2(5)
2    5(2)  4(2)  4(2)  4(3)  3(3)  3(3)  3(4)  3(4)  2(5)  2(5)
3    5(2)  4(2)  4(2)  4(3)  4(3)  3(4)  3(4)  3(4)  2(5)  2(5)
4    5(2)  4(2)  4(2)  4(3)  4(3)  3(4)  3(4)  3(5)  3(5)  2(5)
5    5(2)  5(2)  4(3)  4(3)  3(3)  3(3)  3(4)  2(4)  2(4)  2(5)
6    5(2)  5(2)  4(3)  4(3)  4(3)  3(4)  3(4)  2(4)  2(4)  2(5)
7    5(2)  5(2)  4(3)  4(3)  4(3)  3(4)  3(4)  3(4)  2(5)  2(5)
8    5(2)  5(2)  4(3)  4(3)  4(3)  3(4)  3(4)  3(5)  3(5)  2(5)
9    5(2)  5(2)  4(3)  4(3)  4(4)  4(4)  3(4)  3(5)  3(5)  2(5)
10   5(2)  5(3)  5(3)  4(3)  4(3)  3(4)  3(4)  2(4)  2(4)  2(5)
11   5(2)  5(3)  5(3)  4(3)  4(3)  3(4)  3(4)  3(4)  2(5)  2(5)
12   5(2)  5(3)  5(3)  4(3)  4(4)  4(4)  3(4)  3(4)  2(5)  2(5)
13   5(2)  5(3)  5(3)  4(3)  4(4)  4(4)  3(4)  3(5)  3(5)  2(5)

Table 2. Image compositions in sub-regions in viewing region 5/2 (2/5).

No.  PC1   PC2   PC3   PC4   PC5   PC6   PC7   PC8   PC9   PC10
1    1(1)  1(1)  1(2)  2(2)  2(3)  3(3)  3(4)  4(4)  4(5)  5(6)
2    1(1)  1(1)  2(2)  2(2)  2(3)  3(3)  3(4)  4(5)  4(5)  5(6)
3    1(1)  1(1)  2(2)  2(2)  3(3)  3(4)  3(4)  4(5)  4(5)  5(6)
4    1(1)  1(1)  2(2)  2(3)  3(3)  3(4)  4(4)  4(5)  4(5)  5(6)
5    1(1)  2(1)  2(2)  2(2)  3(3)  3(4)  3(4)  4(5)  4(6)  5(6)
6    1(1)  2(1)  2(2)  2(3)  3(3)  3(4)  4(4)  4(5)  4(6)  5(6)
7    1(1)  2(1)  2(2)  3(3)  3(3)  3(4)  4(5)  4(5)  4(6)  5(6)
8    1(1)  2(2)  2(2)  2(3)  3(3)  3(4)  4(4)  4(5)  5(6)  5(6)
9    1(1)  2(2)  2(2)  3(3)  3(3)  3(4)  4(5)  4(5)  5(6)  5(6)
10   1(1)  2(2)  2(2)  3(3)  3(4)  4(4)  4(5)  4(5)  5(6)  5(6)
11   1(1)  2(2)  2(3)  2(3)  3(4)  3(4)  4(5)  4(5)  5(6)  5(6)
12   1(1)  1(2)  2(2)  2(3)  3(3)  3(4)  4(4)  4(5)  5(5)  5(6)

Table 3. Image compositions in sub-regions in viewing region 1/5 (1/6).

The analyses so far show that the viewing zone can be divided into viewing regions which show either individual view images or patched images of two to m neighboring view images, as shown in Fig. 3.
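To illustrate how the totals quoted above are obtained, the sketch below counts, for each view image, the number of pixels it contributes to the 13 sub-region compositions of region 5/2. The composition strings are transcribed from Table 2 (left-hand numbers); the code itself is our illustration. The 26 : 39 : 39 : 26 distribution for views 5, 4, 3 and 2 follows directly.

```python
from collections import Counter

# Sub-region compositions of viewing region 5/2 (Table 2, numbers outside parentheses).
# Each string lists, for pixel cells PC1..PC10, the view image the pixel comes from.
region_5_2 = [
    "5444333222", "5444333322", "5444433322", "5444433332",
    "5544333222", "5544433222", "5544433322", "5544433332",
    "5544443332", "5554433222", "5554433322", "5554443322",
    "5554443332",
]

counts = Counter()
for composition in region_5_2:
    counts.update(composition)

total = sum(counts.values())
for view in sorted(counts):
    print(f"view image {view}: {counts[view]} pixels ({counts[view] / total:.2%})")
# view images 3 and 4 dominate: 26 : 39 : 39 : 26 for views 5, 4, 3, 2
```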

The regions are specified by R0 to R6 (R−6). Between the lines R−1 and R1, viewing regions for individual view images and regions patching two images exist side by side. These viewing regions can hardly be distinguished in real viewing situations, because they sit side by side and one of a viewer's eyes can be in a patched-image region while the other eye is in an individual-view-image region. This is why crosstalk is not effective for multiview 3-D images. Furthermore, when the viewing regions are reduced to less than the viewer's pupil size by introducing many different view images, several viewing regions can fall within the pupil, as in super-multiview imaging and as shown in Fig. 3. In this case each viewing region works as an image cell [7]. The differently composed images in the sub-regions within such a region can hardly be identified individually, because the sub-regions are too small, the first and last pixels of their images are the same for all sub-region images, and all these images have the same number combinations, though the number ratio differs between sub-regions. Hence the composite images in such a region are mixed together, and the representative image of the region is defined by the totals of the numbers composing all the images of its sub-regions, as mentioned regarding region 4/2: the number ratio of 2 : 3 : 4 in that region is 42 : 86 : 42, so number 3, i.e., view image 3, dominates the region.

4. A new parameter to quantify the image quality in contact-type multiview 3-D images

It is good to have a parameter that quantifies the image quality of 3-D images, and, as mentioned before, crosstalk is not appropriate for representing the quality of multiview 3-D images. There was an attempt to quantify the image quality by taking the inverse of the number of different view images involved in the patched image [8]. This inverse value reveals that the quality decreases as more view images become involved, but it does not reflect the dominant image, as in region 4/2 (2/4). To account for the dominant image, the total number of pixels contributed by each composing view image can be used. For example, in region 4/2 (2/4) the number ratio of 2 : 3 : 4 is 42 : 86 : 42, which indicates that the quality of the dominant image, view image 3, is 86/(42 + 86 + 42) ≈ 0.51. For region 5/2 (2/5), the pixel number ratio of view images 5 : 4 : 3 : 2 is 26 : 39 : 39 : 26, and the dominant images in this combination have the quality value 39/130 = 0.3. Along the VZCS, the viewing regions for individual view images have the quality value 1, and the regions for two patched images 0.5. In this representation, region 4/2 (2/4) has better quality than the two-patched-image viewing regions.
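A small sketch of the proposed parameter: given the per-view pixel counts of a viewing region, the quality is taken as the dominant view's share of all pixels, following the examples above. The function name, the single-entry count list for an individual-view region, and the side-by-side comparison with the inverse-view-count measure of [8] are our illustration.

```python
def quality(pixel_counts):
    """Quality of a viewing region: the fraction of its pixels contributed by
    the dominant view image (1.0 for an individual-view region along the VZCS)."""
    return max(pixel_counts) / sum(pixel_counts)

regions = {
    "4/2 (views 2,3,4)":      [42, 86, 42],
    "5/2 (views 2,3,4,5)":    [26, 39, 39, 26],
    "individual view (VZCS)": [100],
    "two-view patch (1/2)":   [45, 45],
}

for name, counts in regions.items():
    inverse_count = 1.0 / len(counts)   # earlier measure: 1 / (number of views) [8]
    print(f"{name:24s}  quality = {quality(counts):.2f}   1/N = {inverse_count:.2f}")
```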
Conclusions

Crosstalk is an inappropriate parameter for quantifying the quality of multiview 3-D images. The patched images of different view images that appear in the viewing zone of such images cannot have their quality represented by crosstalk. Since the viewing zone is divided into ever smaller regions as the number of different view images increases and/or the pixel size becomes smaller, the regions can become smaller than the pupil size; in that case there are effectively no boundaries between different view images. This situation is common under the super-multiview condition and is another reason why crosstalk is ineffective in quantifying the quality. It is necessary to find a new parameter for quantifying the quality.

Acknowledgements

This work was supported by the GigaKOREA project [GK14D0100, Development of Telecommunications Terminal with Digital Holographic Table-top Display] and [GK14C0100, Development of Interactive and Realistic Massive Giga-Content Technology].

References

1. Lenny Lipton, Foundations of the Stereoscopic Cinema: A Study in Depth, Van Nostrand Reinhold Company, New York.
2. Jung-Young Son and Bahram Javidi, "3-Dimensional Imaging Systems Based on Multiview Images," IEEE/OSA J. of Display Technology, 1(1).
3. Jung-Young Son, Byung-Gyu Chae, Wook-Ho Son, Jeho Nam and Beom-Ryeol Lee, "Comparisons of Viewing Zone Characteristics of MV and IP," IEEE/OSA J. of Display Technology, 8(8).
4. Beom-Ryeol Lee and Jung-Young Son, "Characteristics of composite images in MV and IP," Applied Optics, 51(21).
5. Chun-Hea Lee, Jung-Young Son, Sung-Kyu Kim and Min-Chul Park, "Visualization of Viewing Zones formed in a contact-type multiview 3-D imaging system," IEEE/OSA J. of Display Technology, 8(9).
6. Jung-Young Son, Wook-Ho Son, Sung-Kyu Kim, Kwang-Hoon Lee and Bahram Javidi, "3-D imaging for creating real-world-like environments," Proceedings of the IEEE (Invited), 101(1).
7. Wook-Ho Son, Jinwoong Kim, Jung-Young Son, Beom-Ryol Lee and Min-Chul Park, "The basic image cell in contact-type multiview 3-D imaging systems," Optical Engineering, 52(10).
8. V. V. Saveljev, Jung-Young Son and Kyung-Hoon Cha, "Estimation of Image Quality in Autostereoscopic Display," Proc. SPIE 5908 (2005 Annual Meeting).
