Free Viewpoint Video Synthesis and Presentation of Sporting Events for Mixed Reality Entertainment
Naho Inamoto and Hideo Saito
Department of Information and Computer Science, Keio University
{nahotty,

ABSTRACT
This paper presents a new framework for arbitrary view synthesis and presentation of sporting events for mixed reality entertainment. In accordance with the viewpoint position of an observer, a virtual view image of the sporting scene is generated by view interpolation among multiple videos captured at a real stadium. The synthesized scene is then overlaid onto a desktop stadium model in the real world via an HMD, making it possible to watch the event right in front of the observer. Projective geometry between cameras is used both for virtual view generation of the dynamic scene and for geometric registration between the real world and the virtual view image. The proposed method does not require calibration of the multiple video cameras capturing the event or of the HMD camera, so it can be applied even to dynamic events in a large space and enables observation with an immersive impression. The proposed approach leads to a new type of mixed reality entertainment for sporting events.

1. INTRODUCTION
Sporting events are the most popular form of remote live entertainment in the world, attracting millions of viewers on television. Recently, computer-generated visualization has been used increasingly in sports broadcasting to enhance the viewer experience. One example is overlaying virtual objects onto live video, such as virtual offside lines in soccer or virtual lines indicating world records in swimming [8]. Another is virtual replay, which allows observation from any viewpoint; this includes reconstruction of a sports match using CG animation [8, 9] and arbitrary view generation from multiple camera images using computer vision techniques [22, 29, 17].
With the ongoing convergence of television and Internet broadcasting, interactive visualization is becoming more important. However, these visualization techniques, which typically use standard television or computer screens, cannot provide sufficient immersion and interactivity for entertainment. By contrast, Virtual Reality (VR) and Mixed Reality (MR) produce a stronger impression of being immersed in a virtual world, or in a mixed virtual and real world. Special devices such as spherical screens, 3D displays, and head mounted displays are often used for visualization, enabling viewers to engage with and become immersed in the action. If sporting events are presented with the sensation of being present at the live event, watching them can be more enjoyable and the viewer experience is enhanced.

In this paper, we extend an arbitrary view generation technique to the field of Augmented Reality (AR) for immersive visualization of real sporting matches. We have previously proposed a view-synthesis method targeting dynamic events in a large space, and have also developed a Viewpoint on Demand system that lets viewers select their favorite viewpoints during observation through a standard GUI [14]. This paper introduces a new AR application based on that system for mixed reality entertainment. The proposed system allows viewers to watch sporting events at any place in the real world, from a favorite viewpoint, via a head mounted display (HMD).

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ACE '04, June 3-5, 2004, Singapore. Copyright 2004 ACM /04/ $5.00.
For example, they can observe a soccer match on a table on which a soccer stadium model is placed. A virtual view image of the target scene is synthesized by view interpolation and overlaid on the stadium model in the real world. In the field of AR, the typical approach overlays virtual CG objects onto image sequences captured by a video camera [2]. In the proposed approach, instead of CG objects, real images of the sporting scene, captured by multiple cameras at the stadium, are overlaid onto the stadium model based on the concepts of IBR [15]. In conventional methods, virtual objects with known 3D shapes and positions are inserted into the 3D space of the real world. In contrast, the proposed method overlays virtual view images of the sporting scene onto the stadium model without any 3D information. For geometric registration, a novel approach using projective geometry between cameras is introduced. When calculating player positions on the HMD, using the centroid of each player instead of the foot position renders player motion more stably than the previously proposed method [15]. This paper focuses on an application targeting soccer match observation for remote entertainment; the proposed method, however, can also be applied to other sporting events, musical events, and so on.

2. RELATED WORK
Methods for synthesizing arbitrary view images from a number of real camera images have been studied since
Figure 1: Overview of the proposed method.

the 1990s in the field of computer vision [16, 22]. These techniques, called Image Based Rendering (IBR), can be categorized into two groups: the model based approach and the transfer based approach. The model based approach constructs a 3D shape model of an object to generate the desired view [7, 24, 28]. Since the quality of the virtual view image depends on the accuracy of the 3D model, a large number of video cameras surrounding the object, or range scanners, are used to construct an accurate model. Strong camera calibration [27], which relates 2D coordinates in images to 3D coordinates in object space, is also usually required. As the 3D positions of several points in the object space must be measured, this calibration becomes difficult in a large space; for this reason, the object area in this approach is generally limited to a few cubic meters. The transfer based approach, on the other hand, synthesizes arbitrary view images without an explicit 3D model [1, 5, 23]. Instead, image warping such as transfer of correspondences is employed to synthesize new view images. The dense correspondence between the original images, which is required for view synthesis, is often obtained manually or by optical flow, so almost all targets are static scenes or slightly varying ones such as facial expressions. We have therefore proposed a transfer based view-synthesis method targeting dynamic events in a large space, such as a soccer match captured at a stadium [12, 13]. As for arbitrary view presentation, related approaches have been proposed in [21, 18]. Prince et al. introduce a system for live capture of 3D content and simultaneous presentation in augmented reality [21]. The user can watch superimposed images of a remote collaborator in the real world, whose action is captured by 15 surrounding cameras.
The difference from our method is that 3D models of the subjects are reconstructed using shape-from-silhouette in order to render the appropriate view. The subject is captured in a limited area, a volume 3.3 m in diameter and 2.5 m in height, while our method targets a large-scale event at a stadium. Koyama et al. have proposed a method of augmented virtuality presentation for soccer matches [18]. Players are represented with simplified 3D models reconstructed from multiple videos, and observers can watch the motion of the players in a CG stadium from arbitrary viewpoint positions. While this method calculates the 3D positions of the players and overlays them on a CG stadium, our proposed method superimposes the players onto an empty stadium model in the real world via an HMD. An image-based registration technique enables augmented reality presentation of a soccer match without 3D information. One advantage of our method is that view synthesis and presentation are based on projective geometry between multiple cameras rather than on reconstructing 3D models with strong camera calibration. Projective geometry between cameras can be obtained from images alone, while strong calibration of multiple cameras, which is imperative in the above two methods [21, 18], is difficult to obtain. The proposed method is a new challenge in image-based AR presentation of large-scale dynamic events.

3. OVERVIEW
Figure 1 shows an overview of the proposed method. A video see-through HMD is used for augmented reality entertainment. An observer sees a desktop stadium model in the real world through the HMD, while images of the players and ball of the soccer scene are overlaid on the display. First, a soccer match is captured by multiple uncalibrated cameras at the stadium and stored as video images. The projective geometry used for view synthesis is estimated between neighboring cameras.
The proposed system employs fundamental matrices between the viewpoints of the cameras, and homographic matrices, between neighboring views, for the plane that forms the soccer ground. The features of the captured video images, such as the player positions and the correspondence map of players among the multiple views, are then obtained from the time-series images. The above process is executed in
Figure 2: Process flowchart for virtual view synthesis.

advance. The online process consists of three stages: (1) calculation of the viewpoint position, (2) virtual view synthesis of the soccer scene, and (3) overlay of the synthesized scene on the stadium model. At the first stage, the viewpoint position of the observer is determined from the position and pose of the HMD camera. At the second stage, the virtual viewpoint image of the soccer scene is synthesized by view interpolation between the neighboring cameras near the viewpoint position, which serve as reference cameras. At the final stage, the synthesized soccer scene is overlaid on the desktop stadium model through the HMD. The observer can thus virtually watch the soccer match from favorite viewpoints in the real world.

4. VIRTUAL VIEW SYNTHESIS
In this section, we explain the algorithm of arbitrary view synthesis for the dynamic regions of the soccer scene, which constitute the events. Figure 2 shows a flowchart of the view-synthesis algorithm. View interpolation between the neighboring cameras near the virtual viewpoint position generates virtual view images of the soccer players and ball for each frame. First, all dynamic regions are extracted by subtracting the background image from the whole image of the soccer scene. If a background image containing neither players nor ball cannot be captured, it can be constructed by taking, at each pixel, the mode value over the image sequence. After the silhouettes have been extracted by binarization, every silhouette region is segmented with a different label. The silhouettes of a player in neighboring cameras are put into correspondence using the homography [11] of the ground plane; this relies on the fact that the feet of the players usually touch the ground. If a player is occluded by other players, however, this algorithm may not work well. In that case, segmented silhouettes of the previous frame are used to divide the silhouette regions of the current frame and establish correspondence.
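As an aside, the mode-based background construction mentioned above can be sketched in a few lines. This is an illustrative NumPy sketch for a single-channel 8-bit sequence, not the authors' implementation; the function name is hypothetical.

```python
import numpy as np

def mode_background(frames):
    """Estimate a player-free background: at each pixel, take the most
    frequent value over the image sequence (the temporal mode)."""
    stack = np.stack(frames, axis=0)  # shape (T, H, W), dtype uint8
    # For 8-bit images, a per-pixel histogram over time gives the mode.
    counts = np.apply_along_axis(np.bincount, 0, stack, minlength=256)
    return counts.argmax(axis=0).astype(np.uint8)
```

A real system would apply this per color channel and follow it with background subtraction and binarization to obtain the silhouettes.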
The foot position of an occluded player in the current frame can also be calculated with the homography of the ground plane. The bounding box (the surrounding rectangle of each player) is then projected from the previous frame and used to segment the overlapping players. In this way, silhouettes are kept in correspondence even when players occlude each other. Next, each pair of silhouettes is processed to obtain pixel-wise correspondence within the silhouette. Epipolar lines are drawn between neighboring views using a fundamental matrix [11]. On each epipolar line, the edge points are corresponded first, and the pixels inside the silhouette are then corresponded by linear interpolation of the edge points. After a dense correspondence for the whole silhouette is obtained, the pixel positions and values are transferred from the source images of the two reference cameras to the destination image by image morphing [5], as described by the following equations:

p' = (1 - α){(p1 - c1)z1 + c1} + α{(p2 - c2)z2 + c2}   (1)
I(p') = (1 - α)I(p1) + αI(p2)   (2)

where p1, p2 are the coordinates of the matching points in reference cameras 1 and 2, and c1, c2 are the coordinates of the respective principal points. I(p1), I(p2) are the pixel values at p1, p2; p' is the interpolated coordinate and I(p') is the interpolated value. α defines the interpolating weight between the reference cameras, and z1, z2 are the zoom ratios of the virtual camera relative to reference cameras 1 and 2, respectively. All correspondences are used in the transfer to generate a warped image. Two transfers are required, one from reference camera 1 and the other from reference camera 2. The two warped images are then blended to complete the virtual view image. If the color of a pixel differs between the two warped images, the corresponding pixel in the virtual view is rendered with the average of the two colors; otherwise the rendered color is taken from either image. The above algorithm is applied to every pair of silhouettes.
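The transfer of Eqs. (1) and (2) can be illustrated with a small sketch. The helper functions below use hypothetical names; a full implementation would apply them to every corresponded pixel of a silhouette pair.

```python
import numpy as np

def interpolate_point(p1, p2, c1, c2, alpha, z1=1.0, z2=1.0):
    """Eq. (1): blend matched pixel coordinates p1, p2 around the
    principal points c1, c2 with weight alpha and zoom ratios z1, z2."""
    p1, p2, c1, c2 = map(np.asarray, (p1, p2, c1, c2))
    return (1 - alpha) * ((p1 - c1) * z1 + c1) + alpha * ((p2 - c2) * z2 + c2)

def interpolate_value(i1, i2, alpha):
    """Eq. (2): blend the pixel values with the same weight alpha."""
    return (1 - alpha) * i1 + alpha * i2
```

With alpha = 0 the virtual view coincides with reference camera 1, and with alpha = 1 with reference camera 2; intermediate values sweep the viewpoint between them.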
Synthesizing them in order of distance from the viewpoint
completes view interpolation for the dynamic regions of the soccer scene.

5. VIDEO-BASED AUGMENTED REALITY SYSTEM
One of the most important issues for AR systems is geometric registration between the real and the virtual world: generating a correct view of a virtual object and overlaying it onto a view of the real world. Many registration methods have been proposed, including methods using positioning sensors [3], vision-based methods using captured AR images [10, 19, 25], and methods combining both [4, 20]. In the proposed system, the virtual objects overlaid on the real world are the images of the players and ball of the soccer scene synthesized from uncalibrated cameras. Therefore, even if the 3D position and pose of the HMD could be obtained, they would be of no use for registration between the virtual soccer scene and the desktop stadium model. We therefore propose a new image-based registration method using projective geometry between cameras.

5.1 Calculating the position of the observer viewpoint
In order to display the soccer scene on the desktop stadium model, we need to generate the soccer scene at the same viewpoint as the HMD camera and overlay the image onto the HMD. Our view-synthesis algorithm is based on view interpolation between the two neighboring cameras near the virtual viewpoint, as described in Section 4. The viewpoint position of the generated soccer image is therefore specified by three elements: (a) the two neighboring reference cameras, (b) the interpolating weight between the two reference cameras, and (c) the zoom ratio between the real camera and the virtual camera. We need to determine these elements from the HMD camera image so that the HMD viewpoint coincides with the viewpoint of the generated soccer image.

Figure 3: Examples of edge images (top) and detected natural feature lines (bottom).
Detecting feature lines. The image of the soccer stadium model captured by the HMD camera contains natural feature lines that are easy to track, such as the lines of the penalty area and the goal area. We employ these natural feature lines instead of artificial markers, which reduces the effort of placing markers. The Canny operator [6] is first applied for edge detection, and all edge points are mapped into the Hough space. The strong peaks that form the lines of the penalty area and the goal area are then found in the Hough space. The results of line detection are shown in Figure 3 (bottom) together with the edge images (top). All elements for specifying the virtual viewpoint position are determined from these natural feature lines, which must be tracked in the HMD camera image at every frame. In the examples of Figure 3, four lines are tracked and used for determination of the viewpoint.

Figure 4: Location of the vanishing points.

Determination of reference cameras and interpolating weight. We use a vanishing point for the selection of the reference cameras and the determination of the interpolating weight between them. A vanishing point is the point to which the extensions of parallel lines appear to converge in a perspective projection. In the proposed method, the orientation of the user's view is estimated from the position of the vanishing point, and the two cameras whose orientations are closest to the user's view orientation are selected as the reference cameras. In advance, the locations of the vanishing points in all viewpoint images captured at the stadium are measured by extending the lines of the goal area and the penalty area. Whenever an HMD camera image is captured, the feature lines are detected in the image and the vanishing point location is measured in the same way (see Figure 4). Only the horizontal component of the vanishing point location is used for calculating the viewpoint position, because we assume that the user moves the viewpoint almost horizontally from side to side. We also assume that all cameras capturing the soccer match at the stadium are placed at almost the same height. Under these assumptions, we select as reference camera images the two stadium images whose vanishing points are closest to the vanishing point in the HMD camera image. The relative distance between the vanishing points of the reference cameras and that of the HMD camera then determines the interpolating weight w as

w = (x_hmd - x_stL) / (x_stR - x_stL)   (3)

where V_stL(x_stL, y_stL) and V_stR(x_stR, y_stR) represent the vanishing points in the two reference camera images, and V_hmd(x_hmd, y_hmd) represents the vanishing point in the HMD camera image.
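As an illustration of this step, the vanishing point can be computed as the intersection of two detected field lines (in homogeneous coordinates, the cross product of the two line vectors), after which Eq. (3) gives the weight. The function names below are hypothetical.

```python
import numpy as np

def vanishing_point(line_a, line_b):
    """Intersect two lines given in homogeneous form (a, b, c) with
    a*x + b*y + c = 0; their cross product is the intersection point.
    Assumes the lines are not parallel in the image."""
    p = np.cross(np.asarray(line_a, float), np.asarray(line_b, float))
    return p[:2] / p[2]  # back to inhomogeneous (x, y)

def interpolating_weight(x_hmd, x_stL, x_stR):
    """Eq. (3): horizontal position of the HMD vanishing point relative
    to the vanishing points of the two reference cameras."""
    return (x_hmd - x_stL) / (x_stR - x_stL)
```

A weight of 0 or 1 means the HMD viewpoint coincides with one of the reference cameras; intermediate values interpolate between them.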
Figure 6: Observation of a soccer match on the desktop stadium model in the real world with the HMD.

Figure 5: Determination of the rendering positions on the HMD.

Determination of zoom ratio. Changing the interpolating weight alone does not generate the virtual view image at the same viewpoint as the HMD. For registration between the HMD camera image and the stadium camera images, we also need to determine the zoom ratio between these two cameras. If the extrinsic parameters of the HMD camera and the stadium cameras could be obtained, the geometric registration between them would follow directly; however, both the extrinsic and intrinsic parameters are unknown, as the proposed method uses uncalibrated cameras. An image-based registration technique is therefore required, and the zoom ratio is adjusted in addition to the interpolating weight. We assume that the position change along the side line (right-and-left movement) is controlled by the interpolating weight between the reference stadium cameras, and that the position change along the goal line (back-and-forth movement) is controlled by the zoom ratio between the stadium cameras and the HMD camera. Under this assumption, the focal length of the stadium camera f_st and that of the HMD camera f_hmd determine the zoom ratio as

z = f_hmd / f_st.   (4)

The focal lengths of the HMD camera and the stadium cameras are actually fixed, but the zoom ratio can be computed by virtually changing the focal length of the HMD camera, regarding zooming as a change of focal length. As uncalibrated cameras are used in our approach, the intrinsic parameters of the cameras are unknown. The focal length is computed from two vanishing points V1(x_v1, y_v1) and V2(x_v2, y_v2) by the following equation:

x_v1 x_v2 + y_v1 y_v2 + f^2 = 0.   (5)

Here it is assumed that the skew of the camera is 0, the aspect ratio is 1, and the principal point is the center of the image.
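Under these assumptions, Eq. (5) can be solved directly for f once the two vanishing points are expressed in coordinates centered on the principal point. A minimal sketch follows; the function name is illustrative, and the two vanishing points are assumed to correspond to orthogonal directions on the ground plane (e.g. goal line and side line), which is what makes their dot product negative.

```python
import math

def focal_from_vanishing_points(v1, v2):
    """Eq. (5): x_v1*x_v2 + y_v1*y_v2 + f^2 = 0, with vanishing points
    given relative to the principal point. For orthogonal directions the
    dot product is negative, and -dot is the squared focal length."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    if dot >= 0:
        raise ValueError("vanishing points not consistent with Eq. (5)")
    return math.sqrt(-dot)
```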
The detailed explanation is found in [26].

5.2 Presentation with HMD
We use a homography transformation for the registration between the HMD camera images and the synthesized soccer scenes. In order to generate natural views of a soccer match, the dynamic objects need to be rendered at the correct positions on the stadium model. A homographic matrix determines the positions of the players and ball in the HMD camera images; it represents the transformation between the ground plane of the real soccer scene and the plane of the stadium model captured in the HMD camera image. The homography is computed from the four or more corner points of the goal area and the penalty area, using the intersection points of the detected feature lines as corner points. The position of each player on the HMD could be determined by transforming the foot position in the real camera images to the HMD image with the homography of the ground plane. However, detecting the foot position in the real camera image is neither stable nor accurate, so the players may jitter in the HMD image. In order to calculate correct player positions, we use the centroid of the player region instead of the foot position; the centroid of the ball region is likewise used for rendering the ball. The centroidal line of each player and of the ball (drawn as a red line for the ball in Figure 5) is projected from the two reference camera images onto the HMD camera image with the homography, using the following equation:

p_hmd = H p_st   (6)

where H is the homographic matrix representing the transformation between the planes, and p_st, p_hmd are the homogeneous coordinates of the position in the reference camera image and in the HMD camera image, respectively. The intersection point g' of the projected lines is the position of the player/ball on the stadium model.

Figure 7: Camera configuration at the soccer stadium.
We then calculate the distances between the centroid and the ground plane, h1 and h2, by back-projecting the intersection point g' into each reference image. The following equation gives the distance between
Figure 8: Soccer scenes taken at the real stadium and overlaid scenes on the desktop stadium model.

the centroid and the plane of the stadium model, h, in the HMD camera image:

h = (1 - α) h1 z1 + α h2 z2   (7)

where z1, z2 are the zoom ratios and α is the interpolating weight. We can thus obtain the rendering positions of the dynamic regions on the stadium model even when players are jumping off the ground. Since players and ball obviously exist on or over the soccer field, overlaying them onto the HMD camera images completes the destination image.

6. EXPERIMENTAL RESULTS
We have implemented a free viewpoint observation system for an actual soccer match. Figure 6 shows the proposed system, in which the observer sees the desktop stadium model on the table through the HMD. In preparation for observation, soccer matches were captured by multiple uncalibrated video cameras at two soccer stadiums: Oita Stadium in Oita City, one of the stadiums where the 2002 FIFA World Cup was held, and Edogawa athletics stadium in Tokyo, Japan. As Figure 7 shows, a set of four fixed cameras was placed on one side of the soccer field, mainly to capture the penalty area. The captured videos were converted to BMP-format image sequences of 24-bit RGB color images. Next, the fundamental matrices between the viewpoints of the cameras and the homographic matrices, between neighboring views, of the plane that forms the ground were computed from the images by manual selection of 50 corresponding feature points. The positions of the vanishing points were then calculated in each image of the actual camera positions using the lines of the goal/penalty area. In addition, for each frame of the image sequences, silhouettes of the dynamic regions were extracted. After every region was segmented and labeled, the regions of the same player in neighboring view images were put into correspondence using the homography of the ground plane between the views.
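Homography estimation from selected point correspondences can be sketched with the standard direct linear transform (DLT). This is a minimal illustration under the usual normalization-free setup, not the authors' code; names are hypothetical.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from four or
    more point correspondences using the direct linear transform: each
    pair contributes two linear constraints on the 9 entries of H, and
    the null vector of the stacked system (via SVD) gives H."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pt):
    """Transfer a point through H as in Eq. (6), then dehomogenize."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

In practice a normalized DLT (or a robust estimator) is preferred when the correspondences are noisy; with 50 manually selected points the over-determined system is solved in the same least-squares sense.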
The above process is implemented as preprocessing for arbitrary viewpoint observation, and the dataset is stored on a server PC. When the soccer match is watched from a remote location, the preprocessed data of the soccer scene are transferred via Ethernet. The projective geometry, i.e. the fundamental matrices and homographic matrices, and the vanishing point positions are supplied to a remote PC in advance. During observation, the textures and 2D positions of all player regions in the two reference camera images, together with the correspondence maps, are sent to the remote PC for each frame according to the observer's viewpoint. At the remote PC, once observation starts, the lines indicating the goal area and the penalty area of the stadium model are detected in the HMD camera image in each frame. The virtual view images of the dynamic regions are then synthesized according to the viewpoint position determined from the feature lines. Next, the homographic matrices of the field plane between each reference camera image and the HMD camera image are calculated, and the rendering positions of the dynamic objects are determined with them. Finally, the synthesized soccer scene is overlaid onto the real stadium model through the HMD. The online process is iterated until the viewer stops observation of the soccer match.

Figure 8 presents soccer scenes captured at Edogawa stadium and the results of overlaying them on the stadium model. The first and second columns are the reference camera images used for virtual view generation, and the third column is the synthesized virtual view image. The fourth column is the soccer scene displayed on the HMD; the interpolating weight and zoom ratio are indicated as w and z at the bottom of each image. For example, the image at the top of the last column was generated with an interpolating weight of 0.47 between camera 1 and camera 2 (the zoom ratio is indicated in the figure). We can see that the virtual ball is inserted naturally into the real world. When we compare the overlaid scene with the original soccer scene, the players are located at almost the correct positions on the stadium model. The appearance of the players and ball differs between the virtual view images and the overlaid soccer scenes because the locations of the players and ball are modified by the homography transformation according to the appearance of the ground plane in the HMD camera images. The overlaid soccer scene is thus comfortably fitted to the stadium model on the HMD.

Figure 9: Free viewpoint observation in mixed reality.

Figure 10: Close-up view of the inserted soccer players and ball in the real world.

Figure 9 shows free viewpoint video images of another soccer scene captured at Oita stadium, and Figure 10 presents some close-up views of the same scene. In Figure 9, from top left to bottom right, the results show that the motion of the players and ball is replayed smoothly. The rendered scenes look natural enough that the user does not feel any discomfort. However, the way the viewpoint positions and zoom ratios are decided is not yet stable enough, so the appearance of the objects sometimes shows a small error. We are currently working to improve the stability of the viewpoint position and zoom ratio determination.

7. CONCLUSIONS
This paper has presented a method for free viewpoint video synthesis and presentation of sporting events in mixed reality. An arbitrary view synthesis algorithm for a dynamic event in a large space has been extended to an AR application so that a soccer match can be observed on a desktop stadium model in the real world. In order to overlay the virtual view images of the dynamic objects onto the real world environment, a novel registration method, which is based on projective geometry between cameras, is introduced.
Only natural feature lines in the HMD camera images are used for the geometric registration, without any artificial markers or sensor devices. Strong calibration of the HMD camera and of the multiple cameras capturing the subject is not necessary. The proposed method can be applied to observation not only on the desktop stadium model, but also at any place the user likes. With the proposed concepts for mixed reality entertainment, sporting events held in a foreign country, such as the Olympic Games or World Cup matches, could even be observed at a domestic stadium.

8. ACKNOWLEDGMENTS
This work is supported in part by a Grant-in-Aid for the 21st Century Center of Excellence for Optical and Electronic Device Technology for Access Network from the Ministry of Education, Culture, Sports, Science, and Technology in Japan. The first author is a JSPS Research Fellow and is partly supported by a JSPS Research Fellowship for Young Scientists.

9. REFERENCES
[1] S. Avidan and A. Shashua. Novel view synthesis by cascading trilinear tensors. IEEE Trans. on Visualization and Computer Graphics, 4(4), 1998.
[2] R. T. Azuma. A survey of augmented reality. Presence, 6(4), 1997.
[3] M. Bajura, H. Fuchs, and R. Ohbuchi. Merging virtual objects with the real world: Seeing ultrasound. Commun. of the ACM, 36(7):52-62.
[4] M. Bajura and U. Neumann. Dynamic registration correction in video-based augmented reality systems. IEEE Computer Graphics and Applications, 15(5):52-60.
[5] T. Beier and S. Neely. Feature-based image metamorphosis. Proc. of SIGGRAPH 92, pages 35-42, 1992.
[6] J. Canny. A computational approach to edge detection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 8(6), 1986.
[7] S. E. Chen and L. Williams. View interpolation for image synthesis. Proc. of SIGGRAPH 93, 1993.
[8] CyberPlay.
[9] W. Du, H. Li, and A. Gagalowicz. Video based 3d soccer scene reconstruction. Proc. of Mirage 2003, pages 70-75, March 2003.
[10] V. Ferrari, T. Tuytelaars, and L. Van Gool.
Markerless augmented reality with a real-time affine region tracker. Proc. of the IEEE and ACM Intl. Symposium on Augmented Reality, pages 87-96.
[11] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press.
[12] N. Inamoto and H. Saito. Fly through view video generation of soccer scene. International Workshop on Entertainment Computing (IWEC 2002) Workshop Note, May 2002.
[13] N. Inamoto and H. Saito. Intermediate view generation of soccer scene from multiple videos. Proc. of International Conference on Pattern Recognition (ICPR 2002), volume 2, August 2002.
[14] N. Inamoto and H. Saito. Fly-through viewpoint video system for multi-view soccer movie using viewpoint interpolation. Proc. of Visual Communications and Image Processing (VCIP 2003), SPIE, 5150(122), July 2003.
[15] N. Inamoto and H. Saito. Immersive observation of virtualized soccer match at real stadium model. The Second International Symposium on Mixed and Augmented Reality (ISMAR 03), October 2003.
[16] T. Kanade, P. J. Narayanan, and P. W. Rander. Virtualized reality: concepts and early results. Proc. of IEEE Workshop on Representation of Visual Scenes, pages 69-76.
[17] I. Kitahara, Y. Ohta, H. Saito, S. Akimichi, T. Ono, and T. Kanade. Recording multiple videos in a large-scale space for large-scale virtualized reality. Proc. of International Display Workshops (AD/IDW '01), 2001.
[18] T. Koyama, I. Kitahara, and Y. Ohta. Live mixed-reality 3d video in soccer stadium. The Second International Symposium on Mixed and Augmented Reality (ISMAR 03), October 2003.
[19] K. N. Kutulakos and J. Vallino. Affine object representations for calibration-free augmented reality.
Proc. IEEE Virtual Reality Ann. Int. Symp. (VRAIS 96), 1996.
[20] U. Neumann, S. You, J. Hu, B. Jiang, and J. W. Lee. Augmented virtual environments (AVE): Dynamic fusion of imagery and 3d models. Proc. of IEEE Virtual Reality 2003.
[21] S. Prince, A. D. Cheok, F. Farbiz, T. Williamson, N. Johnson, M. Billinghurst, and H. Kato. 3d live: Real time captured content for mixed reality. Proc. of the International Symposium on Mixed and Augmented Reality (ISMAR 02), pages 7-13, September 2002.
[22] H. Saito, S. Baba, and T. Kanade. Appearance-based virtual view generation from multicamera videos captured in the 3-d room. IEEE Trans. on Multimedia, 5(3), September 2003.
[23] S. M. Seitz and C. R. Dyer. View morphing. Proc. of SIGGRAPH 96, pages 21-30, 1996.
[24] S. M. Seitz and C. R. Dyer. Photorealistic scene reconstruction by voxel coloring. Proc. Computer Vision and Pattern Recognition (CVPR 1997), 1997.
[25] Y. Seo and K. Hong. Calibration-free augmented reality in perspective. IEEE Trans. on Visualization and Computer Graphics, 6(4).
[26] G. Simon, A. W. Fitzgibbon, and A. Zisserman. Markerless tracking using planar structures in the scene. Proc. of the International Symposium on Augmented Reality, Oct. 2000.
[27] R. Y. Tsai. A versatile camera calibration technique for high-accuracy 3d machine vision metrology using off-the-shelf tv cameras and lenses. IEEE Journal of Robotics and Automation, RA-3(4), August 1987.
[28] M. D. Wheeler, Y. Sato, and K. Ikeuchi. Consensus surfaces for modeling 3d objects from multiple range images. DARPA Image Understanding Workshop.
[29] S. Yaguchi and H. Saito. Arbitrary viewpoint video synthesis from multiple uncalibrated cameras. IEEE Trans. on Systems, Man and Cybernetics, Part B, 34(1), February 2004.
More information