Panoramic 3D Reconstruction Using Rotational Stereo Camera with Simple Epipolar Constraints
Wei Jiang
Japan Science and Technology Agency, 4-1-8 Honcho, Kawaguchi-shi, Saitama, Japan

Masatoshi Okutomi, Shigeki Sugimoto
Tokyo Institute of Technology, O-okayama, Meguro, Tokyo, Japan

Abstract

In this paper, we propose a novel method for panoramic 3D scene recovery using rotational stereo cameras with simple epipolar constraints. By rotating two parallel stereo cameras about a vertical axis at a constant velocity, we acquire two sampled spatio-temporal volumes made of sequential images captured at a uniform angular interval. The two spatio-temporal volumes can be resampled into a set of multi-perspective panoramas. We analyze the epipolar geometry among the images (panoramas and original images) of the two spatio-temporal volumes. The result shows that only three types of simple epipolar constraints (the epipolar line is a row or a column of the image) exist in the two spatio-temporal volumes. We then compute a depth map from four image pairs using a multi-baseline algorithm with the three types of epipolar constraints: horizontal, vertical, and a combination of the two. Experimental results using both synthetic and real images show that our approach produces high-quality panoramic 3D reconstruction.

1. Introduction

3D reconstruction of panoramic (360-degree) environments is an important issue for various applications such as virtual reality, robot navigation, and environmental simulation. Many methods have been developed for panoramic 3D reconstruction of real scenes. They can be roughly divided into two approaches: stereo reconstruction from perspective panoramas (i.e., single-viewpoint images), and reconstruction from multi-perspective panoramas.

The research described in this paper was conducted while the first author was a PhD student at Tokyo Institute of Technology.
He is currently on loan to the National Institute of Industrial Safety, 1-4-6 Umezono, Kiyose, Tokyo, Japan.

In the first approach, two (or more) omni-directional cameras are used for estimating a depth map. Various types of omni-directional (or panoramic) imaging sensors have been designed [1, 5], and a geometric analysis of these sensors has been given by Baker and Nayar [1]. An omni-directional camera can capture a whole scene at a time into a panorama, so this approach is adequate for real-time applications such as robot navigation and video surveillance. In general, however, panoramas captured by omni-directional sensors have low spatial resolution, which leads to low measurement accuracy and limited density of the depth maps. Instead of using such special sensors, Kang and Szeliski [3] rotate standard cameras and create dense perspective (cylindrical) panoramas by resampling the regular images; they then estimate a dense depth map from the perspective panoramas. The restricted camera motion is beneficial for producing high-quality depth maps. However, to create the desired panoramas, we must ensure that each camera rotates about an axis passing through its optical center, and such careful settings may be required for each environment individually.

On the other hand, panoramic stereo reconstruction from multi-perspective panoramas [12] has attracted recent attention [2, 4, 9, 7, 11]. A set of multi-perspective panoramas is created by resampling regular perspective images captured by a single rotating camera whose optical center is offset from its rotation axis. Compared with the approach using perspective panoramas, this approach has two main advantages. First, only a single camera (mounted on a rotational stage) is required. This property enables us to design a portable and inexpensive image acquisition system.
Secondly, if we create two multi-perspective panoramas by extracting two symmetric columns from the regular images, the epipolar geometry consists of simple horizontal lines [4, 7, 9, 11]. This second advantage holds only for such symmetric pairs of multi-perspective panoramas. (A multi-perspective panorama is actually equivalent to the multiple-center-of-projection image [10], the manifold mosaic [8], and circular projection [9].)
Figure 1. Traditional panoramic imaging using a single rotating camera. (a) Single rotating camera. (b) Two panoramic images.

Figure 2. Baseline length.

However, the use of a single camera has the following two weak points. The first is that the accuracy of depth estimation is limited by the horizontal FOV (field of view) of the single camera. It is well known that the accuracy of depth estimation is influenced by the stereo baseline length between two images. With a single rotating camera, the maximum baseline length between two multi-perspective panoramas is limited not only by the rotation radius of the camera but also by the horizontal angle of the camera's FOV. To obtain a large baseline length that makes full use of the rotation radius, we generally need a camera with a large FOV, or multiple cameras. The second weak point is that depth estimation tends to suffer from noise and repeated patterns: it is well known that depth estimation from only two images suffers from image noise and from repeated patterns in the scene. To mitigate this problem, Li et al. [4] proposed an approach that estimates depths from a number of multi-perspective panoramas using a cylinder sweep algorithm or a multi-baseline matching technique on an approximated horizontal epipolar geometry.

To address these weak points, we propose a novel method for panoramic 3D reconstruction using rotational stereo cameras. In our method, we use two large collections of images taken by rotating two parallel cameras whose motions are constrained to a common circle in a plane. The two collections of regular perspective images are resampled into four multi-perspective panoramas.
We then estimate a depth map from four image pairs using a multi-baseline stereo algorithm with three types of simple epipolar constraints: horizontal, vertical, and a combination of the two. When depth is estimated from multi-perspective panoramas of a single camera, the baseline length is limited by the horizontal FOV of that camera. We show that our two-camera method is equivalent to a method using a single virtual camera whose horizontal FOV is larger than that of an actual single camera, which improves the accuracy of depth estimation as mentioned above. Moreover, a multi-baseline stereo algorithm over four image pairs produces high-quality depth maps by averaging out noise and reducing ambiguities caused by repeated patterns.

The remainder of the paper is structured as follows. Panorama imaging with two rotating cameras is described in Section 2. Section 3 describes the epipolar geometries among the images of the two spatio-temporal volumes. Depth estimation using a multi-baseline algorithm is described in Section 4. We present experimental results showing the validity of our method in Section 5, and conclude the paper in Section 6.

2. Panorama Imaging

In this section, we describe the panorama imaging process using two rotating cameras. We first briefly explain the traditional method that generates multi-perspective panoramas using a single off-center rotating camera, and then extend the single-camera case to two cameras.

2.1. Traditional panorama imaging

A camera is mounted on a rotational stage so that the optical center of the camera is offset from the rotation center, as shown in Figure 1(a). The camera looks outward from the rotation center. A stereo pair of multi-perspective panoramas is generated by extracting two symmetric slits from each regular image, as shown in Figure 1(b). In this case, the epipolar lines are the panorama rows [11].
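The slit-extraction step just described can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the array layout (frames indexed as image, row, column) and the function name are assumptions.

```python
import numpy as np

def slit_panoramas(frames, offset):
    """Build a stereo pair of multi-perspective panoramas from a
    sequence of regular images taken by one off-center rotating camera.

    frames : array of shape (N, H, W) -- N frames over one full turn
    offset : pixel offset of each slit from the image center column

    The left panorama stacks the column at center - offset from every
    frame; the right panorama stacks the column at center + offset.
    """
    frames = np.asarray(frames)
    n, h, w = frames.shape
    c = w // 2
    left = np.stack([f[:, c - offset] for f in frames], axis=1)   # (H, N)
    right = np.stack([f[:, c + offset] for f in frames], axis=1)  # (H, N)
    return left, right
```

Each panorama column then comes from a different camera position, which is exactly why the result is multi-perspective; corresponding points lie on the same panorama row.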
Figure 2 shows that a 3D point P is projected onto the two slits of the images captured by the camera at two different positions. Its depth is estimated by finding corresponding points on the two panoramas. The accuracy of depth estimation is affected by the baseline length, indicated by the thick line between the two camera positions in Figure 2. The baseline length depends on the angle (denoted α) between the two positions of the camera. Letting 2ψ be an
angle between the two slits, α is no more than 2ψ. This means that the accuracy of depth estimation is limited by the horizontal FOV of the camera.

2.2. Stereo panorama imaging

We use two parallel cameras mounted on a rotational stage, as shown in Figure 3. The two cameras rotate about a vertical axis passing through the midpoint of the line connecting their two optical centers. We obtain two spatio-temporal volumes composed of the regular images acquired by the two cameras, and then produce four panoramas by extracting symmetric y-slits from the two spatio-temporal volumes, as shown in Figure 4.

Figure 3. Two parallel rotating cameras.

Figure 4. Two spatio-temporal volumes and four multi-perspective panoramas.

These panoramas are denoted Ll, Lr, Rl, and Rr, respectively, as shown in Figure 4. The capital letter denotes the camera position, and the small letter denotes the relative slit position used for extracting the panorama. Assuming that one of the two cameras is virtually moved by 180° along the rotational path, so that the two optical centers coincide, our two-camera system is equivalent to a system with a single virtual camera that has four slits, as shown in Figure 5(b).

Figure 5. Comparison of slit angles. (a) Single-camera system. (b) Two-camera system.

The four panoramas acquired by the virtual camera can be obtained by rectifying the panoramas acquired by our imaging system. For example, when panoramas Rl and Rr are shifted by 180° in the horizontal direction, the four panoramas become the same as the panoramas generated from the four slits of the virtual camera. Figures 5(a) and (b) show that the angle between two slits of the virtual camera is much larger than that of the single camera.
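The baseline realized by two camera positions on the rotation circle is the chord between them, b = 2 r sin(α/2), and α is bounded by the angle between the slits; a wider slit angle therefore directly lengthens the usable baseline. A minimal sketch under these stated assumptions (the function name is ours):

```python
import math

def baseline_length(r, alpha):
    """Chord length between two camera positions on a circle of
    radius r separated by a rotation angle alpha (in radians)."""
    return 2.0 * r * math.sin(alpha / 2.0)

# b grows monotonically with alpha up to alpha = pi, where the
# baseline reaches the full diameter 2r of the rotation circle.
```

This is why the virtual camera of Figure 5(b), with its larger inter-slit angle, yields a longer baseline than a single camera of the same rotation radius.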
This means that the accuracy of depth estimation can be improved by using our imaging system with two cameras.

3. Epipolar Geometry

In general, epipolar constraints are used to reduce the search space for matching points in stereo vision. We use three types of simple epipolar constraints for finding corresponding points between the base panorama Rl and each of the other reference images. In this section, we explain (omitting proofs) that only three types of simple epipolar constraints exist in the two spatio-temporal volumes.

Figure 6. Geometric relation of the two cameras.

3.1. Horizontal epipolar constraints on panoramas

Figure 6 shows that a 3D point P with depth l is projected onto each of the four slits of the cameras at four different positions. C_r and C'_r denote the positions of the right
camera (optical centers) where P is projected onto the left and the right slit, respectively. C_l and C'_l denote the positions of the left camera in the same manner. The projected points p_ll, p_lr, p_rl, and p_rr appear on the four panoramas Ll, Lr, Rl, and Rr (see Figure 4), respectively.

Let (θ_0, y_0) be the coordinates of the base point p_rl on panorama Rl, and (θ_lr, y_lr) be the coordinates of the reference point p_lr on panorama Lr. We can easily see that y_lr = y_0, since the two rays C'_r P and C_l P are symmetric about the line OP. Therefore, the search for the corresponding point p_lr is restricted to the horizontal line y = y_0 in the reference panorama Lr, and we call this a horizontal epipolar constraint. Horizontal epipolar constraints also exist between Rr and Ll.

Let f(l) be the horizontal disparity between p_rl and p_lr. Using the triangle QOC_r in Figure 6, f(l) is represented as

    f(l) = 2 Δ_Rl − π,                                            (1)

where

    Δ_Rl = arctan( (l² tan ψ + r √(l²(1 + tan²ψ) − r²)) / ((r² − l²) tan ψ) ),

r is the camera rotation radius, and 2ψ denotes the angle between the two slits of the camera. The depth l can be computed from the horizontal disparity f(l) obtained by matching the points p_rl and p_lr.

3.2. Vertical epipolar constraints on panoramas

In Figure 6, Δ_R represents the rotation angle between C_r and C'_r. Using the triangle OC_rC'_r, Δ_R is represented as

    Δ_R = φ_Rl − φ_Rr = constant,                                 (2)

where φ_Rl is the angle between the extensions of OC_r and C_rQ, and φ_Rr is the angle between the extensions of OC'_r and C'_rQ. Eq. (2) indicates that the angle Δ_R has a constant value, independent of the depth l of the point P. Therefore, the search for the corresponding point p_rr on the reference panorama Rr is restricted to a column. We call this a vertical epipolar constraint. Vertical epipolar constraints also exist between Lr and Ll.
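Each disparity relation determines depth only implicitly, so in practice the depth is recovered by inverting the disparity function numerically. Assuming, as in ordinary stereo, that disparity decreases monotonically with depth over the working range, a simple bisection suffices. This is a sketch under that assumption, with names of our own choosing, not the authors' implementation:

```python
def invert_disparity(target, disp_fn, lo, hi, tol=1e-9):
    """Recover the depth l from an observed disparity by bisection,
    assuming disp_fn is continuous and strictly decreasing in l.

    target  : observed disparity value
    disp_fn : callable mapping a depth l to its predicted disparity
    lo, hi  : depth bracket with disp_fn(lo) >= target >= disp_fn(hi)
    """
    for _ in range(200):  # 200 halvings is ample for any practical tol
        mid = 0.5 * (lo + hi)
        if disp_fn(mid) > target:
            lo = mid  # predicted disparity too large -> depth too small
        else:
            hi = mid
        if hi - lo <= tol:
            break
    return 0.5 * (lo + hi)
```

With an inverse-depth disparity model such as d(l) = 1/l, `invert_disparity(0.5, lambda l: 1.0 / l, 0.1, 100.0)` converges to the expected depth of 2.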
Using the trigonometric relations in the triangles C_rQP and C'_rQP, the vertical disparity g(l) between p_rl and p_rr is represented as

    g(l) = 2 r y_0 sin ψ / ( r sin ψ + √(l² − r² cos²ψ) ),        (3)

where y_0 is the vertical coordinate of the base point p_rl, and the other notations have the same meanings as in Eq. (1). The depth l can then be computed from the vertical disparity g(l) obtained by matching the points p_rl and p_rr.

3.3. Horizontal epipolar constraints between a panorama and an original image

In our approach, the two cameras are set up as an ordinary parallel stereo pair. In this case, horizontal epipolar constraints are also satisfied between the two original images (Lo and Ro in Figure 4). The base point p_rl on panorama Rl also exists in the original image Ro. Therefore, the reference point, denoted p_lo in Figure 4, on the original image Lo has the same vertical coordinate as the base point p_rl. This means that the search for the corresponding point p_lo is restricted to the horizontal line y = y_0 on the reference image Lo; that is, horizontal epipolar constraints exist between the panorama Rl and the original image Lo. Letting d(l) be the disparity between p_rl and p_lo, the relation between the disparity d(l) and the depth l is simply

    d(l) = 2 r f / l,                                             (4)

where f is the focal length, and the other notations are the same as in Eq. (1). The depth l can then be computed from the horizontal disparity d(l) obtained by matching the points p_rl and p_lo.

4. Depth Estimation Using a Multi-baseline Algorithm

Figure 7. Epipolar constraints among the five images.

Figure 7 summarizes the relations among the five images (four panoramas and an original image) and the three types of epipolar constraints described in Section 3.
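Of the three disparity relations, Eq. (4) is the ordinary parallel-stereo one with baseline 2r, and it inverts in closed form. A minimal sketch (the function name and unit conventions are our assumptions):

```python
def depth_from_parallel_disparity(d, r, f):
    """Depth from the horizontal disparity between the base panorama
    column and the original image, inverting Eq. (4):
    d = 2*r*f / l  =>  l = 2*r*f / d.

    d : disparity in pixels (positive for a finite depth)
    r : camera rotation radius (the stereo baseline is 2r)
    f : focal length in pixels
    """
    if d <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return 2.0 * r * f / d
```

For example, with r = 0.2, f = 500, and a 10-pixel disparity, the recovered depth is 2 * 0.2 * 500 / 10 = 20 in the same units as r.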
According to Section 3, the depth l can be estimated using any one of the four stereo pairs. However, depth estimation using only one pair often causes matching errors due to image noise and repeated patterns. In this paper, to reduce such errors, we compute the distance of each pixel of the base panorama Rl using a multi-baseline algorithm [6] with the simple epipolar constraints.
4.1. Definition of SSD for each stereo pair

We now define four SSD (Sum of Squared Differences) values. Each represents the similarity between a region of the base panorama Rl and a region of one of the reference images. We then define an SSSD (Sum of SSD) value that sums up the four SSD values.

SSD between panoramas Rl and Lr. Let Rl(θ, y) and Lr(θ, y) be the pixel values at (θ, y) in panoramas Rl and Lr, respectively. As mentioned in Section 3.1, horizontal constraints exist between panoramas Rl and Lr. These constraints indicate that the search for a point (θ, y) on panorama Lr corresponding to (θ_0, y_0) on panorama Rl is restricted to y = y_0. Additionally, the disparity is determined by θ = θ_0 + f(l) once a depth l is chosen. Consequently, we define the SSD value representing the similarity between Rl(θ_0, y_0) and Lr(θ, y) as

    SSD_{Rl,Lr}(θ_0, y_0, l) = Σ_{(i,j)∈W} [Rl(θ_0 + i, y_0 + j) − Lr(θ_0 + f(l) + i, y_0 + j)]²,   (5)

where W is a matching window and f(l) is determined by Eq. (1).

SSD between panoramas Rl and Rr. The vertical constraints between panoramas Rl and Rr indicate that the search for a point (θ, y) on panorama Rr corresponding to (θ_0, y_0) on panorama Rl is restricted to θ = θ_0 + Δ_R. Moreover, the disparity is determined by y = y_0 + g(l) once a depth l is chosen. Letting Rr(θ, y) be the pixel value at (θ, y) in panorama Rr, we define the SSD value representing the similarity between Rl(θ_0, y_0) and Rr(θ, y) as

    SSD_{Rl,Rr}(θ_0, y_0, l) = Σ_{(i,j)∈W} [Rl(θ_0 + i, y_0 + j) − Rr(θ_0 + Δ_R + i, y_0 + g(l) + j)]²,   (6)

where g(l) is determined by Eq. (3).

SSD between panoramas Rl and Ll. The epipolar constraint between panoramas Rl and Ll is neither horizontal nor vertical, but it can be described as a combination of the vertical and horizontal constraints, as shown in Figure 7.
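One SSD term of the form used in Eqs. (5)-(8) can be sketched as follows. The indexing convention (images indexed as image[y, x]) and all names here are our assumptions, not the authors' code:

```python
import numpy as np

def ssd(base, ref, x0, y0, dx, dy, half):
    """Sum of squared differences between a (2*half+1)^2 window
    centred at (x0, y0) in the base image and the window shifted by
    the constraint-determined offset (dx, dy) in the reference image.
    """
    w = np.arange(-half, half + 1)
    b = base[y0 + w[:, None], x0 + w[None, :]].astype(np.float64)
    r = ref[y0 + dy + w[:, None], x0 + dx + w[None, :]].astype(np.float64)
    return float(np.sum((b - r) ** 2))
```

For Eq. (5) the offset would be (dx, dy) = (f(l), 0); for Eq. (6), (Δ_R, g(l)); for Eq. (7), (Δ_R + f(l), g(l)); and for Eq. (8), (d(l), 0).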
Considering the constraints via Rl-Lr and Lr-Ll, the horizontal constraint between panoramas Rl and Lr is the one described in Section 3.1, and the vertical constraint between panoramas Lr and Ll is the same as that between panoramas Rl and Rr described in Section 3.2. This means that, once a depth l is chosen, the disparity on panorama Ll is determined by θ = θ_0 + Δ_R + f(l) and y = y_0 + g(l). Using Ll(θ, y) in the same manner, we define the SSD value representing the similarity between Rl(θ_0, y_0) and Ll(θ, y) as

    SSD_{Rl,Ll}(θ_0, y_0, l) = Σ_{(i,j)∈W} [Rl(θ_0 + i, y_0 + j) − Ll(θ_0 + Δ_R + f(l) + i, y_0 + g(l) + j)]².   (7)

SSD between panorama Rl and an original image. For each column of the base panorama Rl, we can extract the corresponding original image from the left spatio-temporal volume. As mentioned above, horizontal epipolar constraints exist between the column and the extracted original image. Letting Lo(x, y) be the pixel value on the reference image Lo, we define the SSD value between panorama Rl and the original image Lo as

    SSD_{Rl,Lo}(θ_0, y_0, l) = Σ_{(i,j)∈W} [Rl(θ_0 + i, y_0 + j) − Lo(x_0 + d(l) + i, y_0 + j)]²,   (8)

where x_0 is the x-coordinate of the base point p_rl in the right spatio-temporal volume.

4.2. Depth estimation from SSSD

Normally, the correct depth minimizes each of the SSD values defined above, so an estimated depth l̂ can be obtained by minimizing a single SSD. However, such depth estimation often causes errors due to image noise and repeated patterns. Therefore, we compute the SSSD (Sum of SSD) to average out noise and reduce ambiguities [6]. The SSSD is defined as

    SSSD_Rl(θ_0, y_0, l) = SSD_{Rl,Lr}(θ_0, y_0, l) + SSD_{Rl,Rr}(θ_0, y_0, l)
                         + SSD_{Rl,Ll}(θ_0, y_0, l) + SSD_{Rl,Lo}(θ_0, y_0, l).   (9)

We obtain the estimated depth l̂ that minimizes the SSSD.

5. Experimental Results

To evaluate the proposed approach, we estimate depth maps of both synthetic and real scenes. We also show comparisons between a previous method that uses a single camera [7] and our method.
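The multi-baseline search of Section 4 can be sketched as a per-pixel argmin over candidate depths of the summed SSD values, as in Eq. (9). This is a simplified illustration with assumed names, not the authors' implementation:

```python
import numpy as np

def estimate_depth(ssd_fns, depths):
    """Multi-baseline depth estimate for one pixel: for each candidate
    depth, sum the SSD values of all stereo pairs (the SSSD of Eq. 9)
    and keep the depth that minimises the sum.

    ssd_fns : list of callables, each mapping a depth l to the SSD of
              one base/reference pair at the pixel of interest
    depths  : iterable of candidate depths to sweep
    """
    depths = list(depths)
    sssd = [sum(fn(l) for fn in ssd_fns) for l in depths]
    return depths[int(np.argmin(sssd))]
```

Summing before minimising is what averages out noise: a pair whose own minimum is displaced by noise or a repeated pattern is outvoted by the other pairs, so the combined curve keeps a single sharp minimum at the true depth.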
In all experiments, the window size W for computing the SSD values is 5 × 5, the rotation radius r is 20 [cm], and the horizontal angle 2ψ between the left and the right slits of the camera is fixed.

5.1. Experimental results for synthetic scenes

Using computer graphics, we created a room (2.5 [m] × 2.5 [m]) and mapped textures onto its walls. We then obtained
the original images with additive Gaussian noise (mean 0, standard deviation 2 gray levels).

Figure 8. Experimental results for the synthesized scene. (a) Base (left) panorama generated by the previous method [7]. (b) Depth map estimated by the previous method [7]. (c) Base panorama Rl generated by the proposed method. (d) Depth map estimated by the proposed method.

Figure 8 shows several results for the synthetic environment using both methods. Figures 8(a) and (c) show the base panoramic images (360° multi-perspective panoramas) generated by the two methods, and Figures 8(b) and (d) show the estimated depth maps. The result of the previous method (b) is clearly influenced by image noise and occlusions. In contrast, the result of our method (d) is greatly improved, with errors caused by noise, repeated patterns, and occlusions reduced (see the regions to the right of the ball and the cylinder in the room).

To evaluate the effects of the proposed device (its large baseline length) and the proposed algorithm separately, we also compared estimated cross-sections. Figure 9 shows estimated cross-sections of the same scene. In the noise-free case, shown in Figure 9(a), the straight lines corresponding to the walls of the synthetic scene look like stairs because of quantization errors caused by the small baseline length of the previous method. Thanks to its larger baseline length, our method obtains smooth lines, as shown in Figure 9(b). The effect of the proposed algorithm can be observed by comparing Figures 9(c) and (d), where the errors caused by noise and occlusions are clearly reduced by the multi-baseline algorithm.

5.2. Experimental results for real scenes

We also applied both methods to a real scene. Figure 10 shows our image acquisition device, composed of two parallel cameras and a rotational stage. Figure 11 shows several results obtained by the two methods.
Figures 11(a) and (c) show the multi-perspective panoramas, and (b) and (d) show the estimated dense depth maps. Comparing results (b) and (d), we can observe that the estimation result of the proposed method is greatly improved.

Figure 10. Image acquisition device.

Figure 12 shows estimated cross-sections. The comparison between Figures 12(a) and (b) indicates that our method improves the accuracy of depth estimation for real scenes as well. To demonstrate our high-quality 3D reconstruction
for real scenes, we also show a top-down view and a center view of the reconstructed real scene in Figures 13(a) and (b). The proposed method obtains satisfactory results.

Figure 9. Comparison of estimated cross-sections of a synthetic scene. (a) Previous method, using the 100th row of the base panorama (Figure 8(a)), without noise. (b) Proposed method, using the 100th row of the base panorama (Figure 8(c)), without noise. (c) The same as (a), with noise. (d) The same as (b), with noise.

6. Summary and Conclusions

In this paper, we have proposed a novel method for panoramic 3D reconstruction using rotational stereo cameras. In the proposed method, two spatio-temporal volumes are acquired by two parallel cameras, and a depth map is then estimated from four image pairs using a multi-baseline algorithm with the three types of epipolar constraints. The proposed method offers a larger baseline length and produces high-quality depth maps by averaging out noise and reducing ambiguities caused by repeated patterns. In future work, we will study methods that maintain the simplicity of the epipolar constraints while using more than two cameras, and will examine approaches using telephoto lenses for outdoor environments.

References

[1] S. Baker and S. K. Nayar. A theory of single-viewpoint catadioptric image formation. International Journal of Computer Vision, 35(2), 1999.
[2] W. Jiang, S. Sugimoto, and M. Okutomi. Panoramic 3D reconstruction using rotating camera with planar mirrors. In Proceedings of OMNIVIS '05, October 2005.
[3] S. B. Kang and R. Szeliski. 3-D scene data recovery using omnidirectional multibaseline stereo. International Journal of Computer Vision, 25(2), 1997.
[4] Y. Li, H.-Y. Shum, C.-K. Tang, and R. Szeliski. Stereo reconstruction from multiperspective panoramas. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(1):45-62, January 2004.
[5] S. K. Nayar and V. Peri. Folded catadioptric cameras. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 1999.
[6] M. Okutomi and T. Kanade. A multiple-baseline stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(4), 1993.
[7] P. Peer and F. Solina. Panoramic depth imaging: single standard camera. International Journal of Computer Vision, 47(1-3), 2002.
[8] S. Peleg and M. Ben-Ezra. Stereo panorama with a single camera. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 1999.
[9] S. Peleg, M. Ben-Ezra, and Y. Pritch. Omnistereo: panoramic stereo imaging. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(3), 2001.
[10] P. Rademacher and G. Bishop. Multiple-center-of-projection images. In Proceedings of ACM SIGGRAPH 98, July 1998.
[11] S. M. Seitz and J. Kim. The space of all stereo images. International Journal of Computer Vision (Marr Prize Special Issue), 48(1):21-38, June 2002.
[12] D. N. Wood, A. Finkelstein, J. F. Hughes, C. E. Thayer, and D. H. Salesin. Multiperspective panoramas for cel animation. In Computer Graphics Proceedings (SIGGRAPH 97), August 1997.
Figure 11. Experimental results for the real scene. (a) Base (left) panorama generated by the previous method [7]. (b) Depth map estimated by the previous method [7]. (c) Base panorama Rl generated by the proposed method. (d) Depth map estimated by the proposed method.

Figure 12. Comparison of estimated cross-sections of the real scene. (a) Result estimated by the previous method from a row of Figure 11(a). (b) Result estimated by the proposed method from the same row of Figure 11(c).

Figure 13. Two views of the 3D reconstruction result for the real scene. (a) A top view of the real scene. (b) A different view of the real scene.
More informationIMA Preprint Series # 2105
ROTATING LINE CAMERAS: EPIPOLAR GEOMETRY AND SPATIAL SAMPLING By Fay Huang Shou Kang Wei and Reinhard Klette IMA Preprint Series # 105 ( March 006 ) INSTITUTE FOR MATHEMATICS AND ITS APPLICATIONS UNIVERSITY
More informationA Calibration Algorithm for POX-Slits Camera
A Calibration Algorithm for POX-Slits Camera N. Martins 1 and H. Araújo 2 1 DEIS, ISEC, Polytechnic Institute of Coimbra, Portugal 2 ISR/DEEC, University of Coimbra, Portugal Abstract Recent developments
More informationFundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision
Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching
More informationDynamic Pushbroom Stereo Mosaics for Moving Target Extraction
Dynamic Pushbroom Stereo Mosaics for Moving Target Extraction Paper ID #413 Abstract Our goal is to rapidly acquire panoramic mosaicing maps with information of all 3D (moving) targets as a light aerial
More informationFlexible Calibration of a Portable Structured Light System through Surface Plane
Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured
More informationImage-Based Rendering
Image-Based Rendering COS 526, Fall 2016 Thomas Funkhouser Acknowledgments: Dan Aliaga, Marc Levoy, Szymon Rusinkiewicz What is Image-Based Rendering? Definition 1: the use of photographic imagery to overcome
More informationDept. of Adaptive Machine Systems, Graduate School of Engineering Osaka University, Suita, Osaka , Japan
An Application of Vision-Based Learning for a Real Robot in RoboCup - A Goal Keeping Behavior for a Robot with an Omnidirectional Vision and an Embedded Servoing - Sho ji Suzuki 1, Tatsunori Kato 1, Hiroshi
More informationPublic Library, Stereoscopic Looking Room, Chicago, by Phillips, 1923
Public Library, Stereoscopic Looking Room, Chicago, by Phillips, 1923 Teesta suspension bridge-darjeeling, India Mark Twain at Pool Table", no date, UCR Museum of Photography Woman getting eye exam during
More informationRealtime Omnidirectional Stereo for Obstacle Detection and Tracking in Dynamic Environments
Proc. 2001 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems pp. 31-36, Maui, Hawaii, Oct./Nov. 2001. Realtime Omnidirectional Stereo for Obstacle Detection and Tracking in Dynamic Environments Hiroshi
More informationStereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman
Stereo 11/02/2012 CS129, Brown James Hays Slides by Kristen Grauman Multiple views Multi-view geometry, matching, invariant features, stereo vision Lowe Hartley and Zisserman Why multiple views? Structure
More informationA Novel Stereo Camera System by a Biprism
528 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 16, NO. 5, OCTOBER 2000 A Novel Stereo Camera System by a Biprism DooHyun Lee and InSo Kweon, Member, IEEE Abstract In this paper, we propose a novel
More informationMinimal Solutions for Generic Imaging Models
Minimal Solutions for Generic Imaging Models Srikumar Ramalingam Peter Sturm Oxford Brookes University, UK INRIA Grenoble Rhône-Alpes, France Abstract A generic imaging model refers to a non-parametric
More informationSeamless Stitching using Multi-Perspective Plane Sweep
Seamless Stitching using Multi-Perspective Plane Sweep Sing Bing Kang, Richard Szeliski, and Matthew Uyttendaele June 2004 Technical Report MSR-TR-2004-48 Microsoft Research Microsoft Corporation One Microsoft
More informationRecap: Features and filters. Recap: Grouping & fitting. Now: Multiple views 10/29/2008. Epipolar geometry & stereo vision. Why multiple views?
Recap: Features and filters Epipolar geometry & stereo vision Tuesday, Oct 21 Kristen Grauman UT-Austin Transforming and describing images; textures, colors, edges Recap: Grouping & fitting Now: Multiple
More informationThe Space of All Stereo Images
The Space of All Stereo Images Steven M. Seitz Department of Computer Science and Engineering University of Washington, Seattle, WA seitz@cs.washington.edu Abstract A theory of stereo image formation is
More informationMultiview Radial Catadioptric Imaging for Scene Capture
Multiview Radial Catadioptric Imaging for Scene Capture Sujit Kuthirummal Shree K. Nayar Columbia University (c) (d) (e) 3D Texture Reconstruction BRDF Estimation Face Reconstruction Texture Map Acquisition
More informationStereo and Epipolar geometry
Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka
More informationIMAGE-BASED RENDERING TECHNIQUES FOR APPLICATION IN VIRTUAL ENVIRONMENTS
IMAGE-BASED RENDERING TECHNIQUES FOR APPLICATION IN VIRTUAL ENVIRONMENTS Xiaoyong Sun A Thesis submitted to the Faculty of Graduate and Postdoctoral Studies in partial fulfillment of the requirements for
More informationOn the Epipolar Geometry of the Crossed-Slits Projection
In Proc. 9th IEEE International Conference of Computer Vision, Nice, October 2003. On the Epipolar Geometry of the Crossed-Slits Projection Doron Feldman Tomas Pajdla Daphna Weinshall School of Computer
More informationStereo imaging ideal geometry
Stereo imaging ideal geometry (X,Y,Z) Z f (x L,y L ) f (x R,y R ) Optical axes are parallel Optical axes separated by baseline, b. Line connecting lens centers is perpendicular to the optical axis, and
More informationarxiv: v1 [cs.cv] 28 Sep 2018
Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,
More informationLecture 14: Computer Vision
CS/b: Artificial Intelligence II Prof. Olga Veksler Lecture : Computer Vision D shape from Images Stereo Reconstruction Many Slides are from Steve Seitz (UW), S. Narasimhan Outline Cues for D shape perception
More informationMultiview Reconstruction
Multiview Reconstruction Why More Than 2 Views? Baseline Too short low accuracy Too long matching becomes hard Why More Than 2 Views? Ambiguity with 2 views Camera 1 Camera 2 Camera 3 Trinocular Stereo
More informationOmnivergent Stereo-panoramas with a Fish-eye Lens
CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Omnivergent Stereo-panoramas with a Fish-eye Lens (Version 1.) Hynek Bakstein and Tomáš Pajdla bakstein@cmp.felk.cvut.cz, pajdla@cmp.felk.cvut.cz
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 12 130228 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Panoramas, Mosaics, Stitching Two View Geometry
More informationIntegration of Multiple-baseline Color Stereo Vision with Focus and Defocus Analysis for 3D Shape Measurement
Integration of Multiple-baseline Color Stereo Vision with Focus and Defocus Analysis for 3D Shape Measurement Ta Yuan and Murali Subbarao tyuan@sbee.sunysb.edu and murali@sbee.sunysb.edu Department of
More informationMultiple View Geometry
Multiple View Geometry CS 6320, Spring 2013 Guest Lecture Marcel Prastawa adapted from Pollefeys, Shah, and Zisserman Single view computer vision Projective actions of cameras Camera callibration Photometric
More informationDEPTH AND GEOMETRY FROM A SINGLE 2D IMAGE USING TRIANGULATION
2012 IEEE International Conference on Multimedia and Expo Workshops DEPTH AND GEOMETRY FROM A SINGLE 2D IMAGE USING TRIANGULATION Yasir Salih and Aamir S. Malik, Senior Member IEEE Centre for Intelligent
More informationMeasurement of Pedestrian Groups Using Subtraction Stereo
Measurement of Pedestrian Groups Using Subtraction Stereo Kenji Terabayashi, Yuki Hashimoto, and Kazunori Umeda Chuo University / CREST, JST, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan terabayashi@mech.chuo-u.ac.jp
More informationMultiple View Geometry
Multiple View Geometry Martin Quinn with a lot of slides stolen from Steve Seitz and Jianbo Shi 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 Our Goal The Plenoptic Function P(θ,φ,λ,t,V
More informationToday. Stereo (two view) reconstruction. Multiview geometry. Today. Multiview geometry. Computational Photography
Computational Photography Matthias Zwicker University of Bern Fall 2009 Today From 2D to 3D using multiple views Introduction Geometry of two views Stereo matching Other applications Multiview geometry
More informationChapter 3 Image Registration. Chapter 3 Image Registration
Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation
More informationMorphable 3D-Mosaics: a Hybrid Framework for Photorealistic Walkthroughs of Large Natural Environments
Morphable 3D-Mosaics: a Hybrid Framework for Photorealistic Walkthroughs of Large Natural Environments Nikos Komodakis and Georgios Tziritas Computer Science Department, University of Crete E-mails: {komod,
More informationStereo Image Rectification for Simple Panoramic Image Generation
Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,
More informationLecture 10: Multi view geometry
Lecture 10: Multi view geometry Professor Fei Fei Li Stanford Vision Lab 1 What we will learn today? Stereo vision Correspondence problem (Problem Set 2 (Q3)) Active stereo vision systems Structure from
More informationPrecise Omnidirectional Camera Calibration
Precise Omnidirectional Camera Calibration Dennis Strelow, Jeffrey Mishler, David Koes, and Sanjiv Singh Carnegie Mellon University {dstrelow, jmishler, dkoes, ssingh}@cs.cmu.edu Abstract Recent omnidirectional
More informationRectification and Distortion Correction
Rectification and Distortion Correction Hagen Spies March 12, 2003 Computer Vision Laboratory Department of Electrical Engineering Linköping University, Sweden Contents Distortion Correction Rectification
More informationPartial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems
Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Abstract In this paper we present a method for mirror shape recovery and partial calibration for non-central catadioptric
More informationA 3-D Scanner Capturing Range and Color for the Robotics Applications
J.Haverinen & J.Röning, A 3-D Scanner Capturing Range and Color for the Robotics Applications, 24th Workshop of the AAPR - Applications of 3D-Imaging and Graph-based Modeling, May 25-26, Villach, Carinthia,
More informationPassive 3D Photography
SIGGRAPH 99 Course on 3D Photography Passive 3D Photography Steve Seitz Carnegie Mellon University http:// ://www.cs.cmu.edu/~seitz Talk Outline. Visual Cues 2. Classical Vision Algorithms 3. State of
More information3D FACE RECONSTRUCTION BASED ON EPIPOLAR GEOMETRY
IJDW Volume 4 Number January-June 202 pp. 45-50 3D FACE RECONSRUCION BASED ON EPIPOLAR GEOMERY aher Khadhraoui, Faouzi Benzarti 2 and Hamid Amiri 3,2,3 Signal, Image Processing and Patterns Recognition
More informationImage Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania
Image Formation Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 18/03/2014 Outline Introduction; Geometric Primitives
More informationRendering with Concentric Mosaics
Rendering with Concentric Mosaics Heung-Yeung Shum Microsoft Research, China Li-Wei He Microsoft Research Abstract This paper presents a novel 3D plenoptic function, which we call concentric mosaics. We
More informationStereo with Mirrors*
Stereo with Mirrors* Sameer A. Nene and Shree K. Nayar Department of Computer Science Columbia University New York, NY 10027 Abstract In this paper, we propose the use of mirrors and a single camera for
More informationStereo vision. Many slides adapted from Steve Seitz
Stereo vision Many slides adapted from Steve Seitz What is stereo vision? Generic problem formulation: given several images of the same object or scene, compute a representation of its 3D shape What is
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Today: dense 3D reconstruction The matching problem
More informationWe are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists. International authors and editors
We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists 3,900 116,000 10M Open access books available International authors and editors Downloads Our authors
More informationMiniature faking. In close-up photo, the depth of field is limited.
Miniature faking In close-up photo, the depth of field is limited. http://en.wikipedia.org/wiki/file:jodhpur_tilt_shift.jpg Miniature faking Miniature faking http://en.wikipedia.org/wiki/file:oregon_state_beavers_tilt-shift_miniature_greg_keene.jpg
More informationRange Sensors (time of flight) (1)
Range Sensors (time of flight) (1) Large range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic sensors, infra-red sensors
More informationMulti-View Stereo for Static and Dynamic Scenes
Multi-View Stereo for Static and Dynamic Scenes Wolfgang Burgard Jan 6, 2010 Main references Yasutaka Furukawa and Jean Ponce, Accurate, Dense and Robust Multi-View Stereopsis, 2007 C.L. Zitnick, S.B.
More informationStereo Vision Image Processing Strategy for Moving Object Detecting
Stereo Vision Image Processing Strategy for Moving Object Detecting SHIUH-JER HUANG, FU-REN YING Department of Mechanical Engineering National Taiwan University of Science and Technology No. 43, Keelung
More informationThree-Dimensional Measurement of Objects in Liquid with an Unknown Refractive Index Using Fisheye Stereo Camera
Three-Dimensional Measurement of Objects in Liquid with an Unknown Refractive Index Using Fisheye Stereo Camera Kazuki Sakamoto, Alessandro Moro, Hiromitsu Fujii, Atsushi Yamashita, and Hajime Asama Abstract
More informationRoom Reconstruction from a Single Spherical Image by Higher-order Energy Minimization
Room Reconstruction from a Single Spherical Image by Higher-order Energy Minimization Kosuke Fukano, Yoshihiko Mochizuki, Satoshi Iizuka, Edgar Simo-Serra, Akihiro Sugimoto, and Hiroshi Ishikawa Waseda
More informationCorrespondence and Stereopsis. Original notes by W. Correa. Figures from [Forsyth & Ponce] and [Trucco & Verri]
Correspondence and Stereopsis Original notes by W. Correa. Figures from [Forsyth & Ponce] and [Trucco & Verri] Introduction Disparity: Informally: difference between two pictures Allows us to gain a strong
More informationBinocular Stereo Vision. System 6 Introduction Is there a Wedge in this 3D scene?
System 6 Introduction Is there a Wedge in this 3D scene? Binocular Stereo Vision Data a stereo pair of images! Given two 2D images of an object, how can we reconstruct 3D awareness of it? AV: 3D recognition
More informationEfficient Stereo Image Rectification Method Using Horizontal Baseline
Efficient Stereo Image Rectification Method Using Horizontal Baseline Yun-Suk Kang and Yo-Sung Ho School of Information and Communicatitions Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro,
More informationMETRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS
METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS M. Lefler, H. Hel-Or Dept. of CS, University of Haifa, Israel Y. Hel-Or School of CS, IDC, Herzliya, Israel ABSTRACT Video analysis often requires
More informationAccurate and Dense Wide-Baseline Stereo Matching Using SW-POC
Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC Shuji Sakai, Koichi Ito, Takafumi Aoki Graduate School of Information Sciences, Tohoku University, Sendai, 980 8579, Japan Email: sakai@aoki.ecei.tohoku.ac.jp
More informationStructure from Small Baseline Motion with Central Panoramic Cameras
Structure from Small Baseline Motion with Central Panoramic Cameras Omid Shakernia René Vidal Shankar Sastry Department of Electrical Engineering & Computer Sciences, UC Berkeley {omids,rvidal,sastry}@eecs.berkeley.edu
More informationCalibration of a fish eye lens with field of view larger than 180
CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Calibration of a fish eye lens with field of view larger than 18 Hynek Bakstein and Tomáš Pajdla {bakstein, pajdla}@cmp.felk.cvut.cz REPRINT Hynek
More informationSegmentation and Tracking of Partial Planar Templates
Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract
More informationConstraints on perspective images and circular panoramas
Constraints on perspective images and circular panoramas Marc Menem Tomáš Pajdla!!"# $ &% '(# $ ) Center for Machine Perception, Department of Cybernetics, Czech Technical University in Prague, Karlovo
More informationAbsolute Scale Structure from Motion Using a Refractive Plate
Absolute Scale Structure from Motion Using a Refractive Plate Akira Shibata, Hiromitsu Fujii, Atsushi Yamashita and Hajime Asama Abstract Three-dimensional (3D) measurement methods are becoming more and
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction
More informationProject 3 code & artifact due Tuesday Final project proposals due noon Wed (by ) Readings Szeliski, Chapter 10 (through 10.5)
Announcements Project 3 code & artifact due Tuesday Final project proposals due noon Wed (by email) One-page writeup (from project web page), specifying:» Your team members» Project goals. Be specific.
More informationEfficient Acquisition of Human Existence Priors from Motion Trajectories
Efficient Acquisition of Human Existence Priors from Motion Trajectories Hitoshi Habe Hidehito Nakagawa Masatsugu Kidode Graduate School of Information Science, Nara Institute of Science and Technology
More informationSYSTEM FOR ACTIVE VIDEO OBSERVATION OVER THE INTERNET
SYSTEM FOR ACTIVE VIDEO OBSERVATION OVER THE INTERNET Borut Batagelj, Peter Peer, Franc Solina University of Ljubljana Faculty of Computer and Information Science Computer Vision Laboratory Tržaška 25,
More informationRecap from Previous Lecture
Recap from Previous Lecture Tone Mapping Preserve local contrast or detail at the expense of large scale contrast. Changing the brightness within objects or surfaces unequally leads to halos. We are now
More informationDominant plane detection using optical flow and Independent Component Analysis
Dominant plane detection using optical flow and Independent Component Analysis Naoya OHNISHI 1 and Atsushi IMIYA 2 1 School of Science and Technology, Chiba University, Japan Yayoicho 1-33, Inage-ku, 263-8522,
More informationA Robust Two Feature Points Based Depth Estimation Method 1)
Vol.31, No.5 ACTA AUTOMATICA SINICA September, 2005 A Robust Two Feature Points Based Depth Estimation Method 1) ZHONG Zhi-Guang YI Jian-Qiang ZHAO Dong-Bin (Laboratory of Complex Systems and Intelligence
More informationDepth Measurement and 3-D Reconstruction of Multilayered Surfaces by Binocular Stereo Vision with Parallel Axis Symmetry Using Fuzzy
Depth Measurement and 3-D Reconstruction of Multilayered Surfaces by Binocular Stereo Vision with Parallel Axis Symmetry Using Fuzzy Sharjeel Anwar, Dr. Shoaib, Taosif Iqbal, Mohammad Saqib Mansoor, Zubair
More information