In-Situ 3D Indoor Modeler with a Camera and Self-Contained Sensors
Tomoya Ishikawa 1, Kalaivani Thangamani 1,2, Masakatsu Kourogi 1, Andrew P. Gee 3, Walterio Mayol-Cuevas 3, Keechul Jung 4 and Takeshi Kurata 1
1 National Institute of Advanced Industrial Science and Technology, Japan; 2 University of Tsukuba, Japan; 3 University of Bristol, United Kingdom; 4 Soongsil University, Korea
tomoya-ishikawa@aist.go.jp

Abstract. We propose a 3D modeler for supporting effective in-situ indoor modeling. The modeler allows a user to easily create models from a single photo through interaction techniques that take advantage of the features of indoor spaces, together with visualization techniques. To integrate the models, the modeler provides automatic integration functions using Visual SLAM and pedestrian dead-reckoning (PDR), as well as interactive tools for modifying the result. Moreover, to prevent a shortage of texture images for the models, our modeler automatically searches the 3D models created by the user for un-textured regions and intuitively visualizes shooting positions from which to photograph those regions. These functions make it possible for the user to easily create, on the fly, photorealistic indoor 3D models that have sufficient textures.

Keywords: 3D indoor modeling, Mixed reality, Virtualized object, Visual SLAM, Pedestrian dead-reckoning, Self-contained sensor

1 Introduction

Virtualized real objects made from photos enable virtual environments to enhance reality. This reduces the gap between the real and virtual world for a number of applications, such as pre-visualization for online furniture shopping and walk-through simulation. In particular, recently established self-localisation methods [1] for indoor environments have prompted attempts in plants and offices to cut down unnecessary human movement and to predict unsafe behavior based on the human traffic lines estimated by these methods.
For analyzing such data through visualization, photorealistic indoor 3D models made from real environments are quite useful. In addition, 3D models have come into use for navigation systems [2] not only in outdoor but also in indoor environments. In such systems, 3D models that resemble the real world are expected to help users intuitively understand their position and direction. However, creating photorealistic indoor 3D models is still a difficult task for anyone but professionals. In this research, we propose an interactive indoor 3D modeler that enables a user to create 3D models effectively and intuitively for augmenting the reality of the applications described above.
In our proposed modeler, the user creates local models from input photos captured at different positions, each photo being one unit of the modeling process. The local models are effectively integrated into a global model by using Visual SLAM [3], which can quickly create sparse maps of landmarks from video sequences, pedestrian dead-reckoning (PDR) [1] with self-contained sensors, and simple user interaction techniques. Modeling from a single photo allows the user to easily create photorealistic models texture-mapped with high-resolution, high-quality photos, as opposed to video frames, which often contain motion blur and are generally of lower quality. On the other hand, video sequences are suitable for capturing wide areas in a short time. Our modeler uses these complementary properties when integrating local models.

In order to create indoor 3D models easily from a single photo alone, our modeler uses interaction techniques that take advantage of the features of indoor environments and the geometric constraints derived from a photo, as well as visualization techniques [4]. This makes in-situ modeling possible. For stable and accurate integration of the created local models, our modeler provides a two-stage registration function that consists of automatic functions and interaction techniques. Furthermore, the modeler helps the user create more complete models by automatically detecting un-textured regions and recommending views from which to capture them.

This paper is organized as follows. Section 2 describes related work, and Sections 3, 4, and 5 present the overview of our proposed modeler, local modeling from a single photo, and global modeling for integrating local models, respectively. Finally, Section 6 summarizes conclusions and future prospects.

2 Related Work

3D modeling methods from photos can roughly be classified into two types.
One type comprises automatic methods that reconstruct 3D models without user interaction; the other comprises manual or semi-automatic methods built around interaction techniques. A state-of-the-art automatic modeling method has been proposed by Goesele et al. [5]. Their method reconstructs 3D models using Structure-from-Motion (SfM) [6], which estimates camera parameters and 3D structural information of scenes, followed by stereo matching to obtain dense 3D shapes from the photos. In stereo methods, the scene objects must be captured from many different viewpoints with overlapping regions in order to create accurate 3D models. It is therefore time-consuming to capture enough photos or video of indoor environments, which require inside-out video acquisition. In addition, the computational cost grows with the length of the video sequences, and the accuracy often does not meet practical needs.

Manual and semi-automatic modeling methods can produce high-quality models by taking advantage of the user's knowledge. Google SketchUp [7] provides sketch interfaces on photos for creating 3D models, but the photos are used only for matching photos and 3D models. The system proposed by Oh et al. [8] uses geometric information from an input photo to constrain 3D models along lines of sight (LoS) while the user models and edits them. However, this system requires a large amount of time to divide the photo into regions.

In automatic methods using SfM, the entire modeling process breaks down when the estimation of correspondences among photos fails. To compensate for this weakness, Debevec et al. [9] proposed a semi-automatic method that achieves stable SfM and creates models consisting of basic primitives by manually adding correspondences between edges on the 3D primitives and edges in the images. In this method, however, target objects must be approximated by the pre-determined basic primitives. Sinha et al. [10] and van den Hengel et al. [11] have proposed interactive 3D modelers that use a sparse map and camera parameters estimated by SfM. These systems rely heavily on SfM output to reduce the manpower needed for modeling, assuming that SfM can estimate all parameters successfully. Accordingly, when SfM fails, the whole modeling process fails. Furthermore, when the created models have critical un-textured regions, the user has to re-visit the site to capture texture images of those regions again.

One way to prevent such a shortage of texture images is in-situ modeling. Following this strategy, Neubert et al. [12] and Bunnum and Mayol-Cuevas [13] have proposed 3D modelers that effectively and quickly create 3D models near the target objects. However, the created models are simple wireframe models intended for tracking the objects, so they are not suitable for our target applications.

3 Overview of 3D Indoor Modeler

Fig. 1 shows the flowchart of our proposed modeler. As pre-processing, sparse maps in a global coordinate system are created by Visual SLAM and PDR to simplify the integration process described below. Note that we assume the intrinsic camera parameters have been estimated by conventional camera calibration methods.
Then, the user iteratively takes a photo, creates a local model from it, and integrates the model into the global coordinate system, building up the global model of the whole indoor environment. The local models are created in local coordinate systems estimated from vanishing points in each input photo. In these local coordinate systems, the user can effectively create 3D models through interaction techniques that exploit the features of indoor environments. Furthermore, during local modeling, the user can easily comprehend the shapes of the models being created through viewpoint changes, real-time mixed mapping of projective texture mapping (PTM) and depth mapping, and an adaptively controlled smart secondary view.

To integrate local models, our modeler estimates the transform parameters between the local and global coordinate systems by means of the sparse landmark maps generated in pre-processing and the result of image-feature matching; it also provides interactive tools for more stable parameter estimation. These functions enable the user to integrate models robustly without massive time consumption. After the integration, the modeler automatically detects un-textured regions in the global model and displays those regions, a recommended viewpoint for taking a texture image of them, and the position and direction of the user obtained from PDR, so that a more complete model can be created. With these supportive functions, the user is able to create indoor 3D models effectively on the fly.

Fig. 1. Flowchart of our proposed modeler: preprocessing (wide-ranging video capture; sparse mapping by Visual SLAM and PDR; output of sparse maps and camera parameters) followed by iterative 3D modeling (shooting a photo; transform-parameter estimation; interactive 3D modeling and checking; semi-automatic integration of local and global models; automatic detection of un-textured regions; view recommendation), repeated until the global model is complete.

4 Local Modeling from a Single Photo

4.1 Transform-Parameter Estimation

In indoor spaces, floors, walls, and furniture are typically arranged parallel or perpendicular to each other. These features facilitate modeling, since an orthogonal coordinate system can be fitted to the floors and walls that occupy large areas of a photo. Our proposed modeler exploits these features and estimates the transformation parameters between the local coordinate system and the camera coordinate system through simple, computer-vision-supported user interactions.

A local coordinate system can be constructed by selecting two pairs of lines that are parallel in the actual 3D room. The modeler first runs the Hough transform to detect lines in the photo and displays them to the user (Fig. 2-(a,b)). By clicking the displayed lines, the user provides pairs of parallel lines to the modeler. The 2D intersection point of each selected pair is a vanishing point of the photo. From the two vanishing points {e1, e2} and the focal length f of the camera, the rotation matrix R between the local coordinate system and the camera coordinate system is given by

R = [ r1  r2  r1 x r2 ],  where ri = (xi, yi, f)^T / ||(xi, yi, f)^T||  (i = 1, 2)

and (xi, yi) are the image coordinates of the vanishing point ei. After estimating R, the user sets the origin of the ground plane, which corresponds to the x-y plane of the local coordinate system.
This manipulation allows the modeler to determine the translation vector from the local coordinate system to the camera coordinate system. Moreover, the ground plane can be used to place the local model from a photo into the global coordinate system, on the assumption that the ground planes of both coordinate systems lie on the same plane. When no ground region is captured in a photo, the user should instead set a plane parallel to the ground plane.
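The two steps above (intersecting each selected line pair to obtain a vanishing point, then stacking the normalized back-projected directions into a rotation matrix) can be sketched in a few lines of linear algebra. This is a minimal sketch assuming image coordinates are already centered at the principal point and the focal length is given in pixels; the re-orthogonalization of the second axis is an extra safeguard against noisy vanishing points, not part of the paper's formulation.

```python
import numpy as np

def vanishing_point(p1, q1, p2, q2):
    """Vanishing point of a parallel-line pair: the intersection of two
    image lines, each given by two points, via homogeneous coordinates."""
    h = lambda p: np.array([p[0], p[1], 1.0])
    v = np.cross(np.cross(h(p1), h(q1)), np.cross(h(p2), h(q2)))
    return v[:2] / v[2]

def rotation_from_vanishing_points(e1, e2, f):
    """Rotation between local and camera coordinate systems from two
    vanishing points (coordinates relative to the principal point) and
    focal length f: R = [r1 r2 r1 x r2], ri = (xi, yi, f)^T normalized."""
    r1 = np.array([e1[0], e1[1], f], dtype=float)
    r1 /= np.linalg.norm(r1)
    r2 = np.array([e2[0], e2[1], f], dtype=float)
    r2 -= r1 * np.dot(r1, r2)   # safeguard: enforce exact orthogonality
    r2 /= np.linalg.norm(r2)
    return np.column_stack([r1, r2, np.cross(r1, r2)])
```

The returned matrix is orthonormal with determinant +1 whenever the two vanishing points back-project to non-parallel directions.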
Fig. 2. Transform-parameter estimation from a single photo. (a) Input photo; (b) lines detected by the Hough transform; (c) ground plane set by user interaction.

4.2 Interactive Tools

Assuming that each object in an indoor photo can be modeled as a set of quadrangular and freeform planes, our modeler provides two types of plane-creation tools.

Quadrangular tool: creates a 3D quadrangular plane from the 3D coordinates of two opposing corners given by mouse clicks. This tool is suitable for simple objects such as floors, walls, tables, and shelves.

Freeform tool: creates a 3D freeform plane from a set of 3D points lying on the contour, given through repeated mouse clicks. This tool is used for more complex objects.

For both tools, the depth of the first established point can be obtained by intersecting the line of sight through the clicked point on the photo with the plane nearest to the optical center of the photo, if such an intersection exists. From the initial viewpoint, which corresponds to the optical center of the photo, the user can easily understand the correspondence between the photo and the model being created. In particular, with the freeform tool, setting contour points on 3D planes feels the same as 2D interaction with the photo, so the user can create models intuitively. During these interactions, the normal of each plane can be toggled among several default directions, such as those of the x-y, y-z, and z-x planes. This function is especially effective in artificial indoor environments. In addition, the user can create models with the help of real-time mixed mapping of PTM and depth mapping, described below.

The created models can be translated, deformed, and deleted. For translation and deformation, the view-volume constraint lets the user control the depth and normal vector without changing the 2D shapes projected onto the input photo (Fig. 3).

Fig. 3. Normal manipulation with geometric constraint (red-colored plane: plane being manipulated; green line: view-volume).
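The depth initialization described above, intersecting the line of sight through a clicked pixel with the nearest existing plane, can be sketched as follows. The plane representation (normal n and offset d with n . X + d = 0, in camera coordinates) and all names are illustrative assumptions, not the modeler's actual API.

```python
import numpy as np

def depth_from_click(click_xy, f, planes):
    """Back-project a clicked pixel along its line of sight and return the
    3D point on the nearest existing plane it hits, or None if no plane
    lies in front of the camera along that ray.
    click_xy: pixel relative to the principal point; f: focal length;
    planes: list of (n, d) with plane equation n . X + d = 0."""
    ray = np.array([click_xy[0], click_xy[1], f], dtype=float)
    ray /= np.linalg.norm(ray)
    best_t, best_pt = None, None
    for n, d in planes:
        denom = np.dot(n, ray)
        if abs(denom) < 1e-9:
            continue  # ray parallel to this plane
        t = -d / denom
        if t > 0 and (best_t is None or t < best_t):
            best_t, best_pt = t, ray * t  # keep the closest intersection
    return best_pt
```

Once this first 3D point is fixed, subsequent contour points can be constrained to the same plane, matching the 2D-like interaction the tools provide.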
4.3 Visualization for Checking 3D Models

Texture-and-Depth Representation. The proposed modeler provides three texture-and-depth presentation modes (Fig. 4).

Projective texture mapping (PTM): re-projects the texture of the photo onto the 3D models, showing the correspondence between the shapes of the models and the textures.

Depth mapping: displays the depth from the viewpoint to the models as a grayscale image, showing the shapes of the models clearly.

Mixed mapping: displays the models by mixing PTM and depth mapping, giving a more shape-enhanced view than PTM alone.

These presentation modes are rendered by the GPU in real time, not only while viewing the models but also while creating and editing them, which makes them effective for confirming the shapes of models being created. It is often difficult for the user to confirm the shapes of models from the initial viewpoint using PTM alone. In such cases, depth mapping or mixed mapping provides good clues for confirming the shapes, finding missing planes, and adjusting the depth.

Fig. 4. Examples of PTM (left), depth mapping (center), and mixed mapping (right).

Smart Secondary View. To make the shapes of models easy to understand while they are being constructed, our modeler displays not only a primary view but also a secondary view (Fig. 5). This simultaneous representation helps the user intuitively create and check the models. We define the criteria for determining the secondary-view parameters as follows.

1. Update frequency: viewing parameters should not change frequently.
2. Point visibility: the next point to be created (corresponding to the mouse cursor) must not be occluded by other planes.
3. Front-side visibility: the view must not show the backside of the target plane.
4. Parallelism: the view should be parallel to the target plane.
5. FoV difference: the view should have a wide field of view (FoV) when the primary view has a narrow FoV, and vice versa.

The modeler searches for the secondary-view parameters based on the above criteria. To allow a real-time search, the viewing parameters are sampled coarsely.
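The coarse search over sampled viewing parameters can be driven by a simple per-candidate score. The sketch below combines three of the five criteria (front-side visibility, parallelism, and update frequency) into an illustrative linear score; the weights, the exact terms, and the function names are our own assumptions, not the paper's formulation.

```python
import numpy as np

def secondary_view_score(view_pos, view_dir, plane_center, plane_normal,
                         current_dir, w=(1.0, 1.0, 1.0)):
    """Score one candidate secondary view (higher is better). Rejects views
    of the plane's backside, rewards image planes parallel to the target
    plane, and rewards small changes from the current view direction."""
    to_plane = plane_center - view_pos
    to_plane = to_plane / np.linalg.norm(to_plane)
    if np.dot(to_plane, plane_normal) >= 0:
        return -np.inf                         # backside view: rejected
    frontality = -np.dot(to_plane, plane_normal)
    parallelism = abs(np.dot(view_dir, plane_normal))
    stability = np.dot(view_dir, current_dir)  # discourage frequent updates
    return w[0] * frontality + w[1] * parallelism + w[2] * stability

def pick_secondary_view(candidates, plane_center, plane_normal, current_dir):
    """Pick the best of a coarsely sampled set of (position, direction) pairs."""
    return max(candidates, key=lambda c: secondary_view_score(
        c[0], c[1], plane_center, plane_normal, current_dir))
```

Point visibility and the FoV-difference criterion would need scene geometry and the primary view's FoV, so they are omitted from this toy score.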
Fig. 5. Close-up of the secondary view (left) and the primary view (right), showing the plane being created.

5 Global Modeling from Local Models

5.1 Sparse Mapping Using Visual SLAM and Self-Contained Sensors

Video sequences are suitable for capturing wide areas in a short time compared with photos. Our modeler generates sparse maps of indoor environments, each consisting of a point cloud, by using Visual SLAM [3] on video sequences and PDR with self-contained sensors [1]. SfM generally has a high computational cost and needs a long calculation time to estimate accurate camera motion parameters and a map. Consequently, for smooth in-situ modeling operations, our modeler applies Visual SLAM, which estimates camera motion parameters and a map simultaneously and quickly, to the sparse mapping. Furthermore, by running PDR simultaneously with Visual SLAM, measurements of the user's position and direction can be used to set the position and direction of photos and video sequences in the global coordinate system, as well as the scale of the maps.

When the modeler handles multiple maps, they are placed in a global coordinate system based on measurements from the self-contained sensors. The global coordinate system is configured so that the Z axis and the X-Y plane correspond to the upward vertical direction and the ground plane, respectively. Adjustments of the rotation, translation, and scaling from the initial parameters estimated by PDR can be made with interactive tools. Fig. 6 shows two maps placed in a global coordinate system together with the camera paths. These sparse maps are used by the semi-automatic functions that integrate local and global models.

Fig. 6. Sparse maps by Visual SLAM and PDR in the global coordinate system.
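Placing a map in the global frame from PDR measurements amounts to composing a scale, a rotation about the vertical Z axis (the walking heading), and a translation, which the interactive tools then refine. This is a toy sketch; the heading-and-scale parameterization and all names are assumptions, since the paper does not specify the placement at this level of detail.

```python
import numpy as np

def map_placement(pdr_xy, pdr_heading, map_scale, floor_z=0.0):
    """Build a 4x4 homogeneous matrix placing a Visual SLAM map in the
    global frame: uniform scale, rotation by the PDR heading (radians)
    about the vertical Z axis, then translation to the PDR position."""
    c, s = np.cos(pdr_heading), np.sin(pdr_heading)
    T = np.eye(4)
    T[:3, :3] = map_scale * np.array([[c, -s, 0.0],
                                      [s,  c, 0.0],
                                      [0.0, 0.0, 1.0]])
    T[:3, 3] = [pdr_xy[0], pdr_xy[1], floor_z]
    return T
```

The interactive rotation, translation, and scaling adjustments then correspond to re-composing this matrix with user-supplied corrections.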
5.2 Semi-Automatic Integration of Local and Global Models

After a local model is created from a photo (Section 4), it is integrated into the global model. The integration process consists of automatic functions using Visual SLAM, PDR, and image-feature matching, plus interactive tools with which the user supplies the necessary information manually when the automatic functions fail to estimate the transform parameters. This two-stage process enables the user to integrate local models into the global model effectively and reliably.

In the automatic functions, the modeler first performs relocalisation against the sparse maps, using the photo from which the local model was built, via the relocalisation engine of Visual SLAM [3]; it then obtains camera motion parameters and their uncertainties for estimating the transform parameters between the local and global coordinate systems. When relocalisation succeeds for multiple maps, the modeler selects the most reliable camera motion parameters according to the uncertainties and the position and direction from PDR. When relocalisation fails, the modeler uses the position and direction from PDR alone.

However, the camera motion parameters estimated by Visual SLAM and PDR are not sufficiently accurate for registering local models. For more accurate registration, the modeler performs image-feature matching between two photos: the one used to create the current local model and the one used to create the nearest existing local model. Robust feature detectors and descriptors proposed in recent years, such as SIFT [14] and SURF [15], are quite useful for such image-feature matching. The 2D point correspondences are converted into 3D point correspondences using the 3D local models created by the user. Then, the transform parameters between the local and global coordinate systems are robustly estimated using RANSAC [16].
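Given 3D point correspondences between a local model and the global frame, the transform can be estimated with a RANSAC loop around a closed-form similarity fit. The sketch below uses an Umeyama-style least-squares solver on minimal 3-point samples; the inlier threshold, iteration count, and function names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (s, R, t) mapping src -> dst,
    via the Umeyama-style closed form on 3D point correspondences."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    X, Y = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(Y.T @ X)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (X ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def ransac_similarity(src, dst, iters=200, thresh=0.05, seed=0):
    """RANSAC wrapper: sample 3 correspondences, fit, count inliers,
    then refit on the best inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        s, R, t = fit_similarity(src[idx], dst[idx])
        err = np.linalg.norm((s * (R @ src.T)).T + t - dst, axis=1)
        inliers = err < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_similarity(src[best_inliers], dst[best_inliers])
```

A scale component is needed here because monocular Visual SLAM maps and single-photo local models each have their own arbitrary scale.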
After the automatic functions finish, the user checks whether the local model has been integrated correctly by viewing the displayed global model. Fig. 7 shows an example of automatic registration using the functions described above; the integrated local model, which overlaps a door of the global model, is accurately registered. When the integration is not accurate enough, the user can give 2D corresponding points manually or adjust the transform parameters (translation, rotation, and scaling) interactively.

Fig. 7. Example of automatic integration of a local model. Left: global model before integration; right: global model after integration.
5.3 Automatic Un-Textured-Region Detection and View Recommendation

To prevent shortages of texture images, our modeler automatically detects un-textured regions, i.e. regions occluded in all photos, and presents a recommended shooting position for capturing each region together with the user's current position. This intuitively prompts the user to re-capture texture images. Un-textured regions are detected from the integrated global model and from the intrinsic camera parameters and camera motion parameters of the photos in the global coordinate system. The automatic detector searches for planar regions occluded from all photo viewpoints and finds the dominant region, the one with the highest density of occluded regions, by a 3D window search. The modeler then searches for an appropriate viewpoint for capturing the region by evaluating a cost function, and recommends that viewpoint. The cost function is defined from the following criteria.

1. Observability: the viewpoint should capture the whole un-textured region.
2. Easiness: the viewpoint should be below the eye level of the user.
3. Parallelism: the view direction should be parallel to the un-textured region.
4. Distance: the viewpoint should be close to the un-textured region.

When the recommended viewpoint lies in an inapproachable position, the user can interactively choose another viewpoint rated by the above cost function. After the recommended viewpoint has been estimated, the user's position and direction are intuitively presented on the monitor together with the global model, the un-textured region, and the recommended viewpoint (Fig. 8).

Fig. 8. (a) Appearance of the user, wearing self-contained sensors for PDR, confirming an un-textured region; (b) detected un-textured region, recommended viewpoint, and the user's position and direction; (c) updated model.

6 Conclusions

We have proposed an in-situ 3D modeler that supports efficient modeling of indoor environments.
The modeler provides interaction techniques that take advantage of the features inherent to indoor environments and the geometric constraints derived from a photo, so that 3D models can be created easily, and it provides intuitive visualization for confirming the shapes of the created models. The created local models are integrated robustly by semi-automatic functions. Furthermore, presenting un-textured regions and recommended viewpoints for capturing them makes it possible for the user to create more complete models on the fly.
Our near-term work is to evaluate the effectiveness of the proposed interaction techniques, visualization, and supportive functions by creating models of actual indoor environments. For more effective modeling, we plan to develop functions to optimize a global model with overlapping local models, to suggest initial 3D primitives through machine learning of indoor-environment features, and to inpaint small un-textured regions.

References

1. M. Kourogi, N. Sakata, T. Okuma, and T. Kurata, "Indoor/Outdoor Pedestrian Navigation with an Embedded GPS/RFID/Self-contained Sensor System," In Proc. of 16th Int. Conf. on Artificial Reality and Telexistence (ICAT2006).
2. T. Okuma, M. Kourogi, N. Sakata, and T. Kurata, "A Pilot User Study on 3-D Museum Guide with Route Recommendation Using a Sustainable Positioning System," In Proc. of Int. Conf. on Control, Automation and Systems (ICCAS2007).
3. A. P. Gee, D. Chekhlov, A. Calway, and W. Mayol-Cuevas, "Discovering Higher Level Structure in Visual SLAM," IEEE Trans. on Robotics, vol.26, no.5.
4. T. Ishikawa, K. Thangamani, T. Okuma, K. Jung, and T. Kurata, "Interactive Indoor 3D Modeling from a Single Photo with CV Support," In Electronic Proc. of IWUVR2009.
5. M. Goesele, N. Snavely, B. Curless, H. Hoppe, and S. M. Seitz, "Multi-View Stereo for Community Photo Collections," In Proc. of Int. Conf. on Computer Vision (ICCV2007), pp.14-20.
6. N. Snavely, S. M. Seitz, and R. Szeliski, "Modeling the World from Internet Photo Collections," Int. Journal of Computer Vision, vol.80.
7. Google SketchUp.
8. B. M. Oh, M. Chen, J. Dorsey, and F. Durand, "Image-Based Modeling and Photo Editing," In Proc. of SIGGRAPH.
9. P. E. Debevec, C. J. Taylor, and J. Malik, "Modeling and Rendering Architecture from Photographs: A Hybrid Geometry- and Image-Based Approach," In Proc. of SIGGRAPH, pp.11-20.
10. S. N. Sinha, D. Steedly, R. Szeliski, M. Agrawala, and M. Pollefeys, "Interactive 3D Architectural Modeling from Unordered Photo Collections," ACM Trans. on Graphics, vol.27, no.5, article 5.
11. A. van den Hengel, A. Dick, T. Thormahlen, B. Ward, and P. H. S. Torr, "VideoTrace: Rapid Interactive Scene Modelling from Video," ACM Trans. on Graphics, vol.26, no.3, article 86.
12. J. Neubert, J. Pretlove, and T. Drummond, "Semi-Autonomous Generation of Appearance-based Edge Models from Image Sequences," In Proc. of IEEE/ACM Int. Symp. on Mixed and Augmented Reality, pp.79-89.
13. P. Bunnum and W. Mayol-Cuevas, "OutlinAR: An Assisted Interactive Model Building System with Reduced Computational Effort," In Proc. of IEEE/ACM Int. Symp. on Mixed and Augmented Reality, pp.61-64.
14. D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," Int. Journal of Computer Vision, vol.60, no.2.
15. H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, vol.110, no.3.
16. M. A. Fischler and R. C. Bolles, "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography," Communications of the ACM, vol.24, no.6, 1981.
DETECTION AND ROBUST ESTIMATION OF CYLINDER FEATURES IN POINT CLOUDS Yun-Ting Su James Bethel Geomatics Engineering School of Civil Engineering Purdue University 550 Stadium Mall Drive, West Lafayette,
More informationImage correspondences and structure from motion
Image correspondences and structure from motion http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 20 Course announcements Homework 5 posted.
More informationFundamental matrix. Let p be a point in left image, p in right image. Epipolar relation. Epipolar mapping described by a 3x3 matrix F
Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix F Fundamental
More informationA System of Image Matching and 3D Reconstruction
A System of Image Matching and 3D Reconstruction CS231A Project Report 1. Introduction Xianfeng Rui Given thousands of unordered images of photos with a variety of scenes in your gallery, you will find
More informationStructured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov
Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter
More information3D Fusion of Infrared Images with Dense RGB Reconstruction from Multiple Views - with Application to Fire-fighting Robots
3D Fusion of Infrared Images with Dense RGB Reconstruction from Multiple Views - with Application to Fire-fighting Robots Yuncong Chen 1 and Will Warren 2 1 Department of Computer Science and Engineering,
More informationDense 3D Reconstruction. Christiano Gava
Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction
More informationAccurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion
007 IEEE International Conference on Robotics and Automation Roma, Italy, 0-4 April 007 FrE5. Accurate Motion Estimation and High-Precision D Reconstruction by Sensor Fusion Yunsu Bok, Youngbae Hwang,
More informationEpipolar geometry-based ego-localization using an in-vehicle monocular camera
Epipolar geometry-based ego-localization using an in-vehicle monocular camera Haruya Kyutoku 1, Yasutomo Kawanishi 1, Daisuke Deguchi 1, Ichiro Ide 1, Hiroshi Murase 1 1 : Nagoya University, Japan E-mail:
More informationEstimation of Camera Pose with Respect to Terrestrial LiDAR Data
Estimation of Camera Pose with Respect to Terrestrial LiDAR Data Wei Guan Suya You Guan Pang Computer Science Department University of Southern California, Los Angeles, USA Abstract In this paper, we present
More informationSegmentation and Tracking of Partial Planar Templates
Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract
More informationMiniature faking. In close-up photo, the depth of field is limited.
Miniature faking In close-up photo, the depth of field is limited. http://en.wikipedia.org/wiki/file:jodhpur_tilt_shift.jpg Miniature faking Miniature faking http://en.wikipedia.org/wiki/file:oregon_state_beavers_tilt-shift_miniature_greg_keene.jpg
More informationAUTOMATED 3D MODELING OF URBAN ENVIRONMENTS
AUTOMATED 3D MODELING OF URBAN ENVIRONMENTS Ioannis Stamos Department of Computer Science Hunter College, City University of New York 695 Park Avenue, New York NY 10065 istamos@hunter.cuny.edu http://www.cs.hunter.cuny.edu/
More informationDepth Propagation with Key-Frame Considering Movement on the Z-Axis
, pp.131-135 http://dx.doi.org/10.1457/astl.014.47.31 Depth Propagation with Key-Frame Considering Movement on the Z-Axis Jin Woo Choi 1, Taeg Keun Whangbo 1 Culture Technology Institute, Gachon University,
More information3D Digitization of a Hand-held Object with a Wearable Vision Sensor
3D Digitization of a Hand-held Object with a Wearable Vision Sensor Sotaro TSUKIZAWA, Kazuhiko SUMI, and Takashi MATSUYAMA tsucky@vision.kuee.kyoto-u.ac.jp sumi@vision.kuee.kyoto-u.ac.jp tm@i.kyoto-u.ac.jp
More informationA Novel Algorithm for Pose and illumination Invariant Image Matching
A Novel Algorithm for Pose and illumination Invariant Image Matching Abstract: N. Reddy Praveen M.Tech(DECS), AITS, Rajampet, Andhra Pradesh, India. The challenges in local-feature-based image matching
More informationURBAN STRUCTURE ESTIMATION USING PARALLEL AND ORTHOGONAL LINES
URBAN STRUCTURE ESTIMATION USING PARALLEL AND ORTHOGONAL LINES An Undergraduate Research Scholars Thesis by RUI LIU Submitted to Honors and Undergraduate Research Texas A&M University in partial fulfillment
More informationMidterm Examination CS 534: Computational Photography
Midterm Examination CS 534: Computational Photography November 3, 2016 NAME: Problem Score Max Score 1 6 2 8 3 9 4 12 5 4 6 13 7 7 8 6 9 9 10 6 11 14 12 6 Total 100 1 of 8 1. [6] (a) [3] What camera setting(s)
More informationInstance-level recognition part 2
Visual Recognition and Machine Learning Summer School Paris 2011 Instance-level recognition part 2 Josef Sivic http://www.di.ens.fr/~josef INRIA, WILLOW, ENS/INRIA/CNRS UMR 8548 Laboratoire d Informatique,
More informationCapturing and View-Dependent Rendering of Billboard Models
Capturing and View-Dependent Rendering of Billboard Models Oliver Le, Anusheel Bhushan, Pablo Diaz-Gutierrez and M. Gopi Computer Graphics Lab University of California, Irvine Abstract. In this paper,
More informationComputer Vision Lecture 17
Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics 13.01.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Announcements Seminar in the summer semester
More information3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.
3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction
More informationComputer Vision Lecture 17
Announcements Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics Seminar in the summer semester Current Topics in Computer Vision and Machine Learning Block seminar, presentations in 1 st week
More informationActive Wearable Vision Sensor: Recognition of Human Activities and Environments
Active Wearable Vision Sensor: Recognition of Human Activities and Environments Kazuhiko Sumi Graduate School of Informatics Kyoto University Kyoto 606-8501, Japan sumi@vision.kuee.kyoto-u.ac.jp Masato
More informationStereo Vision. MAN-522 Computer Vision
Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in
More informationSpecular 3D Object Tracking by View Generative Learning
Specular 3D Object Tracking by View Generative Learning Yukiko Shinozuka, Francois de Sorbier and Hideo Saito Keio University 3-14-1 Hiyoshi, Kohoku-ku 223-8522 Yokohama, Japan shinozuka@hvrl.ics.keio.ac.jp
More informationMeasurement of Pedestrian Groups Using Subtraction Stereo
Measurement of Pedestrian Groups Using Subtraction Stereo Kenji Terabayashi, Yuki Hashimoto, and Kazunori Umeda Chuo University / CREST, JST, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan terabayashi@mech.chuo-u.ac.jp
More informationAppearance-Based Place Recognition Using Whole-Image BRISK for Collaborative MultiRobot Localization
Appearance-Based Place Recognition Using Whole-Image BRISK for Collaborative MultiRobot Localization Jung H. Oh, Gyuho Eoh, and Beom H. Lee Electrical and Computer Engineering, Seoul National University,
More informationOutdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera
Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Tomokazu Sato, Masayuki Kanbara and Naokazu Yokoya Graduate School of Information Science, Nara Institute
More informationLarge Scale 3D Reconstruction by Structure from Motion
Large Scale 3D Reconstruction by Structure from Motion Devin Guillory Ziang Xie CS 331B 7 October 2013 Overview Rome wasn t built in a day Overview of SfM Building Rome in a Day Building Rome on a Cloudless
More informationInstance-level recognition II.
Reconnaissance d objets et vision artificielle 2010 Instance-level recognition II. Josef Sivic http://www.di.ens.fr/~josef INRIA, WILLOW, ENS/INRIA/CNRS UMR 8548 Laboratoire d Informatique, Ecole Normale
More information3D Editing System for Captured Real Scenes
3D Editing System for Captured Real Scenes Inwoo Ha, Yong Beom Lee and James D.K. Kim Samsung Advanced Institute of Technology, Youngin, South Korea E-mail: {iw.ha, leey, jamesdk.kim}@samsung.com Tel:
More informationImage-based Modeling and Rendering: 8. Image Transformation and Panorama
Image-based Modeling and Rendering: 8. Image Transformation and Panorama I-Chen Lin, Assistant Professor Dept. of CS, National Chiao Tung Univ, Taiwan Outline Image transformation How to represent the
More informationAnnouncements. Recognition (Part 3) Model-Based Vision. A Rough Recognition Spectrum. Pose consistency. Recognition by Hypothesize and Test
Announcements (Part 3) CSE 152 Lecture 16 Homework 3 is due today, 11:59 PM Homework 4 will be assigned today Due Sat, Jun 4, 11:59 PM Reading: Chapter 15: Learning to Classify Chapter 16: Classifying
More informationViewpoint Invariant Features from Single Images Using 3D Geometry
Viewpoint Invariant Features from Single Images Using 3D Geometry Yanpeng Cao and John McDonald Department of Computer Science National University of Ireland, Maynooth, Ireland {y.cao,johnmcd}@cs.nuim.ie
More informationLocal features and image matching. Prof. Xin Yang HUST
Local features and image matching Prof. Xin Yang HUST Last time RANSAC for robust geometric transformation estimation Translation, Affine, Homography Image warping Given a 2D transformation T and a source
More informationFrom Structure-from-Motion Point Clouds to Fast Location Recognition
From Structure-from-Motion Point Clouds to Fast Location Recognition Arnold Irschara1;2, Christopher Zach2, Jan-Michael Frahm2, Horst Bischof1 1Graz University of Technology firschara, bischofg@icg.tugraz.at
More informationLocalization of Wearable Users Using Invisible Retro-reflective Markers and an IR Camera
Localization of Wearable Users Using Invisible Retro-reflective Markers and an IR Camera Yusuke Nakazato, Masayuki Kanbara and Naokazu Yokoya Graduate School of Information Science, Nara Institute of Science
More informationRecognition (Part 4) Introduction to Computer Vision CSE 152 Lecture 17
Recognition (Part 4) CSE 152 Lecture 17 Announcements Homework 5 is due June 9, 11:59 PM Reading: Chapter 15: Learning to Classify Chapter 16: Classifying Images Chapter 17: Detecting Objects in Images
More information3-D Shape Reconstruction from Light Fields Using Voxel Back-Projection
3-D Shape Reconstruction from Light Fields Using Voxel Back-Projection Peter Eisert, Eckehard Steinbach, and Bernd Girod Telecommunications Laboratory, University of Erlangen-Nuremberg Cauerstrasse 7,
More informationDETECTION OF 3D POINTS ON MOVING OBJECTS FROM POINT CLOUD DATA FOR 3D MODELING OF OUTDOOR ENVIRONMENTS
DETECTION OF 3D POINTS ON MOVING OBJECTS FROM POINT CLOUD DATA FOR 3D MODELING OF OUTDOOR ENVIRONMENTS Tsunetake Kanatani,, Hideyuki Kume, Takafumi Taketomi, Tomokazu Sato and Naokazu Yokoya Hyogo Prefectural
More informationPlayer Viewpoint Video Synthesis Using Multiple Cameras
Player Viewpoint Video Synthesis Using Multiple Cameras Kenji Kimura *, Hideo Saito Department of Information and Computer Science Keio University, Yokohama, Japan * k-kimura@ozawa.ics.keio.ac.jp, saito@ozawa.ics.keio.ac.jp
More informationMorphable 3D-Mosaics: a Hybrid Framework for Photorealistic Walkthroughs of Large Natural Environments
Morphable 3D-Mosaics: a Hybrid Framework for Photorealistic Walkthroughs of Large Natural Environments Nikos Komodakis and Georgios Tziritas Computer Science Department, University of Crete E-mails: {komod,
More informationEnsemble of Bayesian Filters for Loop Closure Detection
Ensemble of Bayesian Filters for Loop Closure Detection Mohammad Omar Salameh, Azizi Abdullah, Shahnorbanun Sahran Pattern Recognition Research Group Center for Artificial Intelligence Faculty of Information
More informationAugmenting Reality with Projected Interactive Displays
Augmenting Reality with Projected Interactive Displays Claudio Pinhanez IBM T.J. Watson Research Center, P.O. Box 218 Yorktown Heights, N.Y. 10598, USA Abstract. This paper examines a steerable projection
More informationMulti-view reconstruction for projector camera systems based on bundle adjustment
Multi-view reconstruction for projector camera systems based on bundle adjustment Ryo Furuakwa, Faculty of Information Sciences, Hiroshima City Univ., Japan, ryo-f@hiroshima-cu.ac.jp Kenji Inose, Hiroshi
More informationStereo and Epipolar geometry
Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka
More informationA REAL-TIME TRACKING SYSTEM COMBINING TEMPLATE-BASED AND FEATURE-BASED APPROACHES
A REAL-TIME TRACKING SYSTEM COMBINING TEMPLATE-BASED AND FEATURE-BASED APPROACHES Alexander Ladikos, Selim Benhimane, Nassir Navab Department of Computer Science, Technical University of Munich, Boltzmannstr.
More informationPERFORMANCE CAPTURE FROM SPARSE MULTI-VIEW VIDEO
Stefan Krauß, Juliane Hüttl SE, SoSe 2011, HU-Berlin PERFORMANCE CAPTURE FROM SPARSE MULTI-VIEW VIDEO 1 Uses of Motion/Performance Capture movies games, virtual environments biomechanics, sports science,
More informationImage Based Reconstruction II
Image Based Reconstruction II Qixing Huang Feb. 2 th 2017 Slide Credit: Yasutaka Furukawa Image-Based Geometry Reconstruction Pipeline Last Lecture: Multi-View SFM Multi-View SFM This Lecture: Multi-View
More information3D Reconstruction from Multi-View Stereo: Implementation Verification via Oculus Virtual Reality
3D Reconstruction from Multi-View Stereo: Implementation Verification via Oculus Virtual Reality Andrew Moran MIT, Class of 2014 andrewmo@mit.edu Ben Eysenbach MIT, Class of 2017 bce@mit.edu Abstract We
More informationIndoor-Outdoor Navigation System for Visually-Impaired Pedestrians: Preliminary Evaluation of Position Measurement and Obstacle Display
Indoor-Outdoor Navigation System for Visually-Impaired Pedestrians: Preliminary Evaluation of Position Measurement and Obstacle Display Takeshi KURATA 12, Masakatsu KOUROGI 1, Tomoya ISHIKAWA 1, Yoshinari
More informationLOCAL AND GLOBAL DESCRIPTORS FOR PLACE RECOGNITION IN ROBOTICS
8th International DAAAM Baltic Conference "INDUSTRIAL ENGINEERING - 19-21 April 2012, Tallinn, Estonia LOCAL AND GLOBAL DESCRIPTORS FOR PLACE RECOGNITION IN ROBOTICS Shvarts, D. & Tamre, M. Abstract: The
More informationAn Image Based 3D Reconstruction System for Large Indoor Scenes
36 5 Vol. 36, No. 5 2010 5 ACTA AUTOMATICA SINICA May, 2010 1 1 2 1,,,..,,,,. : 1), ; 2), ; 3),.,,. DOI,,, 10.3724/SP.J.1004.2010.00625 An Image Based 3D Reconstruction System for Large Indoor Scenes ZHANG
More informationAutomatic Generation of Indoor VR-Models by a Mobile Robot with a Laser Range Finder and a Color Camera
Automatic Generation of Indoor VR-Models by a Mobile Robot with a Laser Range Finder and a Color Camera Christian Weiss and Andreas Zell Universität Tübingen, Wilhelm-Schickard-Institut für Informatik,
More informationAutomatic generation of 3-d building models from multiple bounded polygons
icccbe 2010 Nottingham University Press Proceedings of the International Conference on Computing in Civil and Building Engineering W Tizani (Editor) Automatic generation of 3-d building models from multiple
More informationEpipolar Geometry CSE P576. Dr. Matthew Brown
Epipolar Geometry CSE P576 Dr. Matthew Brown Epipolar Geometry Epipolar Lines, Plane Constraint Fundamental Matrix, Linear solution + RANSAC Applications: Structure from Motion, Stereo [ Szeliski 11] 2
More information3D model search and pose estimation from single images using VIP features
3D model search and pose estimation from single images using VIP features Changchang Wu 2, Friedrich Fraundorfer 1, 1 Department of Computer Science ETH Zurich, Switzerland {fraundorfer, marc.pollefeys}@inf.ethz.ch
More informationAutomatic Photo Popup
Automatic Photo Popup Derek Hoiem Alexei A. Efros Martial Hebert Carnegie Mellon University What Is Automatic Photo Popup Introduction Creating 3D models from images is a complex process Time-consuming
More informationNinio, J. and Stevens, K. A. (2000) Variations on the Hermann grid: an extinction illusion. Perception, 29,
Ninio, J. and Stevens, K. A. (2000) Variations on the Hermann grid: an extinction illusion. Perception, 29, 1209-1217. CS 4495 Computer Vision A. Bobick Sparse to Dense Correspodence Building Rome in
More informationMultiple View Geometry
Multiple View Geometry Martin Quinn with a lot of slides stolen from Steve Seitz and Jianbo Shi 15-463: Computational Photography Alexei Efros, CMU, Fall 2007 Our Goal The Plenoptic Function P(θ,φ,λ,t,V
More informationInteraction Using Nearby-and-Far Projection Surfaces with a Body-Worn ProCam System
Interaction Using Nearby-and-Far Projection Surfaces with a Body-Worn ProCam System Takeshi Kurata 1 Nobuchika Sakata 13 Masakatsu Kourogi 1 Takashi Okuma 1 Yuichi Ohta 2 1 AIST, Japan 2 University of
More informationTopics to be Covered in the Rest of the Semester. CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester
Topics to be Covered in the Rest of the Semester CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester Charles Stewart Department of Computer Science Rensselaer Polytechnic
More informationCS 4758: Automated Semantic Mapping of Environment
CS 4758: Automated Semantic Mapping of Environment Dongsu Lee, ECE, M.Eng., dl624@cornell.edu Aperahama Parangi, CS, 2013, alp75@cornell.edu Abstract The purpose of this project is to program an Erratic
More informationNoah Snavely Steven M. Seitz. Richard Szeliski. University of Washington. Microsoft Research. Modified from authors slides
Photo Tourism: Exploring Photo Collections in 3D Noah Snavely Steven M. Seitz University of Washington Richard Szeliski Microsoft Research 2006 2006 Noah Snavely Noah Snavely Modified from authors slides
More informationDetecting Multiple Symmetries with Extended SIFT
1 Detecting Multiple Symmetries with Extended SIFT 2 3 Anonymous ACCV submission Paper ID 388 4 5 6 7 8 9 10 11 12 13 14 15 16 Abstract. This paper describes an effective method for detecting multiple
More informationRobot localization method based on visual features and their geometric relationship
, pp.46-50 http://dx.doi.org/10.14257/astl.2015.85.11 Robot localization method based on visual features and their geometric relationship Sangyun Lee 1, Changkyung Eem 2, and Hyunki Hong 3 1 Department
More informationScene Modeling for a Single View
Scene Modeling for a Single View René MAGRITTE Portrait d'edward James with a lot of slides stolen from Steve Seitz and David Brogan, 15-463: Computational Photography Alexei Efros, CMU, Fall 2005 Classes
More informationAccurate and Dense Wide-Baseline Stereo Matching Using SW-POC
Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC Shuji Sakai, Koichi Ito, Takafumi Aoki Graduate School of Information Sciences, Tohoku University, Sendai, 980 8579, Japan Email: sakai@aoki.ecei.tohoku.ac.jp
More informationPersonal Navigation and Indoor Mapping: Performance Characterization of Kinect Sensor-based Trajectory Recovery
Personal Navigation and Indoor Mapping: Performance Characterization of Kinect Sensor-based Trajectory Recovery 1 Charles TOTH, 1 Dorota BRZEZINSKA, USA 2 Allison KEALY, Australia, 3 Guenther RETSCHER,
More informationData-driven Depth Inference from a Single Still Image
Data-driven Depth Inference from a Single Still Image Kyunghee Kim Computer Science Department Stanford University kyunghee.kim@stanford.edu Abstract Given an indoor image, how to recover its depth information
More information