
Available online at ScienceDirect
Procedia - Social and Behavioral Sciences 97 (2013)
The 9th International Conference on Cognitive Science

Objects tracking from natural features in mobile augmented reality

Edmund Ng Giap Weng, Rehman Ullah Khan*, Shahren Ahmad Zaidi Adruce, Oon Yin Bee
Faculty of Cognitive Sciences and Human Development, Universiti Malaysia Sarawak, Kota Samarahan, Sarawak, Malaysia
* Corresponding author. Tel.: ; fax: . E-mail address: rehmanphdar@gmail.com

Abstract

Real-world objects are recognized by tracking-less and tracking-based techniques. Mobile augmented reality browsers are tracking-less systems, which acquire location data from the global positioning system and provide information in the form of maps or web links. Tracking-based techniques recognize objects either through markers or directly from the real-world objects themselves, without markers. Marker-based systems actually track the markers rather than the real objects and therefore hide the reality. Markerless (direct real-object tracking) systems use a client-server architecture; however, these are affected by network latency. The Smartphone is capable of recognizing and tracking real-world objects without any server or marker, and it can guide users about their location and provide information in a convenient way. Therefore, an improved algorithm for tracking real-world objects through natural features was formulated. A modified version of Speeded Up Robust Features (SURF) was used for feature extraction from the live mobile camera image and for recognition. The pose matrix was calculated from the extracted features by homography. The adapted algorithm was tested in a mobile AR prototype application on an iPhone. The results show that the formulated algorithm recognized and tracked real-world objects from natural features in a fast, easy and convenient way.

2013 The Authors. Published by Elsevier Ltd. Open access under CC BY-NC-ND license. Selection and/or peer-review under responsibility of the Universiti Malaysia Sarawak.

Keywords: Augmented reality; natural features; marker-less; outdoor; image recognition; tracking

1. Introduction

Augmented reality, using emerging technologies such as the global positioning system (GPS), accelerometer, gyroscope, compass and mobile vision, provides an excellent opportunity for Smartphone users to explore their surroundings. Real-world objects can be recognized using marker-based or marker-less augmented reality systems. Most previous developers used marker-based augmented reality systems. However, those systems actually hide the reality, and it is also difficult to place markers everywhere. Furthermore, previous markerless approaches use a client-server architecture, which is drastically affected by network latency. Marker-based augmented reality has been applied in different fields such as medical visualization, maintenance and repair, navigation and entertainment. However, markers are not suitable for outdoor mobile augmented reality because they hide the reality and need to be placed everywhere [1]; their range is also very limited. A marker-less, natural-feature-based approach [2] can recognize real-world objects, such as sights, buildings and living beings, and overcomes these limitations. Robust feature descriptors such as SIFT [3], SURF [4] and GLOH [5] are well suited to applications such as image recognition [6] and image registration [7]. These descriptors are stable under different viewpoints and lighting conditions, and they are ideally suited for searching image databases because they represent feature points as high-dimensional vectors. However, tracking from natural features is a complex problem and is usually performed on a remote server [8] [9] [10]. It is therefore a challenging task to use natural-feature-based tracking in mobile augmented reality applications without a server. At the same time, mobile phones are inexpensive and attractive targets for outdoor AR. The improvements in Smartphone capabilities and the great potential of computer/mobile vision motivated us to design a marker-less, natural-feature-based tracking algorithm for mobile augmented reality.

1.1. Related work

Image processing methods can be used in vision-based tracking to calculate the camera pose relative to real-world objects, as in closed-loop systems which correct errors dynamically [11]. This is the most active area of tracking research in computer vision. The tracking techniques in computer vision can be divided into two classes: feature-based and model-based [12]. Feature-based methods find a correspondence between 2D image features and their 3D world-frame coordinates. To calculate the camera pose, the 3D coordinates of the features are projected into the observed 2D image coordinates and the distance to their corresponding 2D features is minimized [13].

Marker tracking methods can be used to calculate the camera pose in real time from artificial markers. The popular ARToolKit library [14] [15] introduced a method for finding the 3D coordinates of the four corners of a square marker, and [16] introduced an algorithm for calculating camera pose from known features. The key approach of combining pattern recognition and pose calculation was introduced by [17]. After a comparison of several leading approaches by [18], no new general marker-based systems were presented, although some researchers explored tracking from LEDs [19]. Tracking from non-square visual markers was introduced by [20] using ring-shaped fiducial markers, while [21] proposed circular marker clusters with various parameters, i.e., number of markers, height and radius, with single-camera tracking from the cluster topology. A circular 2D bar-coded fiducial system was proposed by [19] for vision-inertial tracking; it offered a high information density and sub-pixel accuracy of centroid location.

Camera pose can also be determined from natural features, such as points, lines, edges or textures. This line of research was introduced by [22] in a paper at IWAR 98 showing how natural features can be used to extend tracking beyond artificial features. The system dynamically acquired additional natural features after calculating the camera pose from known visual features, and used them to continuously update the pose calculation. In this way it could provide robust tracking even when the original fiducials were no longer in view. Another tracking technique in computer vision is model-based tracking. This technique uses a model of the features of the tracked objects, such as a CAD model or a 2D template of the object based on its distinguishable features.
Such a model was first presented at ISMAR by [23], who used a visual servoing approach adapted from robotics to calculate the camera pose from a range of model features (lines, circles, cylinders and spheres). They found that knowledge about the scene improved tracking robustness and performance by making it possible to predict hidden movement of the object and reduce the effect of outlier data. Model-based trackers mostly use edges or lines as the features for pose calculation. A well-known approach is to look for strong gradients in the image around a first estimate of the object pose, without explicitly extracting the contours [24]. A CAD model was created by hand for a piecewise parametric representation of complex objects, such as straight lines, spheres and cylinders, by [23]. A real-time model-based tracking approach was proposed by [13], where an adaptive system was adopted to improve robustness and efficiency. Texture was used by [25], who proposed a textured 3D model-based hybrid tracking system combined with edge information determined dynamically at runtime by performing edge detection. Edge information and feature points were combined by [26], which let the tracker handle both textured and untextured objects and made it more stable and less prone to drift. Likewise, [12] proposed a model-based hybrid monocular vision system combining edge extraction and texture analysis to obtain a more robust and accurate pose computation.

Camera pose estimation is the primary technical challenge of mobile AR. In order to render virtual objects aligned with the real world, the virtual camera must move and rotate in conjunction with the real camera, and the quality of the final result is limited by the accuracy of the virtual camera's estimated position.
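As a hedged illustration of the feature-based approach described above (projecting known 3D feature coordinates and minimizing the distance to their observed 2D correspondences), the following C++ sketch uses OpenCV's solvePnP. The point values and intrinsic parameters are placeholders, and the sketch is not the implementation of any of the cited systems.

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <iostream>
#include <vector>

int main() {
    // Known 3D feature coordinates in the world frame (placeholder values,
    // here a planar pattern lying in the z = 0 plane).
    std::vector<cv::Point3f> worldPts;
    worldPts.push_back(cv::Point3f(0.f, 0.f, 0.f));
    worldPts.push_back(cv::Point3f(1.f, 0.f, 0.f));
    worldPts.push_back(cv::Point3f(1.f, 1.f, 0.f));
    worldPts.push_back(cv::Point3f(0.f, 1.f, 0.f));

    // Their observed 2D projections in the current image (placeholder values).
    std::vector<cv::Point2f> imagePts;
    imagePts.push_back(cv::Point2f(310.f, 230.f));
    imagePts.push_back(cv::Point2f(420.f, 228.f));
    imagePts.push_back(cv::Point2f(425.f, 335.f));
    imagePts.push_back(cv::Point2f(305.f, 340.f));

    // Intrinsic matrix K = [f_x 0 c_x; 0 f_y c_y; 0 0 1] from offline calibration.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                             0, 800, 240,
                                             0,   0,   1);
    cv::Mat dist = cv::Mat::zeros(4, 1, CV_64F);   // assume negligible distortion

    // Estimate the camera pose by minimizing the reprojection error between the
    // projected 3D points and their corresponding 2D image features.
    cv::Mat rvec, tvec;
    cv::solvePnP(worldPts, imagePts, K, dist, rvec, tvec);

    cv::Mat R;
    cv::Rodrigues(rvec, R);   // rotation vector -> 3x3 rotation matrix
    std::cout << "R =\n" << R << "\nt =\n" << tvec << std::endl;
    return 0;
}

With the pose (R, t) and the intrinsics K, the projection of any model point can be compared against its tracked image feature, which is exactly the distance such feature-based trackers minimize.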

The drawback of treating tracking and recognition as completely separate processes is that recognition can be slow: a long time may be spent on the first frame, after which the object may have moved too far to be tracked. This research combined the tracking and recognition processes. The image features that have already been extracted are reused to calculate the homography and then the model-view matrix, which speeds up the system and allows it to work in real time.

2. Methodology

A modified version of the SURF (Speeded Up Robust Features) algorithm was used for extraction of natural features and tracking of objects [2]. The projection (pose) matrix, which carries the coordinates of the virtual objects for augmentation, was calculated from the extracted features using homography techniques.

2.1. Calculation of the pose matrix

The pose matrix was calculated by considering a set of points in the first image of a sequence with homogeneous world coordinates (x_i, y_i, z_i). The world coordinates were linked to a set of points in screen coordinates to locate the object on the screen. The association between world and screen coordinates is termed a homography:

\begin{pmatrix} u_i \\ v_i \\ w_i \end{pmatrix} = H \begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix}    (1)

Note that the screen location is expressed in homogeneous coordinates, so each point on the screen is treated as a ray through the camera centre. The actual image position (x_{s,i}, y_{s,i}) is found by dividing the first and second components by the third. The homography is therefore a simple linear transformation of the rays passing through the camera centre, and it is in effect a combination of rotations, scaling and translations:

x_{s,i} = \frac{u_i}{w_i} = \frac{h_{11} x_i + h_{12} y_i + h_{13} z_i}{h_{31} x_i + h_{32} y_i + h_{33} z_i}    (2)

y_{s,i} = \frac{v_i}{w_i} = \frac{h_{21} x_i + h_{22} y_i + h_{23} z_i}{h_{31} x_i + h_{32} y_i + h_{33} z_i}    (3)

where h_{jk} defines the (j,k)-th element of H. Cross-multiplying gives two linear equations per point:

h_{11} x_i + h_{12} y_i + h_{13} z_i - x_{s,i}(h_{31} x_i + h_{32} y_i + h_{33} z_i) = 0    (4)

h_{21} x_i + h_{22} y_i + h_{23} z_i - y_{s,i}(h_{31} x_i + h_{32} y_i + h_{33} z_i) = 0    (5)

In matrix form we have

\begin{pmatrix} x_i & y_i & z_i & 0 & 0 & 0 & -x_{s,i}x_i & -x_{s,i}y_i & -x_{s,i}z_i \\ 0 & 0 & 0 & x_i & y_i & z_i & -y_{s,i}x_i & -y_{s,i}y_i & -y_{s,i}z_i \end{pmatrix} \mathbf{h} = \mathbf{0}    (6)

where \mathbf{h} is a 9-element vector containing the elements of H. Therefore, with four non-collinear points, all the elements of H can be solved for as follows:

A \mathbf{h} = \begin{pmatrix} x_1 & y_1 & z_1 & 0 & 0 & 0 & -x_{s,1}x_1 & -x_{s,1}y_1 & -x_{s,1}z_1 \\ 0 & 0 & 0 & x_1 & y_1 & z_1 & -y_{s,1}x_1 & -y_{s,1}y_1 & -y_{s,1}z_1 \\ \vdots \\ x_4 & y_4 & z_4 & 0 & 0 & 0 & -x_{s,4}x_4 & -x_{s,4}y_4 & -x_{s,4}z_4 \\ 0 & 0 & 0 & x_4 & y_4 & z_4 & -y_{s,4}x_4 & -y_{s,4}y_4 & -y_{s,4}z_4 \end{pmatrix} \mathbf{h} = \mathbf{0}    (7)

The solution h is thus the null-space of the 8 x 9 matrix A, which was found using the singular value decomposition method. However, the homography H cannot be used directly to augment virtual 3D objects into the image, since the Z component of pattern space is assumed to always be zero; recovering a full pose requires the camera intrinsic parameters, given by the following matrix:

K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}    (8)

The offline camera calibration technique proposed by [27] was used to determine the intrinsic parameters (f_x, f_y, c_x, c_y) and the distortion coefficients. The homography was then computed from these calibrated intrinsic parameters and distortion coefficients using OpenCV [28]. The homography matrix was decomposed into rotation, scaling and translation matrices for rotating, scaling and translating the virtual objects, and the pose matrix was formulated as the product (Scale * Rotation) * Translation. This pose matrix was used for registration between the real objects and the virtual information.

2.2. Implementation of the proposed algorithm

The proposed algorithm was implemented using Objective-C, the OpenSURF library [29] and OpenCV 2.2 [28]. It was validated with a mobile augmented reality prototype and tested on an iPhone 4S. The prototype repeatedly processes camera frames to extract features and recognize the real-world object; the extracted features are then used to calculate the pose matrix through the homography. The flow chart of the proposed algorithm is shown in Figure 1.

3. Results and Discussion

The proposed algorithm was tested on a standard image data set [30], which contains sequences of images exhibiting real geometric and photometric transformations such as scaling, rotation, illumination changes and JPEG compression. Each image of the data set was processed by the proposed algorithm to verify the calculation of the pose matrix; the results for different images are shown in Figures 2 to 8. Figure 2 shows a 3D model of an apple over a zoomed image of graffiti after recognition and calculation of the central pose. Figure 3 shows the 3D model over a normal image of bikes, whereas Figure 4 shows it over a rotated image of the boat scene. Figure 5 shows the 3D model of the apple over a bright image of the Leuven scene and Figure 6 over a blurred image of the trees scene. A compressed image of the UBC scene is shown in Figure 7, while a changed-viewpoint image of the wall scene is given in Figure 8. The pose matrix computed by the proposed algorithm was thus validated on the standard image data set, and the approach proved efficient and practicable for marker-less mobile augmented reality.
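The per-frame cycle of Section 2.2 and Fig. 1 can be sketched in C++ as follows. This is a hedged illustration only: it uses OpenCV's stock SURF, brute-force matching, findHomography and a standard planar-pattern decomposition of H into rotation and translation, whereas the paper used a modified SURF (via OpenSURF), OpenCV 2.2 called from Objective-C, and a (Scale * Rotation) * Translation composition. Function and variable names here are illustrative, using the OpenCV 2.4-style API.

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SurfFeatureDetector / SurfDescriptorExtractor
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// One iteration of the recognition/tracking cycle of Fig. 1: extract SURF features
// from the current camera frame, match them against a stored reference image of the
// object, estimate the homography, and build a 3x4 pose matrix [R | t] from it using
// the calibrated intrinsics K. Returns false when the object is not recognized.
bool processFrame(const cv::Mat& frame, const cv::Mat& reference,
                  const cv::Mat& K, cv::Mat& pose)
{
    // 1. Feature extraction (stock SURF; the paper used a modified SURF).
    cv::SurfFeatureDetector detector(400.0);          // Hessian threshold
    cv::SurfDescriptorExtractor extractor;
    std::vector<cv::KeyPoint> kpFrame, kpRef;
    cv::Mat descFrame, descRef;
    detector.detect(frame, kpFrame);
    extractor.compute(frame, kpFrame, descFrame);
    detector.detect(reference, kpRef);
    extractor.compute(reference, kpRef, descRef);
    if (kpFrame.empty() || kpRef.empty()) return false;

    // 2. Nearest-neighbour matching of the SURF descriptor vectors.
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(descRef, descFrame, matches);
    if (matches.size() < 4) return false;             // not enough evidence

    std::vector<cv::Point2f> refPts, framePts;
    for (size_t i = 0; i < matches.size(); ++i) {
        refPts.push_back(kpRef[matches[i].queryIdx].pt);
        framePts.push_back(kpFrame[matches[i].trainIdx].pt);
    }

    // 3. Homography between the reference pattern and the current frame
    //    (RANSAC rejects mismatched features).
    cv::Mat H = cv::findHomography(refPts, framePts, CV_RANSAC, 3.0);
    if (H.empty()) return false;

    // 4. Decompose H assuming the reference pattern lies in the z = 0 plane,
    //    so that H ~ K [r1 r2 t].
    cv::Mat Kinv = K.inv();
    cv::Mat h1 = H.col(0), h2 = H.col(1), h3 = H.col(2);
    double s = 1.0 / cv::norm(Kinv * h1);             // normalization factor of H
    cv::Mat r1 = s * (Kinv * h1);
    cv::Mat r2 = s * (Kinv * h2);
    cv::Mat r3 = r1.cross(r2);                        // third axis by orthogonality
    cv::Mat t  = s * (Kinv * h3);

    // 5. Assemble the 3x4 pose matrix used to register the virtual object.
    pose = cv::Mat::zeros(3, 4, CV_64F);
    r1.copyTo(pose.col(0));
    r2.copyTo(pose.col(1));
    r3.copyTo(pose.col(2));
    t.copyTo(pose.col(3));
    return true;
}

In practice the reference keypoints and descriptors would be computed once and cached rather than recomputed every frame; reusing the already extracted features in this way is part of what lets the combined recognition-and-tracking loop run in real time, and the resulting [R | t], together with K, plays the role of the model-view matrix that places the 3D model over the recognized image.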

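To make equations (1) to (7) concrete, the following minimal C++ sketch (an illustration, not the authors' implementation) builds the 8 x 9 matrix A of equation (7) from four point correspondences, assuming a planar pattern with z_i = 1, and recovers h as the null-space of A via singular value decomposition. In the paper the same homography is obtained through OpenCV's findHomography.

#include <opencv2/core/core.hpp>
#include <iostream>
#include <vector>

// Estimate the homography H that maps world points (x, y, 1) to screen points
// (x_s, y_s) from four non-collinear correspondences, following equations (1)-(7).
cv::Mat homographyFromFourPoints(const std::vector<cv::Point2f>& world,
                                 const std::vector<cv::Point2f>& screen) {
    CV_Assert(world.size() == 4 && screen.size() == 4);
    cv::Mat A = cv::Mat::zeros(8, 9, CV_64F);         // the 8 x 9 matrix of equation (7)
    for (int i = 0; i < 4; ++i) {
        double x  = world[i].x,  y  = world[i].y;
        double xs = screen[i].x, ys = screen[i].y;
        // Row from equation (4): h11*x + h12*y + h13 - xs*(h31*x + h32*y + h33) = 0
        double r1[9] = { x, y, 1, 0, 0, 0, -xs * x, -xs * y, -xs };
        // Row from equation (5): h21*x + h22*y + h23 - ys*(h31*x + h32*y + h33) = 0
        double r2[9] = { 0, 0, 0, x, y, 1, -ys * x, -ys * y, -ys };
        for (int j = 0; j < 9; ++j) {
            A.at<double>(2 * i,     j) = r1[j];
            A.at<double>(2 * i + 1, j) = r2[j];
        }
    }
    // The solution h is the null-space of A: the right singular vector associated
    // with the smallest singular value (last row of V^T). H is recovered up to scale.
    cv::SVD svd(A, cv::SVD::FULL_UV);
    cv::Mat h = svd.vt.row(8).clone();
    return h.reshape(1, 3);                           // reshape the 9-vector into 3 x 3 H
}

int main() {
    std::vector<cv::Point2f> world, screen;
    world.push_back(cv::Point2f(0, 0));   screen.push_back(cv::Point2f(100, 120));
    world.push_back(cv::Point2f(1, 0));   screen.push_back(cv::Point2f(300, 110));
    world.push_back(cv::Point2f(1, 1));   screen.push_back(cv::Point2f(310, 330));
    world.push_back(cv::Point2f(0, 1));   screen.push_back(cv::Point2f(90, 320));
    std::cout << "H =\n" << homographyFromFourPoints(world, screen) << std::endl;
    return 0;
}

With more than four matches, findHomography with RANSAC is generally preferable because it also rejects mismatched features, which is the configuration sketched in the pipeline above.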
Fig. 1. Camera recognition cycle (flow chart: video frames from the mobile camera pass through the recognition process; on recognition the camera pose is calculated and the camera view is augmented).
Fig. 2. 3D model of an apple over a zoomed image of graffiti.
Fig. 3. 3D model of an apple over a normal image of bikes.

Fig. 4. 3D model of an apple over a rotated image of the boat scene.
Fig. 5. 3D model of an apple over a bright image of the Leuven scene.
Fig. 6. 3D model of an apple over a blurred image of the trees scene.

Fig. 7. 3D model of an apple over a compressed image of the UBC scene.
Fig. 8. 3D model of an apple over a changed-viewpoint image of the wall scene.

4. Conclusion

An improved version of the SURF algorithm was used to extract features from the live mobile camera image and recognize real-world objects. Homography techniques were used to determine the pose matrix from the extracted features. Characteristics of the virtual objects such as rotation, scaling and translation were controlled by the calculated pose matrix. The adapted algorithm was tested in a mobile AR prototype application on an iPhone using a standard image data set. The proposed algorithm was able to calculate the central pose and display a 3D model over each image of the standard data set, and was found to be efficient and practicable for marker-less mobile augmented reality. However, its speed and accuracy could be improved further by replacing SURF with a more computationally efficient algorithm.

Acknowledgments

This work was supported by Universiti Malaysia Sarawak (UNIMAS). The authors would like to thank UNIMAS for this support.

References

[1] Reitmayr, G. and Schmalstieg, D. Location based applications for mobile augmented reality. In: 4th Australasian User Interface Conference, Adelaide, Australia: Australian Computer Society, Inc.

[2] Edmund Ng Giap, W., et al. A framework for outdoor mobile augmented reality. IJCSI International Journal of Computer Science Issues, (2).
[3] Lowe, D.G. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, (2).
[4] Bay, H., Tuytelaars, T. and Van Gool, L. SURF: Speeded up robust features. In: Computer Vision, ECCV 2006, 2006.
[5] Mikolajczyk, K. and Schmid, C. A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, (10).
[6] Grauman, K. and Darrell, T. The pyramid match kernel: Discriminative classification with sets of image features. In: ICCV.
[7] Brown, M. and Lowe, D. Automatic panoramic image stitching using invariant features. International Journal of Computer Vision.
[8] Pielot, M., Henze, N., Nickel, C., Menke, C., Samadi, S. and Boll, S. Evaluation of camera phone based interaction to access information related to posters. In: Mobile Interaction with the Real World.
[9] ARCAMA-3D: A context-aware augmented reality mobile platform for environmental discovery. In: Web and Wireless Geographical Information Systems, 2012.
[10] Reitmayr, G. and Schmalstieg, D. Data management strategies for mobile augmented reality. In: STARS, Tokyo, Japan.
[11] Bajura, M. and Neumann, U. Dynamic registration correction in augmented-reality systems. IEEE Computer Graphics and Applications, 1995.
[12] Pressigout, M. and Marchand, E. Hybrid tracking algorithms for planar and non-planar structures subject to illumination changes. 2006: IEEE Computer Society.
[13] Wuest, H., Vial, F. and Stricker, D. Adaptive line tracking with multiple hypotheses for augmented reality. 2005: IEEE.
[14] Kato, H. and Billinghurst, M. Marker tracking and HMD calibration for a video-based augmented reality conferencing system. 1999: IEEE.
[15] Stricker, D., Klinker, G. and Reiners, D. A fast and robust line-based optical tracker for augmented reality applications.
[16] Park, J., Jiang, B. and Neumann, U. Vision-based pose computation: robust and accurate augmented reality tracking. IEEE.
[17] Rekimoto, J. Matrix: A realtime object identification and registration method for augmented reality. 1998: IEEE.
[18] Zhang, X., Fronz, S. and Navab, N. Visual marker detection and decoding in AR systems: A comparative study. 2002: IEEE Computer Society.
[19] Naimark, L. and Foxlin, E. Circular data matrix fiducial system and robust image processing for a wearable vision-inertial self-tracker. 2002: IEEE Computer Society.
[20] Cho, Y., Lee, J. and Neumann, U. A multi-ring color fiducial system and an intensity-invariant detection method for scalable fiducial-tracking augmented reality. In: International Workshop on Augmented Reality (IWAR 98). A.K. Peters, Natick, Mass.
[21] Vogt, S., et al. Single camera tracking of marker clusters: Multiparameter cluster optimization and experimental verification. In: ISMAR, 2002: IEEE.
[22] Park, J., You, S. and Neumann, U. Natural feature tracking for extendible robust augmented realities.
[23] Comport, A.I., Marchand, E. and Chaumette, F. A real-time tracker for markerless augmented reality. 2003: IEEE Computer Society.
[24] Fua, P. and Lepetit, V. Vision based 3D tracking and pose estimation for mixed reality. In: Emerging Technologies of Augmented Reality: Interfaces and Design, M. Haller, M. Billinghurst and B. H. Thomas (Eds.), 2005, Idea Group, Hershey.
[25] Reitmayr, G. and Drummond, T.W. Going out: robust model-based tracking for outdoor augmented reality. 2006: IEEE.
[26] Vacchetti, L., Lepetit, V. and Fua, P. Combining edge and texture information for real-time accurate 3D camera tracking. In: ISMAR, 2004: IEEE.
[27] kronick. Offline camera calibration for iPhone/iPad, or any camera, really. In: Urban Augmented Reality.
[28] Intel. OpenCV Computer Vision Library. [cited August].
[29] Evans, C. The OpenSURF Computer Vision Library. [cited /4/2013].
[30] Mikolajczyk, K. and Schmid, C. Affine Covariant Features. [cited September].
