Relative Posture Estimation Using High Frequency Markers


The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 18-22, 2010, Taipei, Taiwan

Relative Posture Estimation Using High Frequency Markers

Yuya Ono, Yoshio Iwai and Hiroshi Ishiguro

Abstract: Recently, the research fields of augmented reality and robot navigation have been actively investigated. Estimating the relative posture between an object and a camera is an important task in these fields. Visual markers are frequently used for this purpose, but visual markers spoil the scene. In this paper, we propose a novel method for posture estimation using kernel regressions and high frequency markers that do not spoil the scene. The markers are embedded in an object's texture in the high frequency domain, where they are hard for people to notice. We observe changes in the spatial frequency of the object's texture with a high-definition camera to estimate the current posture of the object. We show the effectiveness of our method through experiments with both simulated and real data.

I. INTRODUCTION

In recent years, the research fields of augmented reality and robot navigation have been actively investigated. In augmented reality, many methods have been proposed that provide useful information to users by superimposing CG images on real images at geometrically correct positions [1]. In robot navigation systems, visual information from a camera is very important because a camera is a passive sensor, and many studies estimate the current position and/or the movement path from an image sensor [2]. In both fields, estimating the relative position and posture between a camera and an object is still an important problem. Various methods for estimating a relative posture have been proposed; they can be separated into two types: vision-based methods [3] and special-purpose-sensor-based methods [4], [5].
GPS, gyro sensors, and magnetic sensors are often used as special-purpose sensors for posture estimation. Kanbara et al. proposed a system that uses a GPS and a small inertial navigation sensor to compensate for the disadvantages of each sensor [4]. Matsuda et al. proposed a system that uses a small inertial navigation sensor and a vision sensor [5]. In general, a method using multiple sensors achieves high accuracy of posture estimation, but such systems tend to be complicated.

Vision-based approaches estimate a relative posture from visual information retrieved by feature extraction and/or edge detection. Oe et al. proposed a method for posture estimation using natural feature points from an omnidirectional image sequence [3]. Lepetit et al. proposed a method for posture estimation using correspondences between image feature points and those of a CAD model [6]. These methods, however, use prior knowledge of the object and/or the environment, so it is difficult to apply them in unknown environments.

Marker tracking approaches are also used for posture estimation of an object. Kato et al. proposed a method for real-time posture estimation using a rectangular marker in an image [7]. As an application of this research, Habara et al. proposed a system that provides the user's location in an indoor environment [8]. Marker tracking requires that visual markers be fixed to an object, so it spoils the scene. To avoid this problem, retro-reflective materials have been used as invisible markers [9], but the cost of the materials is still a problem. Tenmoku et al. proposed a method for posture estimation using a specially designed poster [10]. Sato et al. proposed a coded wallpaper for posture estimation [11]. These methods take the scene into consideration, but the design of the posters or wallpapers is highly restricted.

(Authors' affiliation: Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan; iwai@sys.es.osaka-u.ac.jp)
In this paper, we propose a novel method for posture estimation using kernel regressions and high frequency markers that do not spoil the scene. The markers are embedded in an object's texture in the high frequency domain, where they are hard for people to notice because of the limited sensitivity of the human eye to high spatial frequencies. We observe changes in the spatial frequency of the object's texture with a high-definition camera to estimate the current posture of the object. We show the effectiveness of our method through experiments with both simulated and real data.

II. PROBLEM SETTINGS

In this research, the relative posture between a camera and an object is estimated from an image with visual markers embedded in the high frequency domain. We explain the problem settings in this section. As shown in Fig. 1, the relative posture of an object in the marker coordinate system O_m-X_mY_mZ_m can be represented by the translation d t and a rotation matrix parametrized by the rotation angles (θ_x, θ_y, θ_z)^T in the camera coordinate system O_c-X_cY_cZ_c. In this research, we assume that the object rotation can be decomposed as R = R_x(θ_x) R_y(θ_y) R_z(θ_z), and we also assume that the initial posture is (θ_x, θ_y, θ_z)^T = (0, 0, 0)^T. d is the relative depth between the current posture and the initial posture. The proposed method estimates the relative rotation parameters (θ_x, θ_y, θ_z)^T and the relative depth d from the visual changes of the object in the high frequency domain caused by posture changes of the object with embedded visual markers.

978-1-4244-6676-4/10/$25.00 © 2010 IEEE
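The rotation decomposition assumed above can be sketched numerically. This is a generic construction of the X-Y-Z factored rotation stated in Sec. II, not the authors' code:

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def posture_rotation(theta_x, theta_y, theta_z):
    # R = R_x(theta_x) R_y(theta_y) R_z(theta_z), as assumed in Sec. II
    return rot_x(theta_x) @ rot_y(theta_y) @ rot_z(theta_z)

# The initial posture (0, 0, 0) corresponds to the identity rotation.
R0 = posture_rotation(0.0, 0.0, 0.0)
```

Any R produced this way is a proper rotation (orthogonal, determinant 1), which is what the estimator recovers in the angle parametrization.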

Fig. 1. Coordinates and problem settings (embedded markers indicated).

III. RELATIVE POSTURE ESTIMATION

A. Marker Embedding

The outline of marker embedding is shown in Fig. 2. When an image is transformed with the 2D Fourier transform, the lowest frequencies usually contain most of the information. We therefore embed visual markers in the high frequency domain, which does not noticeably influence the image's appearance because of the limited sensitivity of the human eye. Visual markers are embedded along an ellipse at a certain interval in the high frequency domain, as shown in Fig. 2, and a marker image is then acquired by the inverse 2D Fourier transform. The relative posture is estimated by observing the frequency changes in the marker image caused by the object's posture changes. Unlike conventional research [13], [14], [15], this approach requires no assumptions about, or prior knowledge of, the characteristics of the object's texture. Therefore, the proposed method can use printed materials with embedded visual markers as marker images without knowing the texture of the printed materials. Moreover, our method uses the high frequency domain, to which the human eye is insensitive, so the marker images used in our method do not spoil the scene.

B. Image Feature

As described in the previous section, visual markers are embedded along an ellipse in the high frequency domain of a marker image. Let α and β be the lengths of the axes of the ellipse, and let θ_1 and θ_2 be the rotation angle and the phase of the ellipse, respectively. α and β are sensitive to the relative distance between the camera and the object, so we also introduce the parameter γ = α/β, which is robust to distance changes. The five parameters α, β, γ, θ_1, and θ_2 are used as image features, as shown in Fig. 3. The process flow of the proposed method is shown in Fig. 4. Figure 5 shows an example of the power spectrum of a marker image, and Fig. 6 shows a result of the image processing for marker detection.
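The embedding step of Sec. III-A can be sketched with NumPy's FFT: add peaks along an ellipse in the high-frequency part of the spectrum, keep the spectrum conjugate-symmetric so the image stays real, and invert. The ellipse radii, marker count, and strength below are illustrative choices, not the paper's values:

```python
import numpy as np

def embed_elliptical_markers(img, a=80, b=60, n_markers=24, strength=5.0):
    """Add peaks along an ellipse in the high-frequency region of the
    2D spectrum (sketch of Sec. III-A; parameters are illustrative)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = np.array(F.shape) // 2
    amp = strength * np.abs(F).mean()
    for t in np.linspace(0, 2 * np.pi, n_markers, endpoint=False):
        u = int(round(a * np.cos(t)))
        v = int(round(b * np.sin(t)))
        # Adding the same real amount to conjugate-symmetric positions
        # keeps the inverse transform real-valued.
        F[cy + v, cx + u] += amp
        F[cy - v, cx - u] += amp
    return np.fft.ifft2(np.fft.ifftshift(F)).real

rng = np.random.default_rng(0)
texture = rng.random((256, 256))
marked = embed_elliptical_markers(texture)
```

The per-pixel change is small relative to the texture (the peaks are spread over the whole image by the inverse transform), while the elliptical pattern remains clearly visible in the power spectrum, which is what the feature extraction of Sec. III-B relies on.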
Ellipse detection is performed by energy minimization of the following equation:

    Q^2 = (a x^2 + b y^2 + c + 2f x + 2g y + 2h xy).    (1)

Fig. 2. Overview of marker embedding. Fig. 3. Features in the frequency domain.

α, β, and θ_1 can be calculated from the above parameters. The phase parameter θ_2 is estimated from the angle that maximizes the correlation between the ellipse in the input image and that of the initial posture. We consider an indicator function s(θ) that is 1 if a marker exists in direction θ and 0 if no marker exists in direction θ. Let s_1(θ) be the indicator function of the initial posture and s_2(θ) be that of the current posture. The correlation between s_1(θ) and s_2(θ) is defined, and the maximizing angle is estimated by the following equation:

    θ_2 = argmax_{Δθ} ∫_0^{2π} s_1(θ + Δθ) s_2(θ) dθ.    (2)

Fig. 4. Process flow of parameter estimation: input image, low-cut filter, binarization, marker detection, parameter estimation.
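Equation (2) discretizes naturally into a circular cross-correlation over angle bins, maximized by brute force. The marker angles below are illustrative, not from the paper:

```python
import numpy as np

def estimate_phase_shift(s1, s2):
    """Discrete form of Eq. (2): the shift (in bins) maximizing the
    circular correlation of two angular indicator functions."""
    n = len(s1)
    # scores[k] = sum_i s1[(i + k) mod n] * s2[i]
    scores = [np.sum(np.roll(s1, -k) * s2) for k in range(n)]
    return int(np.argmax(scores))

# Toy indicator functions over 360 one-degree bins.
n = 360
s1 = np.zeros(n)
s1[[10, 85, 200]] = 1        # marker directions of the initial posture
s2 = np.roll(s1, 40)         # current posture: markers rotated by +40 deg

shift = estimate_phase_shift(s1, s2)
```

Here the maximizer is Δθ = 320°, i.e. −40° modulo 360°, recovering the 40° rotation up to the sign convention of Eq. (2). Because s(θ) only records marker directions, the result is unaffected by the scale difference between the two ellipses, as the text notes.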

Fig. 5. Power spectrum of a marker image (markers and low-frequency mask indicated). Fig. 6. Estimation result of marker detection.

Even if the scales of the two ellipses differ from each other, we can still estimate θ_2 because of the indicator function s(θ).

C. Relative Posture Estimation by Kernel Regressions

The relative posture θ_x, θ_y, θ_z between a camera and an object is estimated from the image features α, β, γ, θ_1, θ_2 by kernel regressions that map image features to posture parameters. In this paper, we use support vector regression (SVR) [16] as the kernel regression. SVR is expressed by the following equation:

    y = f(x) = Σ_{i=1}^{D} w_i K(x_i, x) + b,    (3)

where D is the number of dimensions of the image features, w_i (i = 1, ..., D) is a weight, and b is an offset. We use a polynomial kernel as the kernel function K:

    K(x, y) = (s x^T y + o)^d,    (4)

where s, o, and d are hyperparameters determined by a preliminary experiment. SVR outputs the displacements Δθ_x, Δθ_y, and Δθ_z from the previous posture θ_{x,t-1}, θ_{y,t-1}, θ_{z,t-1} and the image features γ, θ_1, θ_2, and the current posture θ_{x,t}, θ_{y,t}, θ_{z,t} is then updated. The relative depth between the camera and the object can be estimated from the previous position by the following equation:

    α_1 / α_2 = k d_1 / d_2,    (5)

where d_1 and d_2 are the depths at the previous and current postures, respectively, and α_1 and α_2 are the image features at the previous and current postures, respectively.

IV. EXPERIMENTAL RESULTS

A. Hyperparameters

To use SVR for posture estimation, the hyperparameters s, o, and d must be determined in advance. We therefore generated images synthesized with random rotations θ_x, θ_y, θ_z ∈ {0, 10, 20, ..., 50, 60} [deg], extracted image features from the images, and then applied the SVR learning algorithm to the training data. In this preliminary experiment, the polynomial degree d had a larger effect on the training error than the other parameters, so s and o were fixed to 1. Figures 7 and 8 show the results of the preliminary experiment.
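The prediction form of Eqs. (3)-(4) can be sketched with a kernel ridge regressor in place of SVR (the paper uses SVR [16]; this stand-in keeps the same weighted-sum-of-kernel-evaluations prediction). The data here is a one-dimensional toy, not the paper's features:

```python
import numpy as np

def poly_kernel(X, Y, s=1.0, o=1.0, d=3):
    # Polynomial kernel of Eq. (4): K(x, y) = (s x^T y + o)^d
    return (s * X @ Y.T + o) ** d

def fit_kernel_regression(X, y, lam=1e-9, **kernel_params):
    """Kernel ridge regression: a simple stand-in for SVR. Prediction is
    a weighted sum of kernel evaluations, as in Eq. (3)."""
    K = poly_kernel(X, X, **kernel_params)
    w = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: poly_kernel(Xq, X, **kernel_params) @ w

# Illustrative training data (inputs in rows, one scalar target each).
X = np.array([[0.0], [0.3], [0.6], [0.9]])
y = np.array([0.0, 1.0, 2.0, 3.0])
predict = fit_kernel_regression(X, y)
```

In the paper one such regressor per angle maps the features (and the previous posture) to the displacements Δθ_x, Δθ_y, Δθ_z; here the tiny ridge term lam merely stabilizes the linear solve.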
From these results, the degree d is fixed to 5 for posture estimation and to 4 for depth estimation in the experiments in the following sections.

Fig. 7. Training errors of θ_x, θ_y, and θ_z versus the degree d (mean square error [deg]). Fig. 8. Training error of d versus the degree (mean square error [mm]).

B. Experiment with Simulation Data

Table II shows the estimation errors (generalization errors) obtained with the training data in Table I. From this result, the estimation errors are below about 5 degrees, which is sufficient for posture estimation. To evaluate various posture changes, we generated sequences of images synthesized with random rotations θ_x, θ_y ∈ [−60, 60], θ_z ∈ [−180, 180], and the SVR was trained on this data. Figures 9, 10, and 11 show the estimation results for one sequence. The RMSEs of θ_x, θ_y, and θ_z in this experiment are 3.08, 2.63, and 2.28 degrees, respectively.

TABLE I. TRAINING DATA
Set 1: θ_x ∈ [0, 60], θ_y ∈ [0, 60], θ_z ∈ [0, 60]
Set 2: θ_x ∈ [0, 60], θ_y ∈ [0, 60], θ_z ∈ [−60, 0]
Set 3: θ_x ∈ [0, 60], θ_y ∈ [−60, 0], θ_z ∈ [0, 60]
Set 4: θ_x ∈ [0, 60], θ_y ∈ [−60, 0], θ_z ∈ [−60, 0]

TABLE II. ESTIMATION ERROR
parameter | root mean square error [deg]
θ_x | 4.00
θ_y | 4.19
θ_z | 1.04

Fig. 9. Estimation results of θ_x. Fig. 10. Estimation results of θ_y. Fig. 11. Estimation results of θ_z. Fig. 12. Estimation results of θ_x (sign-reversed case). Fig. 13. Estimation results of θ_y (sign-reversed case). Fig. 14. Estimation results of θ_z (sign-reversed case).

From these results, the accuracy of the posture estimation is sufficient for virtual reality applications. However, as shown in Figs. 12, 13, and 14, the sign of some posture estimates is reversed. This is because the appearances of the images are almost the same even though the rotation angles differ, as shown in Figs. 15 and 16. In such cases, the accuracy of the proposed method degrades because γ, θ_1, and θ_2 are almost equal when there is no visible difference between the two images.

C. Experiment with Real Data

We also conducted experiments with real data to show the effectiveness of our method. We made a B1-size wallpaper for this experiment, as shown in Fig. 17. Markers of a motion capture system were attached to the wallpaper, and the motion data from the motion capture system were used as ground truth. The specifications of the camera used in this experiment are shown in Table III. The wallpaper was moved at a distance of about 3000 [mm] from the camera. Figures 18, 19, and 20 show the estimation results for the posture parameters, and Fig. 21 shows the estimation results for the distance between the camera and the object. The root mean square errors of the estimated parameters are shown in Table IV. From the experimental results, the posture parameters θ_x, θ_y, and θ_z can be estimated within ±3.3 [deg], and the estimation error of the distance is about ±2.3 × 10² [mm]. The estimation error of the distance becomes worse as the distance between the camera and the object becomes small. This is because the visual markers shift toward the lower frequency domain, and the visual changes in the

lower frequency domain are smaller than those in the higher frequency domain.

We also conducted another experiment with the training data in Table I. Figure 22 shows examples of the posture estimation. From this figure, the accuracy of the posture estimation is sufficient for augmented reality applications. We investigate the estimation errors between the true rotation R_true and the predicted rotation R using the Frobenius norm. The values of the Frobenius norm for the simulation data and the real data are shown in Table V. From this table, the scaling factor m between the estimation errors of the rotation matrix in the real experiment and the simulation can be estimated by the following equation:

    m = (1.49 × 10⁻²) / (0.959 × 10⁻²) = 1.55.    (6)

Let e_rmse be the root mean square error of an axis; the estimation error of the rotation angle can then be calculated as follows:

    e = arccos(1 − m (1 − cos(e_rmse))).    (7)

By the above calculation, the estimation errors in the real scene can be predicted as shown in Table VI.

TABLE III. CAMERA SPECIFICATION
device: FOVEON X3 (CMOS)
sensor size: 20.7 × 13.8 [mm]
focal length: 200 [mm]
image size: 2640 × 1760 [pixels]

TABLE IV. EVALUATION OF ESTIMATION RESULTS IN REAL IMAGES
parameter | root mean square error
θ_x | 3.32 [deg]
θ_y | 3.03 [deg]
θ_z | 0.77 [deg]
d | 2.25 × 10² [mm]

Fig. 15. (θ_x, θ_y, θ_z) = (20, 30, 40). Fig. 16. (θ_x, θ_y, θ_z) = (−20, −30, 40). Fig. 17. Wallpaper used in this experiment (markers for the motion capture system attached). Fig. 18. Estimation results of θ_x. Fig. 19. Estimation results of θ_y.
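Equations (6)-(7) are simple enough to check numerically. This snippet takes the Frobenius norms of Table V and the simulation RMSEs of Table II, and reproduces the predictions of Table VI:

```python
import math

# Scaling factor between real-scene and simulation Frobenius norms, Eq. (6)
m = 1.49e-2 / 0.959e-2   # approximately 1.55

def predicted_real_error(e_rmse_deg, m):
    """Eq. (7): map a simulation RMSE [deg] of one axis to the
    predicted real-scene rotation error [deg]."""
    e = math.radians(e_rmse_deg)
    return math.degrees(math.acos(1.0 - m * (1.0 - math.cos(e))))

# Simulation RMSEs from Table II: theta_x = 4.00, theta_y = 4.19 [deg];
# the mapped values match the theta_x and theta_y rows of Table VI.
pred_x = predicted_real_error(4.00, m)   # about 4.98 [deg]
pred_y = predicted_real_error(4.19, m)   # about 5.22 [deg]
```

For small angles, 1 − cos(e) ≈ e²/2, so Eq. (7) is approximately e ≈ √m · e_rmse, i.e. the angular error scales with the square root of the Frobenius-norm ratio.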

Fig. 20. Estimation results of θ_z. Fig. 21. Estimation results of the distance [mm]. Fig. 22. Examples of posture estimation results: input image (left) and superimposed image (right).

V. CONCLUSION AND FUTURE WORK

This paper proposed a method for relative posture estimation using visual markers embedded in an object's texture in the high frequency domain, and we showed that the relative posture can be estimated by kernel regressions. We embedded the visual markers along an ellipse in these experiments, but our method is not restricted to an elliptical shape; it only requires a parametric shape or pattern for posture estimation. In future work, we will investigate the best embedding shape that is robust to motion blur and focal blur.

TABLE V. FROBENIUS NORMS OF ESTIMATION RESULTS
‖R_true − R‖_F: simulation 0.959 × 10⁻², real scene 1.49 × 10⁻²

TABLE VI. PREDICTED VALUES OF ESTIMATION ERRORS
parameter | root mean square error [deg]
θ_x | 4.98
θ_y | 5.22
θ_z | 1.34

REFERENCES

[1] F. Zhou, H. B.-L. Duh, and M. Billinghurst: Trends in Augmented Reality Tracking, Interaction and Display: A Review of Ten Years of ISMAR, Proc. ISMAR 2008, pp. 193-202 (2008-9)
[2] Y. Matsumoto, M. Inaba, and H. Inoue: View-Based Approach to Robot Navigation, Journal of the Robotics Society of Japan, Vol. 20, No. 5, pp. 506-514 (2002-7)
[3] M. Oe, T. Sato, and N. Yokoya: Camera Position and Posture Estimation Based on Feature Landmark Database for Geometric Registration, Trans. of the Virtual Reality Society of Japan, Vol. 10, No. 3, pp. 285-294 (2005)
[4] M. Kanbara and N. Yokoya: Outdoor Augmented Reality System Using RTK-GPS and Inertial Navigation System, Technical report of IEICE, PRMU, Vol. 104, No. 572, pp. 37-42 (2005-1)
[5] K. Matsuda, S. Ikeda, T. Sato, and N. Yokoya: Robust Estimation of Position and Posture of Camera with High-speed Rotation Using Landmark Database and Inertial Sensor, IEICE technical report, Vol. 106, No. 535, pp. 5-10 (2007-2)
[6] V. Lepetit, L. Vacchetti, D. Thalmann, and P. Fua: Fully Automated and Stable Registration for Augmented Reality Applications, Proc. 2nd IEEE/ACM Int. Symp. on Mixed and Augmented Reality, pp. 93-102 (2003-10)
[7] H. Kato, M. Billinghurst, K. Asano, and K. Tachibana: An Augmented Reality System and its Calibration based on Marker Tracking, Trans. of the Virtual Reality Society of Japan, Vol. 4, No. 4, pp. 607-616 (1999-12) (in Japanese)
[8] T. Habara, T. Machida, K. Kiyokawa, and H. Takemura: Wide-Area Indoor Position Detection using Fiducial Markers for Wearable PC, IEICE technical report, Image engineering, Vol. 103, No. 643, pp. 77-82 (2004-1)
[9] Y. Nakazato, M. Kanbara, and N. Yokoya: A User Localization and Marker Initialization System Using Invisible Markers, Trans. of the Virtual Reality Society of Japan, Vol. 13, No. 2, pp. 257-266 (2008-6)
[10] R. Tenmoku, A. Nishigami, F. Shibata, A. Kimura, and H. Tamura: A Geometric Registration Method Using Wall Posters as AR-Markers, Trans. of the Virtual Reality Society of Japan, Vol. 14, No. 3, pp. 351-360 (2009)
[11] S. Sato, T. Tanikawa, and M. Hirose: Indoor Position Tracking System Using Interior Decorations with Coded Patterns, Technical report of IEICE, Multimedia and virtual environment, Vol. 106, No. 91, pp. 1-6 (2006-5)
[12] C. Cachin: An Information-Theoretic Model for Steganography, Information and Computation, Vol. 192, pp. 41-56 (2004-7)
[13] A. P. Witkin: Recovering Surface Shape and Orientation from Texture, Artificial Intelligence, Vol. 17, pp. 17-45 (1981)
[14] B. J. Super and A. C. Bovik: Planar Surface Orientation from Texture Spatial Frequencies, Pattern Recognition, Vol. 28, No. 5, pp. 729-743 (1995-5)
[15] T. Tsuji and Y. Yoshida: Estimation of Rotation and Slant Angles of a Textured Plane Using Spectral Moments, IEICE Trans. on Information and Systems, Vol. 85, No. 3, pp. 600 (2002-3)
[16] A. J. Smola and B. Schölkopf: A Tutorial on Support Vector Regression, Statistics and Computing, Vol. 14, No. 3, pp. 199-222 (2004-8)