Human Activity Recognition Based on R Transform and Fourier Mellin Transform

Pengfei Zhu, Weiming Hu, Li Li, and Qingdi Wei
Institute of Automation, Chinese Academy of Sciences, Beijing, China
{pfzhu,wmhu,lli,qdwei}@nlpr.ia.ac.cn

Abstract. Human activity recognition is attracting a lot of attention in the computer vision community. In this paper we present a novel human activity recognition method based on the R transform and the Fourier Mellin Transform (FMT). First, we convert each image of the original sequence to the Radon domain and compute its R transform curve. We then extract Rotation-Scaling-Translation (RST) invariant features with the FMT and reduce their dimensionality with PCA. At the recognition stage, the Earth Mover's Distance (EMD) is used to compare sequences. In the experiments we compare our method with other methods, and the results show its effectiveness.

1 Introduction

Human activity recognition is an attractive research direction in computer vision, with wide applications such as intelligent surveillance, analysis of people's physical condition, and care of the elderly [1]. It involves tracking, action feature extraction and representation, action model learning, and high-level semantic understanding. Feature representation is a key step in activity recognition, but because video data vary in scale, viewing angle, and position relative to the camera, feature extraction is hard, and the extraction of view-invariant features has therefore attracted more and more researchers.

Rao et al. [2] present a computational representation of human action that captures dramatic changes using the spatio-temporal curvature of a 2-D trajectory. This representation is compact, view-invariant, and capable of explaining an action in terms of meaningful action units called dynamic instants and intervals. Ogale et al. [3] represent human actions as short sequences of atomic body poses. Actions and their constituent atomic poses are extracted from a set of multiview, multiperson video sequences by an automatic keyframe selection process and are used to automatically construct a probabilistic context-free grammar (PCFG). Parameswaran and Chellappa [4] exploit a wealth of 2D-invariance techniques that can be used to advantage in the 3D-to-2D projection, and model actions in terms of view-invariant canonical body poses and trajectories in 2D invariance space, leading to a simple and effective way to represent and recognize human actions from a general viewpoint. Weinland et al. [5] introduce Motion History Volumes (MHV) as a free-viewpoint representation for human actions in the case of multiple calibrated, background-subtracted video cameras, and present algorithms for computing, aligning, and comparing MHVs of

different actions performed by different people from a variety of viewpoints. Weinland et al. [6] propose a framework in which actions are modeled using three-dimensional occupancy grids, built from multiple viewpoints, in an exemplar-based HMM. The novelty is that a 3D reconstruction is not required during the recognition phase; instead, learned 3D exemplars are used to produce 2D image information that is compared to the observations, and parameters that describe the image projections are added as latent variables in the recognition process. Li and Fukui [7] propose a view-invariant human action recognition method based on non-rigid factorization and Hidden Markov Models. Shen and Foroosh [8] show that fundamental ratios are invariant to camera parameters and hence can be used to identify similar plane motions from varying viewpoints. For action recognition, they decompose a body posture into a set of point triplets (planes); the similarity between two actions is then determined by the motion of the point triplets and hence by their associated fundamental ratios, providing view-invariant recognition of actions. Natarajan and Nevatia [9] present an approach, starting from a person detection in the standing pose, that simultaneously tracks and recognizes known actions and is robust to view and scale variation. To tackle activity recognition, Gilbert et al. [10] propose learning compound features that are assembled from simple 2D corners in both space and time.

In this paper, we present a novel human activity recognition method based on the R transform and the Fourier Mellin Transform (FMT). Figure 1 shows the framework of our method.

Fig. 1. Overview of our approach

The rest of this paper is organized as follows. Section 2 describes the Radon transform and the R transform. Section 3 introduces the Fourier Mellin Transform. Section 4 presents the experiments used to evaluate our method, and Section 5 concludes the paper.

2 Radon Transform and R Transform

In mathematics, the two-dimensional Radon transform consists of the integrals of a function over the set of lines in all directions, which is roughly equivalent to finding the projection of a shape onto any given line. For a discrete binary image, each image is projected into the Radon domain. Let f(x, y) be an image; its Radon transform is defined as [11][12]

    T_{Rf}(\rho, \theta) = \iint f(x, y)\, \delta(x\cos\theta + y\sin\theta - \rho)\, dx\, dy = \mathrm{Radon}\{f(x, y)\}    (1)

where \theta \in [0, \pi], \rho \in (-\infty, \infty), and \delta(\cdot) is the Dirac delta function,

    \delta(x) = \begin{cases} 1 & \text{if } x = 0 \\ 0 & \text{otherwise} \end{cases}    (2)

For geometric transformations such as scaling, translation, and rotation, the Radon transform has the following properties.

For a scaling factor \alpha:

    \mathrm{Radon}\{f(x/\alpha, y/\alpha)\} = \frac{1}{\alpha} T_{Rf}(\alpha\rho, \theta)    (3)

For a translation by (x_0, y_0):

    \mathrm{Radon}\{f(x - x_0, y - y_0)\} = T_{Rf}(\rho - x_0\cos\theta - y_0\sin\theta, \theta)    (4)

For a rotation by \theta_0:

    \mathrm{Radon}\{f_{\theta_0}(x, y)\} = T_{Rf}(\rho, \theta + \theta_0)    (5)

From equations (3)-(5) we can see that the Radon transform is not invariant to scaling, translation, or rotation. An improved representation based on the Radon transform, the R transform, is therefore introduced [13][12]:

    R_f(\theta) = \int T_{Rf}^2(\rho, \theta)\, d\rho    (6)

For a scaling factor \alpha:

    \frac{1}{\alpha^2} \int T_{Rf}^2(\alpha\rho, \theta)\, d\rho = \frac{1}{\alpha^3} \int T_{Rf}^2(\nu, \theta)\, d\nu = \frac{1}{\alpha^3} R_f(\theta)    (7)

For a translation by (x_0, y_0):

    \int T_{Rf}^2(\rho - x_0\cos\theta - y_0\sin\theta, \theta)\, d\rho = \int T_{Rf}^2(\nu, \theta)\, d\nu = R_f(\theta)    (8)

For a rotation by \theta_0:

    \int T_{Rf}^2(\rho, \theta + \theta_0)\, d\rho = R_f(\theta + \theta_0)    (9)

Fig. 2. The Radon transform and the R transform of the example images

From equations (7)-(9) we can see that the R transform is invariant to translation, that scaling changes only its amplitude, and that rotation results in a phase shift. In the experiments, we normalize the R transform curve to obtain scale invariance:

    R'(\theta) = \frac{R(\theta)}{\max_{\theta} R(\theta)}    (10)

Figure 2 shows the Radon transform and the R transform of the example images.
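As a concrete illustration of equations (1), (6), (9), and (10), the following sketch computes the normalized R transform of a toy silhouette and checks that rotating the silhouette only circularly shifts the curve. It is not the authors' code; it assumes NumPy and scikit-image (radon, rotate) are available.

```python
# A minimal sketch, assuming scikit-image and NumPy; it only illustrates
# eqs. (1), (6), (9) and (10), it is not the paper's implementation.
import numpy as np
from skimage.transform import radon, rotate

def normalized_r_transform(silhouette, n_angles=180):
    """R'(theta): squared Radon transform integrated over rho (eq. 6),
    normalized by its maximum (eq. 10)."""
    thetas = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(silhouette.astype(float), theta=thetas, circle=False)  # T_Rf(rho, theta)
    r_curve = np.sum(sinogram ** 2, axis=0)                                 # integrate over rho
    return r_curve / r_curve.max()

# toy silhouette: a filled rectangle
img = np.zeros((128, 128))
img[40:90, 55:75] = 1.0

r0 = normalized_r_transform(img)
r30 = normalized_r_transform(rotate(img, angle=30, order=0))  # rotate the silhouette by 30 deg

# the rotation appears as a circular shift of the R curve (eq. 9); the best
# alignment should be close to 30 bins (up to the direction convention)
best_shift = int(np.argmax([np.dot(r0, np.roll(r30, k)) for k in range(180)]))
print(best_shift)
```

With translation removed by eq. (8) and scale absorbed by the normalization of eq. (10), the remaining rotation appears only as this circular shift, which the Fourier Mellin step of the next section turns into an explicitly recoverable translation.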

3 Fourier Mellin Transform

The use of the Fourier Mellin Transform for rigid image registration was proposed in [14], in order to match images that are translated, rotated, and scaled with respect to one another. Let F_1(\xi, \eta) and F_2(\xi, \eta) be the Fourier transforms of images f_1(x, y) and f_2(x, y), respectively. If f_2 differs from f_1 only by a displacement (x_0, y_0), then

    f_2(x, y) = f_1(x - x_0, y - y_0),    (11)

or, in the frequency domain, using the Fourier shift theorem,

    F_2(\xi, \eta) = e^{-j2\pi(\xi x_0 + \eta y_0)} F_1(\xi, \eta).    (12)

The cross-power spectrum is then defined as

    C(\xi, \eta) = \frac{F_1(\xi, \eta) F_2^*(\xi, \eta)}{|F_1(\xi, \eta) F_2(\xi, \eta)|} = e^{j2\pi(\xi x_0 + \eta y_0)},    (13)

where F^* is the complex conjugate of F. The Fourier shift theorem guarantees that the phase of the cross-power spectrum equals the phase difference between the images. The inverse transform of (13) gives

    c(x, y) = \delta(x - x_0, y - y_0),    (14)

which is approximately zero everywhere except at the optimal registration point. If f_1 and f_2 are related by a translation (x_0, y_0) and a rotation \theta_0, then

    f_2(x, y) = f_1(x\cos\theta_0 + y\sin\theta_0 - x_0,\; -x\sin\theta_0 + y\cos\theta_0 - y_0).    (15)

Using the Fourier translation and rotation properties, we have

    F_2(\xi, \eta) = e^{-j2\pi(\xi x_0 + \eta y_0)} F_1(\xi\cos\theta_0 + \eta\sin\theta_0,\; -\xi\sin\theta_0 + \eta\cos\theta_0).    (16)

Let M_1 and M_2 be the magnitudes of F_1 and F_2, respectively. They are related by

    M_2(\xi, \eta) = M_1(\xi\cos\theta_0 + \eta\sin\theta_0,\; -\xi\sin\theta_0 + \eta\cos\theta_0).    (17)

To recover the rotation, the Fourier magnitude spectra are transformed to a polar representation,

    M_1(\rho, \theta) = M_2(\rho, \theta - \theta_0),    (18)

where \rho and \theta are the radius and angle in the polar coordinate system, respectively. Then (13) can be applied to find \theta_0. If f_1 is a translated, rotated, and scaled version of f_2, the Fourier magnitude spectra are transformed to log-polar representations and related by

    M_2(\rho, \theta) = M_1(\rho / s, \theta - \theta_0),    (19)

i.e.,

    M_2(\log\rho, \theta) = M_1(\log\rho - \log s, \theta - \theta_0),    (20)

    M_2(\xi, \theta) = M_1(\xi - d, \theta - \theta_0),    (21)

where s is the scaling factor, \xi = \log\rho, and d = \log s.
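The sketch below illustrates this Fourier-Mellin idea for eqs. (17)-(21): the rotation and scale between two images become a translation of the log-polar Fourier magnitude, recoverable by phase correlation. It is a generic illustration rather than the implementation used in the paper, and it assumes scikit-image (warp_polar, phase_cross_correlation) and NumPy.

```python
# A rough sketch of Fourier-Mellin-style registration (eqs. 17-21); assumes
# scikit-image >= 0.19 and NumPy. Not the paper's code.
import numpy as np
from skimage.transform import warp_polar, rotate
from skimage.registration import phase_cross_correlation

def log_polar_magnitude(image, radius):
    """Log-polar warp of the centred Fourier magnitude, i.e. M(log rho, theta)."""
    magnitude = np.abs(np.fft.fftshift(np.fft.fft2(image)))        # |F|, eq. (17)
    return warp_polar(magnitude, radius=radius, scaling='log')     # eq. (20)

img1 = np.zeros((128, 128)); img1[40:90, 55:75] = 1.0
img2 = rotate(img1, angle=20)                                      # rotated copy of img1

radius = 64
m1 = log_polar_magnitude(img1, radius)
m2 = log_polar_magnitude(img2, radius)

# Phase correlation (eq. 13) on the log-polar magnitudes: the row shift is the
# rotation angle (rows span 360 degrees), the column shift encodes log(scale).
shift, _, _ = phase_cross_correlation(m1, m2)
angle = shift[0]                        # degrees, up to the 180-degree ambiguity
                                        # of the magnitude spectrum
klog = m1.shape[1] / np.log(radius)     # log-radial sampling used by warp_polar
scale = np.exp(shift[1] / klog)         # sign depends on which image is the reference
print(angle, scale)
```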

4 Experiments

In our experiments, we use the Weizmann dataset to evaluate our method; it contains 93 videos of 9 actors performing 10 actions (bend, jack, jump, pjump, run, side, skip, walk, wave1, wave2). Sample images are shown in Figure 3.

Fig. 3. Example images from the Weizmann dataset

Each silhouette image is first normalized to a fixed resolution. We then convert the image to the Radon domain and obtain an R curve by the R transform. Before extracting the invariant features with the Fourier Mellin Transform, we convert the curve into a 2D R transform image. To obtain more compact features, PCA is then applied.
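A sketch of one possible reading of this feature pipeline is given below. The assumption (mine, not stated explicitly in the text) is that the 2D "R transform image" is built by stacking the per-frame R curves of a sequence over time; it reuses normalized_r_transform() and log_polar_magnitude() from the earlier snippets, uses scikit-learn's PCA, and picks an arbitrary reduced dimensionality since the paper does not state one.

```python
# Sketch of the feature pipeline under my stacking assumption; reuses the two
# helper functions defined in the earlier snippets.
import numpy as np
from sklearn.decomposition import PCA

def sequence_descriptor(silhouettes, radius=64):
    """silhouettes: list of 2D binary frames belonging to one activity sequence."""
    curves = [normalized_r_transform(s) for s in silhouettes]   # eqs. (6) and (10), per frame
    r_image = np.stack(curves, axis=0)                          # frames x angles "R image"
    return log_polar_magnitude(r_image, radius).ravel()         # RST-invariant FMT features

# toy stand-in data: 8 random "sequences" of 20 binary frames each
rng = np.random.default_rng(0)
sequences = [[(rng.random((64, 64)) > 0.7).astype(float) for _ in range(20)]
             for _ in range(8)]

X = np.array([sequence_descriptor(seq) for seq in sequences])
X_reduced = PCA(n_components=5).fit_transform(X)    # compact per-sequence features
print(X.shape, X_reduced.shape)
```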

Since the periods of the activities are not uniform, comparing sequences is not straightforward: the same activity can be performed at different speeds, causing the sequence to be expanded or shrunk in time. To eliminate the effect of different speeds and to perform a robust comparison, the Earth Mover's Distance (EMD) [15] is used in our experiments. The EMD has shown promising performance in image retrieval and visual tracking because it finds the optimal signature alignment and can therefore measure similarity accurately.

For two arbitrary activity sequences P = {(p_i, w_{p_i}), 1 \le i \le m} and Q = {(q_j, w_{q_j}), 1 \le j \le n}, where m and n are the numbers of clusters in P and Q respectively, the EMD between P and Q is computed as

    D(P, Q) = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij} f_{ij}}{\sum_{i=1}^{m} \sum_{j=1}^{n} f_{ij}}    (22)

where d_{ij} is the Euclidean distance between p_i and q_j, and f_{ij} is the optimal match (flow) between the two signatures P and Q, computed by solving the following linear programming problem:

    \min \; \mathrm{WORK}(P, Q, F) = \sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij} f_{ij}

    \text{s.t.} \quad f_{ij} \ge 0, \quad 1 \le i \le m, \; 1 \le j \le n

    \sum_{j=1}^{n} f_{ij} \le w_{p_i}, \quad 1 \le i \le m

    \sum_{i=1}^{m} f_{ij} \le w_{q_j}, \quad 1 \le j \le n

    \sum_{i=1}^{m} \sum_{j=1}^{n} f_{ij} = \min\Big(\sum_{i=1}^{m} w_{p_i}, \; \sum_{j=1}^{n} w_{q_j}\Big)
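The EMD of eq. (22) and the transportation problem above can be solved with any linear programming solver. The following sketch uses SciPy's linprog with a Euclidean ground distance; the choice of solver is mine, as the paper does not name one.

```python
# Minimal EMD sketch implementing eq. (22) and the LP constraints above.
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def emd(p, wp, q, wq):
    """p: (m, d) cluster centres with weights wp; q: (n, d) with weights wq."""
    m, n = len(p), len(q)
    d = cdist(p, q)                       # d_ij, Euclidean ground distance
    c = d.ravel()                         # objective: sum_ij d_ij * f_ij (row-major f)

    # inequality constraints: row sums <= w_p, column sums <= w_q
    A_ub = np.zeros((m + n, m * n))
    for i in range(m):
        A_ub[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):
        A_ub[m + j, j::n] = 1.0
    b_ub = np.concatenate([wp, wq])

    # equality constraint: total flow = min(sum w_p, sum w_q)
    A_eq = np.ones((1, m * n))
    b_eq = [min(wp.sum(), wq.sum())]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    flow = res.x
    return float(c @ flow / flow.sum())   # eq. (22)

# toy example: two small signatures in a 2-D feature space
p = np.array([[0.0, 0.0], [1.0, 1.0]]);  wp = np.array([0.6, 0.4])
q = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0]]);  wq = np.array([0.3, 0.3, 0.4])
print(emd(p, wp, q, wq))
```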

4.1 Experiment 1

In this experiment, we evaluate our method with respect to rotation, translation, and scaling. Figures 4 and 5 show the correct recognition rates when the activity sequences are rotated or scaled. From the results we can see that the correct recognition rates of the rotated activities reach 90%, and those of the scaled activities reach 80%. For the translated sequences, the correct recognition rate is 100%.

Fig. 4. The correct recognition rates of rotated activity sequences

Fig. 5. The correct recognition rates of scaled activity sequences

Fig. 6. Example images from our dataset

4.2 Experiment 2

In this experiment, we build a dataset consisting of the original Weizmann sequences, sequences rotated by random angles between -30 and 30 degrees, translated sequences, and scaled sequences (a construction sketched below). Example images are shown in Figure 6. We compare our method with other methods, namely Zernike moments, the R transform [12], and the Fourier Mellin Transform; Figure 7 shows the correct recognition rates. From the figure we can see that our method outperforms the other three. The RST-invariant features based on the R transform and the Fourier Mellin Transform are effective and can be used for human activity recognition.
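A small sketch of how such an augmented evaluation set could be built with scikit-image and SciPy follows. Only the rotation range (-30 to 30 degrees) comes from the text; the translation and scaling ranges below are placeholders of my own choosing.

```python
# Augmentation sketch: rotated, translated, and scaled copies of a silhouette
# frame; translation and scale ranges are assumed, not taken from the paper.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.transform import rotate, rescale

rng = np.random.default_rng(0)

def augment_frame(frame):
    angle = rng.uniform(-30.0, 30.0)                  # random rotation in [-30, 30] degrees
    rotated = rotate(frame, angle, order=0)

    offset = rng.uniform(-10, 10, size=2)             # translation in pixels (assumed range)
    translated = nd_shift(frame, offset, order=0)

    factor = rng.uniform(0.8, 1.2)                    # scaling factor (assumed range)
    scaled = rescale(frame, factor, order=0)

    return rotated, translated, scaled

frame = np.zeros((128, 128)); frame[40:90, 55:75] = 1.0
rotated, translated, scaled = augment_frame(frame)
print(rotated.shape, translated.shape, scaled.shape)
```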

Fig. 7. The correct recognition rates

5 Conclusion

In this paper we present a novel human activity recognition method based on the R transform and the Fourier Mellin Transform (FMT). Our feature extraction is Rotation-Scaling-Translation invariant, which makes it useful for human activity recognition, especially when the camera is unstable. The experimental results show the effectiveness of our method.

Acknowledgment

This work is partly supported by NSFC (Grant No. and ) and the National 863 High-Tech R&D Program of China (Grant No. 2006AA01Z453).

References

1. Hu, W., Tan, T., Wang, L., Maybank, S.: A survey on visual surveillance of object motion and behavior. IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews 37 (2004)
2. Rao, C., Yilmaz, A., Shah, M.: View-invariant representation and recognition of actions. International Journal of Computer Vision 50 (2002)
3. Ogale, A., Karapurkar, A., Aloimonos, Y.: View-invariant modeling and recognition of human actions using grammars. In: Workshop on Dynamical Vision at ICCV, vol. 5 (2005)
4. Parameswaran, V., Chellappa, R.: View invariance for human action recognition. International Journal of Computer Vision 66 (2006)
5. Weinland, D., Ronfard, R., Boyer, E.: Free viewpoint action recognition using motion history volumes. Computer Vision and Image Understanding 104 (2006)
6. Weinland, D., Boyer, E., Ronfard, R.: Action recognition from arbitrary views using 3D exemplars. In: Proceedings of the International Conference on Computer Vision, pp. 1-7 (2007)
7. Li, X., Fukui, K.: View-invariant human action recognition based on factorization and HMMs. IEICE Transactions on Information and Systems (2008)

8. Shen, Y., Foroosh, H.: View-invariant action recognition using fundamental ratios. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-7 (2008)
9. Natarajan, P., Nevatia, R.: View and scale invariant action recognition using multiview shape-flow models. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8 (2008)
10. Gilbert, A., Illingworth, J., Bowden, R.: Scale invariant action recognition using compound features mined from dense spatio-temporal corners. In: European Conference on Computer Vision (2008)
11. Deans, S.: Application of the Radon Transform. Wiley Interscience Publications, New York (1983)
12. Wang, Y., Huang, K., Tan, T.: Human activity recognition based on R transform. In: Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8 (2007)
13. Tabbone, S., Wendling, L., Salmon, J.: A new shape descriptor defined on the Radon transform. Computer Vision and Image Understanding 102 (2006)
14. Reddy, B., Chatterji, B.: An FFT-based technique for translation, rotation, and scale-invariant image registration. IEEE Transactions on Image Processing (1996)
15. Rubner, Y., Tomasi, C., Guibas, L.: The Earth Mover's Distance as a metric for image retrieval. International Journal of Computer Vision 40 (2000)
