Output only based modal analysis of a reduced scale building using digital camera images

Proceedings of the 9th International Conference on Structural Dynamics, EURODYN 2014
Porto, Portugal, 30 June - 2 July 2014
A. Cunha, E. Caetano, P. Ribeiro, G. Müller (eds.)
ISSN: 2311-9020; ISBN: 978-972-752-165-4

Output only based modal analysis of a reduced scale building using digital camera images

Danilo Damasceno Sabino, João Antonio Pereira, Gustavo Luiz C. M. Abreu
Faculdade de Engenharia de Ilha Solteira, UNESP, Department of Mechanical Engineering, Ilha Solteira, Brazil.
email: danilosabino@hotmail.com, japererira@dem.feis.unesp.br, gustavo@dem.feis.unesp.br

ABSTRACT: The article discusses a proposal for displacement measurement using a single digital camera, aiming to exploit its feasibility for modal analysis applications. The proposal is a non-contact measuring approach able to measure multiple points simultaneously with a single digital camera. A modal analysis of a reduced scale lab building structure, based only on the responses of the structure measured with the camera, is presented. The focus is on the feasibility of using a simple ordinary camera for performing output only modal analysis of structures, and on its advantages. The modal parameters of the structure are estimated from the camera data and also by ordinary experimental modal analysis based on Frequency Response Functions (FRF) obtained with usual sensors such as an accelerometer and a force cell. The comparison of both analyses shows that the technique is a promising non-contact measuring tool, relatively simple and effective, to be used in structural modal analysis.

KEY WORDS: Image processing; Output only modal analysis; Non-contact.

1 INTRODUCTION

Vibration analysis of structures is a common task in many areas of engineering, and there is a variety of techniques using different types of sensors (contact or non-contact). However, when it is desired to measure the vibration of a large set of points simultaneously, it is still a difficult and sometimes very expensive task. This occurs because most sensors measure only one point and the number of available sensors is usually limited. The use of many contact sensors together can also interfere in some way with the behavior of the structure, mainly in small structures or microstructures, which suggests the use of non-contact sensors. Vibration measurement using non-contact sensors such as ultrasonic sensors [1-2] or laser sensors [3] is among the most commonly used techniques and methods; however, for structural vibration applications and modal analysis, they are still expensive. Currently, the use of digital images (CCD or CMOS sensors) as a non-contact sensor for vibration measurement is becoming an available technology, which is gaining more and more space in different engineering applications. The use of CCD and CMOS sensors can be an interesting option for measuring a set of vibrating points simultaneously, since they allow one to obtain a picture of the whole set of points of interest. The vibration of the system is captured by a video camera during the measuring process, and the sequence of frames of the video, when properly processed and analyzed, provides the necessary information to reconstruct the vibration of the whole set of points. Methods of displacement measurement using digital cameras usually require two cameras positioned at different angles to capture the displacement of a target object. The registered images of the movement of the target point are processed in order to extract useful features that allow the displacement of the target point registered in the images to be determined.
In the case of structural vibration, the movement of the target point or set of points is generally obtained by processing a video of the vibrating structure, frame-by-frame, and using correlation analysis and pattern recognition techniques to extract characteristics and parameters that allow the movement of the target set of points in the images to be estimated. These extracted parameters are used to calculate the vibration of the target set of points [4]. An alternative approach to the use of two cameras for displacement measurement has been discussed by the authors [5-6]. It is an extension of a proposal for measuring the distance to a target object using a single camera, discussed by Hsu et al. [7]. In this case, a video of the movement of the system is analyzed, frame-by-frame, and the change of position of the pixels of a set of points on the target object is determined and used to calculate the movement of the points. This paper discusses the usefulness of the approach for application in experimental structural modal analysis. Structural modal analysis using digital cameras is already an available technology [8], but the cost is still relatively high, limiting its use to specialized and high technology companies. Currently this technology is prohibitive for basic applications, mainly due to the cost of the cameras and their accessories, since very special cameras are required, which makes their access limited. The modal analysis of a reduced scale lab building structure, based only on the responses of the structure measured with the camera, is presented. The article focuses on the feasibility of using a simple ordinary camera for performing output only modal analysis of structures and on its advantages. The modal parameters of the structure are estimated from the camera data and also by ordinary experimental modal analysis based on Frequency Response Functions (FRF) obtained with usual sensors such as an accelerometer and a force cell.

2 DISPLACEMENT MEASUREMENT USING A DIGITAL CAMERA

The displacement measurement system is based on the change of position of the pixels of a target object in the images captured by a digital video camera. The formulation of the approach is developed based on the triangular relationship between image and object. In this case, assuming that the object is perpendicular to the optical axis of the camera, there is a direct relationship between the object size and the distance from the camera to the object, which is related to the number of pixels of the object. The distance from the object to the camera can be obtained by moving the camera toward or away from the object and counting the variation of the number of pixels of the object from one image to the other as the camera moves. Figure 1 illustrates a schematic diagram of a CCD camera capturing images of an object of length l at two different shooting distances, h1 and h2.

Figure 1. Schematic diagram of the camera positioning.

Assuming that the object is perpendicular to the optical axis of the camera, there is a direct relationship between the size of the object, l, and the displacement of the camera (Δh) from one shooting position to the other. Consequently, there will be a variation in the number of pixels of the target object in the image. Due to the position of the object, the number of pixels representing the size of the object in a shot at position h1, called N(h1), is a function of the distance h1, and the number of pixels representing the object at position h2, called N(h2), is a function of h2. Since the pixel count is inversely proportional to the distance from the optical centre, N(h1)(h1 + hs) = N(h2)(h2 + hs); taking this into account and knowing the camera displacement Δh = h2 - h1, estimates of the distances h1 and h2 are obtained, Eq. (1) and Eq. (2). The optical distance (hs) is obtained directly from the camera data sheet or it can be estimated as discussed in [5].

h1 = Δh N(h2) / (N(h1) - N(h2)) - hs    (1)

h2 = Δh N(h1) / (N(h1) - N(h2)) - hs    (2)

Once the distance from the camera to the object is determined, it is possible to estimate the actual size of the object and thus establish a relationship between the size and the number of pixels of the object, Eq. (3),

Δd = 2 (h* + hs) tan(θ/2) / Nmax    (3)

where h* is the estimated value of the distance h1 or h2, θ is the angle of view of the camera and Nmax is the maximum number of pixels of a row (column) of the image.

2.1 Image Processing and Parameters Extraction

The image processing step aims to improve the quality of the image, enhancing the target object so that it can be identified and clearly separated from the rest of the image scene. In this case, the captured images are processed using computational tools such as thresholding, erosion and dilation. Thresholding is the simplest method of image segmentation; it allows one to create a binary image through the definition of a threshold level [9]. The defined level, depending on the contrast and color of the target object, permits separating the object from the rest of the scene. Dilation and erosion are also used; these tools are the most basic morphological operations for border detection. Dilation adds pixels to the boundaries of an object in an image, while erosion removes pixels from the object boundaries. The basic idea in binary morphological operations is to probe the image with a simple, pre-defined shape, drawing conclusions on how this shape fits or misses the shapes of the objects in the image. The number of pixels added or removed from the objects in an image depends on the size and shape of the structuring element used to process the image [10].
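
The segmentation chain described above (thresholding followed by erosion and dilation) can be illustrated with a short script. The sketch below uses OpenCV and NumPy; the file name, threshold level and 3x3 structuring element are illustrative assumptions and not values taken from the paper.

```python
import cv2
import numpy as np

# Load one frame and convert it to a single-channel grayscale image.
# The file name is an illustrative placeholder.
frame = cv2.imread("frame_0001.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Thresholding: create a binary image in which the dark targets become
# white foreground. The level 60 is illustrative; in practice it depends
# on the contrast and color of the target, as noted in the text.
_, binary = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)

# Erosion removes pixels from the object boundaries (cleaning small
# speckles); dilation adds pixels back, restoring the approximate size.
kernel = np.ones((3, 3), np.uint8)   # simple square structuring element
binary = cv2.erode(binary, kernel, iterations=1)
binary = cv2.dilate(binary, kernel, iterations=1)

cv2.imwrite("frame_0001_binary.png", binary)
```
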
The definition of the camera parameters and of the relation of pixels per unit of measurement is obtained from preliminary tests. For that, the camera is properly located at position h1 and an image of the calibration target object is captured; later, the same image is captured with the camera at position h2. These images are properly processed and the target object is identified and separated from the rest of the scene, Figure 2.

Figure 2. Image of the target object (a) at position h1 and (b) at position h2; processed image (c) at position h1 and (d) at position h2.
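
The calibration step can be sketched numerically as follows, using Eqs. (1)-(3) as written above. All input values (pixel counts of the calibration target at the two positions, camera displacement, optical distance hs, angle of view and number of pixels per row) are illustrative assumptions, not the values used in the test.

```python
import math

def estimate_distances(n_h1, n_h2, delta_h, h_s):
    """Eqs. (1)-(2): distances h1 and h2 of the calibration target from the
    pixel counts N(h1) and N(h2), the known camera displacement delta_h and
    the optical distance h_s (all lengths in the same unit)."""
    h1 = delta_h * n_h2 / (n_h1 - n_h2) - h_s
    h2 = delta_h * n_h1 / (n_h1 - n_h2) - h_s
    return h1, h2

def units_per_pixel(h_star, h_s, theta_deg, n_max):
    """Eq. (3): length represented by one pixel at the estimated distance
    h_star, for a camera with angle of view theta_deg and n_max pixels per
    row (or column) of the image."""
    return 2.0 * (h_star + h_s) * math.tan(math.radians(theta_deg) / 2.0) / n_max

# Illustrative calibration: the target occupies 260 pixels at the near
# position and 240 pixels after moving the camera 0.10 m away from it.
h1, h2 = estimate_distances(n_h1=260, n_h2=240, delta_h=0.10, h_s=0.02)
dd = units_per_pixel(h_star=h1, h_s=0.02, theta_deg=50.0, n_max=640)
print(f"h1 = {h1:.3f} m, h2 = {h2:.3f} m, delta_d = {dd * 1000:.3f} mm/pixel")
```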

Once the target object has been identified, the number of pixels of the object is counted for positions h1 and h2, and the units/pixel relation is calculated according to Eq. (3). The position of each target point (columns and rows) is defined in terms of the number and position of the pixels, and later these are converted to measuring units using the estimated parameters of the camera.

3 TRACKING TECHNIQUES

According to Javed et al. [11], point detectors are used to locate points of interest in images that have an expressive texture in their respective localities. Points of interest have long been used in the context of motion, stereo vision and tracking problems. A desirable quality of a point of interest is its invariance to changes in illumination and in the viewpoint of the camera. In the literature, the detectors of points of interest commonly used are the KLT, SIFT and the Kalman filter.

Kanade-Lucas-Tomasi (KLT): it proposes a feature selection criterion that is optimal by construction, because it is based on how the tracker works, together with a feature monitoring method that can detect occlusions and features that do not correspond to points in the world [12].

SIFT: the Scale Invariant Feature Transform is an approach that transforms an image into a large collection of local feature vectors, each of which is invariant to image translation, scaling and rotation, and partially invariant to changes in illumination and to 3D or affine projections [13].

Kalman filter: it is essentially a set of mathematical equations that implements, by recursion, a predictor-corrector estimator. The filter is powerful in several aspects: it supports estimation of past, present and even future states, and it can do so even when the precise nature of the modeled system is unknown [14].

3.1 Identifying the centers of the targets

Once the target object has been identified, the position of the target object is defined in terms of the number and position of the pixels (columns and rows) of the object, and later these are converted to measuring units using the estimated parameters of the camera. The identification of the position of each object in the binary (black and white) image is made using a 2D contour box. The contour box is applied in order to centralize the object and to define its position from the position of its center. In order to define the contour box of the object, a sweep of the pixels of the image in the box is made. The sweep is carried out from left to right and from top to bottom; when a pixel with the value 0 (black color) is found, this position (X, Y) is marked as the beginning of the object, and when the next pixel in the same line becomes 1 (white color), it means that the pixel at the previous position is the end of the object, so this previous position is marked, as illustrated in Figure 3. If the beginning of another edge is found in the same line before the end of the line, this position is marked as the beginning of a new object, and so on.

Figure 3. Finding targets through edges.

After the definition of the edges of the objects, their corresponding shapes can be observed. A box is created to identify the geometric center of each object; four vertices, P1, P2, P3 and P4, are found, these being the largest and smallest values of the X and Y coordinates, that is, P1,i = (Xmin, Ymin), P2,i = (Xmax, Ymin), P3,i = (Xmax, Ymax) and P4,i = (Xmin, Ymax), where the index i represents the object (i = 1, 2, ..., number of target objects), Figure 4.

Figure 4. 2D contour box.
A simplified way to obtain the position of the center of the target object is to calculate the average value of the vertices, as illustrated in Figure 5. In this case, the coordinates, in terms of pixels, of the position of the center of the target object can be calculated by Eq. (4),

Pi(X, Y) = ((Xmax + Xmin)/2, (Ymax + Ymin)/2)    (4)

Figure 5. Center of the target object defined by the crossing lines.
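
A possible implementation of the contour-box and centre computation of Section 3.1 and Eq. (4) is sketched below. Instead of the explicit left-to-right, top-to-bottom pixel sweep described in the text, OpenCV connected-component labelling is used to obtain equivalent bounding boxes, and a simple nearest-centre rule with a distance tolerance stands in for the correlation test between consecutive frames described below. The function names, the minimum area and the tolerance are illustrative assumptions.

```python
import cv2
import numpy as np

def target_centers(binary, min_area=20):
    """Centres (x, y) of the targets in a binary image (targets = white).

    The centre of each bounding box is the average of its vertices,
    ((Xmax + Xmin)/2, (Ymax + Ymin)/2), as in Eq. (4). Components smaller
    than min_area pixels are discarded as noise (illustrative value).
    """
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    centers = []
    for i in range(1, n_labels):          # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < min_area:
            continue
        centers.append(((2 * x + w - 1) / 2.0, (2 * y + h - 1) / 2.0))
    return centers

def match_to_previous(centers, previous, tol=10.0):
    """Associate each centre with the closest centre of the previous frame.

    A distance tolerance in pixels stands in for the correlation threshold
    described in the text; an unmatched target is reported as None.
    """
    matched = []
    for cx, cy in centers:
        dists = [np.hypot(cx - px, cy - py) for px, py in previous]
        j = int(np.argmin(dists))
        matched.append(j if dists[j] < tol else None)
    return matched
```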

Once the center of the target object has been identified, this position is compared with the positions of the target objects found in the previous frame and correlated with them. When the correlation is less than a stipulated value, it means that the target object found does not correspond to the compared target object of the previous frame. However, if this value is greater than the stipulated one, the targets of both frames correspond to each other.

4 OUTPUT ONLY MODAL ANALYSIS USING A DIGITAL CAMERA

Structural modeling is an important step for the high performance and reliable operation of structures and equipment. Usually, the identification of the dynamic parameters of a model is done through classical experimental modal tests, which are based on the input-output relationship of the model or, alternatively, based only on the response of the model, the so-called output only based modal analysis. Unlike classical modal analysis, output only based modal analysis allows one to obtain the modal parameters of the model without measuring the input forces acting on the model. The latter case is much more attractive, since the modal parameters of the model can be estimated using only the responses, avoiding the difficulties and limitations of measuring the excitation of the model. The proposed approach of using only a digital camera for displacement measurement fits the requirements of output only modal analysis, and its use appears as a promising option to study and evaluate the structural dynamic behavior of models in the laboratory or even in real operating conditions. In this case, the proposed approach was used to obtain the displacement responses of a structure, and the modal parameters of the model were estimated using properly implemented output only based algorithms [15].

4.1 Reduced scale building structure analyzed

In this section the displacement measurement capability of the digital camera is discussed, aiming at evaluating its functionality for structural modal analysis applications. A two-floor controlled building lab structure was used for the tests. The building structure is assembled on a moving base, to which a mini-shaker was coupled, in order to receive the excitation force and vibrate according to the form of the applied force. Initially the displacement of the structure (two-floor building) was measured; the whole structure was measured at different points and the data were used to estimate the modal parameters of the model. The results were compared with data obtained with the use of conventional sensors (accelerometers).

4.2 Experimental apparatus

The measurement system includes a digital camera placed in front of the building structure, a tripod, a mini-shaker to excite the structure, and the corresponding acquisition system and sensors (accelerometers and force cell) used to obtain the FRF(s) for the ordinary experimental analysis. The structure is a two-floor controlled building lab structure manufactured by Quanser Consulting Inc. It has 5 mm in height, with each column being steel with a section of .75 x 08 mm. The total mass of the structure is 4.5 kg. In this case the control system of the structure was not used, and the excitation condition of the building consisted essentially of the excitation signal provided by a mini-shaker positioned on the base of the building. The structure was excited at one point and the responses at different points were measured using the camera. Figure 6 shows schematically the experimental set-up; a set of 20 measuring points was defined, which received adhesive labels aiming at facilitating their identification in the whole image during the image processing step.
Each column received 5 labels on the back part of the column and 5 labels on the front part, totalizing 20 measuring points.

Figure 6. Experimental set-up for measuring the vibration of the two-floor building with the camera.

The image acquisition rate used was 0 fps at a resolution of 640x480, and the camera was triggered to initiate the recording together with the excitation. The excitation signal input to the shake table was of random type, from 0 to 40 Hz, provided by the mini-shaker, Figure 7. The responses were all measured in the same direction as the excitation.

Figure 7. Mini-shaker and load cell.

The data used for the ordinary modal analysis were also measured in the same test in which the camera was used. A Vibpilot MP acquisition system was used, with five ICP mini accelerometers and a force cell, both from PCB. The structure was instrumented with the accelerometers and the force cell before the test, so that the data for both analyses were measured under the same conditions.

4.3 Image processing and measuring signals

After the acquisition of the video, the image processing step allowed obtaining the position of the target points in the image for each frame and the corresponding number of pixels. The image was cropped at the region of interest in order to eliminate undesired points or components that may appear in the scene and are not of interest. This decreases the processing time and facilitates the identification of the target points in the image. Figures 8a and 8b show, respectively, the image of the whole scene and the cropped one.

Figure 8. Original image (a) and cropped image in the region of interest (b).

After the definition of the region of interest, the thresholding, erosion and dilation tools [9] were used to separate the target objects from the rest of the scene, and the position of the centroid of each target point for each frame was calculated. The change of its position with respect to the reference frame was calculated in terms of pixels, to be used to calculate the corresponding displacement of each target point (adhesive label). Figure 9 shows some steps of the image processing stage used to identify and separate the target points from the rest of the scene according to the employed tools.

Figure 9. Threshold (a), erosion (b), dilation (c) and calculation of the centroids (d).

Once the position of each target point is defined and the value of the correction factor Δd (Section 2) is known, it is possible to obtain the displacements of the targets in the corresponding measuring unit. The displacement of each point of the measuring mesh was estimated by processing the video images of the movement of the structure. The displacement signals of all points represent the movement of the structure in the horizontal direction. Figure 10 presents the map of the measuring points and Figure 11 shows the measured displacement signal of each measured point of the building.

Figure 10. Map of the measuring points.

Figure 11. Measured displacement signals of the measuring points.
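
The complete measuring chain of Section 4.3 can be summarized in a processing loop like the one below. It reuses the hypothetical target_centers helper sketched in Section 3.1, crops a fixed region of interest, segments the labels, and converts the horizontal pixel motion of each target with respect to the first (reference) frame into displacement through the correction factor Δd of Section 2. The video file name, region of interest, threshold and Δd value are illustrative assumptions.

```python
import cv2
import numpy as np

# Illustrative parameters: region of interest (x, y, width, height),
# threshold level and units/pixel factor Delta_d from the calibration step.
ROI = (100, 40, 240, 400)
THRESH = 60
DELTA_D = 0.00175            # metres per pixel, from Eq. (3)

cap = cv2.VideoCapture("building_test.avi")     # illustrative file name
kernel = np.ones((3, 3), np.uint8)
reference, displacements = None, []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    x, y, w, h = ROI
    gray = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, THRESH, 255, cv2.THRESH_BINARY_INV)
    binary = cv2.dilate(cv2.erode(binary, kernel), kernel)

    # Centres of the adhesive labels in this frame (target_centers is the
    # helper sketched in Section 3.1), sorted from top to bottom so that
    # each measuring point keeps the same index in every frame. It is
    # assumed here that every label is detected in every frame.
    centers = sorted(target_centers(binary), key=lambda c: c[1])
    if reference is None:
        reference = centers                      # first frame = reference
    # Horizontal displacement of each target, converted to metres.
    displacements.append([(c[0] - r[0]) * DELTA_D
                          for c, r in zip(centers, reference)])

cap.release()
signals = np.array(displacements)                # shape: (frames, targets)
```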

4.4 Modal Analysis of the Building Structure

This section discusses the modal analysis of the structure based on the measured responses when the structure was excited with a random signal with energy from 0 to 40 Hz. The parameters of the model were estimated using the SSI algorithm implemented in the software OEMA [16]. The whole measured output matrix was used to estimate the first three frequencies and mode shapes of the model. The identification software was developed and implemented in a modular way, with graphical interfaces. The various graphical interfaces contain several tools to aid the user, which allow selecting cutoff frequencies, filtering and decimating the signals, entering the range of orders used in the stabilization diagram, and others. Figure 12 shows the graphical interface in which the user can observe and analyze the original signals or even perform some pre-processing, decimate, filter and set the parameters of the algorithm.

Figure 12. Graphical interface for the input parameters used in the SSI.

Once the main input parameters of the algorithm are defined, the process of identifying the modal parameters begins. Figure 13 shows the graphical interface that displays the stabilization diagram used for the identification and separation of the real modes from the computational ones.

Figure 13. Stabilization diagram.

The parameters of the model were estimated using only the responses of the structure with the SSI algorithm; in the analysis, the first three frequencies and mode shapes of the model were estimated. The natural frequencies are presented in Table 1 and the first three mode shapes are shown in Figure 14.

Table 1. Natural frequencies of the structure obtained with the camera.

  Mode        1       2       3
  Freq (Hz)           5.7     33.4
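
The SSI identification itself is performed by the OEMA software and is not reproduced here. As a generic illustration of the last step behind Table 1, the sketch below converts an identified discrete-time state matrix into natural frequencies and damping ratios through its eigenvalues; the state matrix A and the sampling rate are placeholders, and the routine is not the implementation used in OEMA.

```python
import numpy as np

def modal_parameters(A, fs):
    """Natural frequencies (Hz) and damping ratios from the discrete-time
    state matrix A identified by a stochastic subspace (SSI) algorithm,
    with the responses sampled at fs frames per second."""
    mu = np.linalg.eigvals(A)                  # discrete-time eigenvalues
    lam = np.log(mu.astype(complex)) * fs      # continuous-time eigenvalues
    wn = np.abs(lam)                           # circular frequencies [rad/s]
    zeta = -np.real(lam) / wn                  # damping ratios
    order = np.argsort(wn)
    # Each physical mode appears twice (complex-conjugate eigenvalue pair).
    return wn[order] / (2.0 * np.pi), zeta[order]

# Placeholders: A would come from the SSI identification of the camera
# displacement signals and fs is the image acquisition rate.
# freqs_hz, damping = modal_parameters(A, fs)
```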

Figure 14. Mode shapes of the reduced scale building.

The modal parameters of the model were also estimated using the FRF(s) obtained with the acquisition system. In the ordinary modal analysis, the parameters were estimated using the Ibrahim method, also implemented in the software OEMA. The natural frequencies are presented in Table 2.

Table 2. Natural frequencies of the structure obtained with the accelerometers.

  Mode        1       2       3
  Freq (Hz)           5.5     34.7

The results of the two measurement methods were compared. In this case, the natural frequencies can be compared directly and the mode shapes can be compared through the value of the MAC (correlation of the signals). Table 3 presents the values identified by the two methods.

Table 3. Natural frequencies identified with the camera and with the accelerometers, and corresponding MAC values.

  Mode   Camera - Freq (Hz)   Accelerometers - Freq (Hz)   MAC
  1                                                        0.98
  2      5.7                  5.5                          0.95
  3      33.4                 34.7                         0.96

The parameters estimated using the output only responses measured with the digital camera are of the same order as those estimated from the FRF(s) measured with the usual modal analysis sensors.
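
The MAC values of Table 3 follow the standard Modal Assurance Criterion. A minimal sketch is given below, assuming the camera-based and accelerometer-based mode shapes are available as columns of NumPy arrays (the array names are placeholders).

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors."""
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    return num / (np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real)

# Example: MAC between corresponding camera-based and accelerometer-based
# mode shapes, stored one mode per column (array names are placeholders).
# modes_cam, modes_acc = ...   # shape (n_points, n_modes)
# mac_values = [mac(modes_cam[:, k], modes_acc[:, k])
#               for k in range(modes_cam.shape[1])]
```
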
5 FINAL REMARKS

The article discussed a proposal for displacement measurement using a single digital camera, aiming to exploit its application in the modal analysis area. It compared the output only modal analysis results with the results of an ordinary FRF based modal analysis using accelerometers. The results obtained using the output only responses measured with the digital camera are equivalent to those estimated from the FRF(s) measured with the usual modal analysis sensors. Thus, the technique has been shown to be a promising non-contact measuring tool, relatively simple and effective, to be used in structural modal analysis.

REFERENCES

[1] Carullo, A. and Parvis, M., An ultrasonic sensor for distance measurement in automotive applications. IEEE Sensors Journal, Vol. 1, No. 2, pp. 143-147, 2001.
[2] Song, K. T. and Tang, W. H., Environment perception for a mobile robot using double ultrasonic sensors and a CCD camera. IEEE Transactions on Industrial Electronics, Vol. 43, No. 3, pp. 372-379, 1996.
[3] Svirdov, S. A. and Sterlyagov, M. S., Sea surface slope statistics measured by laser sensor. Proceedings of Oceans Engineering for Today's Technology and Tomorrow's Preservation, p. 900, 1994.
[4] Sutton, M. A. et al., Image Correlation for Shape, Motion and Deformation Measurements: Basic Concepts, Theory and Applications. Springer, New York, 2009.
[5] Lima, E. A. et al., Displacement measurement using digital camera images. VII National Congress of Mechanical Engineering - CONEM 2012, São Luís, Brazil, 2012.
[6] Sabino, D. D. et al., Measurements of displacement of the structure of a reduced scale building lab using digital camera images. 22nd International Congress of Mechanical Engineering - COBEM 2013, Ribeirão Preto, Brazil, 2013.
[7] Hsu, C. C., Distance measurement based on pixel variation of CCD images. ISA Transactions, Vol. 48, pp. 389-395, 2009.
[8] Peeters, B. et al., Experimental modal analysis using camera displacement measurements: a feasibility study. Sixth International Conference on Vibration Measurements by Laser Techniques, Bellingham, USA, 2004.
[9] Gonzales, R. C. and Woods, R. E., Digital Image Processing. Prentice Hall, 2002.
[10] Shapiro, L. G. and Stockman, G. C., Computer Vision. Prentice Hall, 2001.
[11] Javed, O. et al., Object tracking: a survey. ACM Computing Surveys, Vol. 38, No. 4, Article 13, 2006.
[12] Tomasi, C. and Shi, J., Good features to track. IEEE Conference on Computer Vision and Pattern Recognition, Seattle, 1994.
[13] Lowe, D. G., Object recognition from local scale-invariant features. Proceedings of the Seventh IEEE International Conference on Computer Vision, Vol. 2, pp. 1150-1157, 1999.
[14] Bishop, G. and Welch, G., An Introduction to the Kalman Filter. University of North Carolina at Chapel Hill, 2001.
[15] Freitas, T. C. and Pereira, J. A., Análise modal experimental de uma estrutura do tipo frame utilizando apenas dados de resposta. V National Congress of Mechanical Engineering - CONEM 2008, Salvador, Brazil, 2008.
[16] OEMA - Operational & Modal Analysis Software. Developed at the Departamento de Engenharia Mecânica, Faculdade de Engenharia Mecânica de Ilha Solteira, UNESP, by Prof. Dr. Pereira, J. A.