Monocular Vision-based Displacement Measurement System Robust to Angle and Distance Using Homography


6th International Conference on Advances in Experimental Structural Engineering
11th International Workshop on Advanced Smart Materials and Smart Structures Technology
August 1-2, 2015, University of Illinois, Urbana-Champaign, United States

Monocular Vision-based Displacement Measurement System Robust to Angle and Distance Using Homography

Jun-Hwa Lee 1, Soojin Cho 2, Sung-Han Sim 3
1 B.S. Student, School of Urban and Environment Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea. E-mail: hnsm19@unist.ac.kr
2 Research Assistant Professor, School of Urban and Environment Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea. E-mail: soojin@unist.ac.kr
3 Assistant Professor, School of Urban and Environment Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea. E-mail: ssim@unist.ac.kr

ABSTRACT
The displacement of a civil structure contains important information about structural behavior that helps confirm its safety and serviceability. Displacement measurement with conventional devices such as the LVDT remains challenging, however, because of cost-effectiveness issues and limited accessibility of measurement locations. This paper presents a monocular vision-based displacement measurement system (VDMS) that is cost-effective and easy to install. The proposed system is composed of a low-cost commercial camera, a specially designed marker, and a laptop computer. The motion of the marker attached to the structure is captured by the camera and subsequently converted to the time history of the dynamic displacement response using planar homography. Planar homography is a transformation matrix that maps between any two planes in three-dimensional space; it can map the image plane onto the real marker's plane even when that plane is at an angle to the camera axis. The system was validated in a lab-scale test using a shaking table.
Experimental results show that the proposed system agrees well with the LDV reference regardless of camera position.

KEYWORDS: Displacement measurement, vision, camera, marker, planar homography

1. INTRODUCTION

Structural safety is an important issue for enhancing the sustainability of the civil environment. Because displacement responses indicate structural behavior under dynamic loading, displacement has been widely used as an index for confirming the serviceability and safety of structures. For that reason, displacement should be measured while satisfying several requirements: accuracy acceptable for use as a safety indicator, inexpensive system cost, easy installation, and short measurement time. To satisfy these requirements, various types of displacement measurement systems and devices have been developed.

Traditional displacement measurement is conducted with devices such as the linear variable differential transformer (LVDT), the laser Doppler vibrometer (LDV), and the global positioning system (GPS). The LVDT measures displacement through the linear relationship between the movement of a cylindrical magnetic core and the induced electrical signal. Although the LVDT provides acceptable accuracy at low cost, the magnetic core must be in contact with the structural surface. Hence, additional structures such as jigs and scaffolds must be constructed around the measurement location to fix the transducer. These additional structures can generate a significant level of noise due to their own movement; moreover, placing them near the measurement location is often impossible because of water or traffic under the structure. The LDV measures displacement via the Doppler shift of a laser. It provides displacement measurement down to picometer accuracy and an easy installation process, requiring only that the laser be pointed at the measurement point. Its disadvantages are the high cost of the device and the inflexibility of the installation point, since measurement is possible only along the laser direction.
The GPS measures displacement by obtaining the position of a receiver through satellite ranging. Since the reference points are in space, attaching a GPS receiver at the measuring point is all that is required to measure displacement. However, GPS technology is not capable of millimeter resolution, and its price is also very high.

The camera has been considered an effective alternative to existing displacement measurement devices for civil structures, as camera capability keeps increasing while prices fall. Various types of vision-based displacement measurement systems (VDMS) have been developed. Most VDMS use a marker of known geometry that is attached to the structure; the movement of the marker is captured by a camera [1][2][3][4], and a computer algorithm then detects the position of the marker in the video and calculates the structural displacement. The various types of VDMS differ mainly in the displacement measurement algorithm. In this study, a VDMS based on planar homography is proposed for real-time planar displacement measurement. The proposed system requires a single camera that does not need to be calibrated, a planar marker with four white circles on a black background, and a laptop computer running a real-time algorithm that detects the marker positions and calculates the structural displacement using the planar homography algorithm. The real marker's plane and the marker image projected on the image sensor are related by a planar homography, which allows the camera to be located arbitrarily. A lab-scale experiment on a shaking table is conducted to verify the system's robustness to the distance and angle between the camera and the marker.

2. MONOCULAR VISION-BASED DISPLACEMENT MEASUREMENT SYSTEM

The proposed monocular vision-based displacement measurement system measures structural displacement using a single camera and an image processing algorithm. The system is composed of hardware that takes a video of a marker attached to the structural surface and software that calculates the structural displacement from the video using Otsu's thresholding method and the planar homography mapping algorithm. The hardware and software are described in detail in the following subsections.

2.1. Hardware

The monocular VDMS used in this study consists of a marker, a camera, a frame grabber, and a laptop computer, as shown in Figure 2.1. The marker is attached to the structure to be monitored and moves along with the structure.
The marker carries the information needed to convert pixel units to metric units. Since the planar homography is calculated from at least four points on the two planes, the marker is designed with four large white circles against a black background. The camera is placed on any stable spot from which it can see the marker, and takes a video (i.e., a series of images) of the marker's movement. The frame grabber transmits the video to the computer frame by frame so that the computer can process the digital video data sequentially. The computer calculates the metric displacement from each frame using the software described in the following section. The hardware used in this study is listed in Table 2.1. This configuration is simple to install, since no preprocessing such as camera calibration is required (the homography matrix already contains the camera calibration information). In addition, displacement can be measured from 20-30 m away from the structure using the optical zoom, and the range can be extended further with additional optical zoom.

Figure 2.1 Hardware configuration of VDMS

Table 2.1 Hardware details
Parts         | Model           | Features
Camera        | SCZ-2373n       | NTSC output interface (640x480 resolution, 29.97 fps), x37 optical zoom
Frame grabber | myvision-usb DX | NTSC/PAL input interface, USB 2.0 output interface
Laptop        | LG A510         | 1.73 GHz Intel Core i7 CPU, 4 GB DDR3 RAM

2.2. Software

Displacement measurement starts by finding the centroids of the four white circles in the captured images, i.e., extracting their image coordinates. Real-time extraction of the image coordinates is achieved by binarizing the image with Otsu's thresholding method and taking the mean position of the white pixels in the binarized image. The four centroid points are used to convert the movement of the marker in the images into movement in the real world. The metric displacement is calculated by mapping the image coordinate system onto the marker plane, i.e., the world coordinate system, via the planar homography mapping algorithm. The overall process is illustrated in Figure 2.2.

Figure 2.2 Overall software process

2.2.1. Extraction of image coordinates

Extraction of image coordinates is the process of locating the white circles in the image. Each image contains four white circles to be processed. To extract the four image coordinates independently, four regions of interest (ROIs) are designated by the user. Figure 2.3 shows an example of four user-assigned ROIs placed around the circles in the image; each ROI includes exactly one white circle on the black background. After the ROIs are assigned, an image coordinate is extracted for each ROI as Figure 2.4 shows. The first image in Figure 2.4 is an example of a white circle in one ROI. Within the ROI, the threshold that optimally separates the white circle from the black background is found using Otsu's thresholding method [5]. This threshold is used to form a binary image, as in the middle image of Figure 2.4, by assigning one to pixels whose value is larger than the threshold (white pixels) and zero to the others (black pixels). Pixels with value one are considered part of the white circle, so the average of their coordinates is the centroid of the circle, as shown in the last image of Figure 2.4. Repeating this process for the other ROIs yields the four image coordinates for each frame. The whole process is very fast (about 1/300 s) owing to its simplicity, so the image coordinates can be extracted in real time with a regular camera running at 29.97 fps or higher.

Figure 2.3 Example of ROI setting
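As a rough illustration of this extraction step, the sketch below implements Otsu's threshold and the white-pixel centroid in NumPy. This is not the authors' MATLAB implementation; the function names and the single-circle ROI assumption are illustrative.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (Otsu's method).

    `gray` is a 2-D array of 8-bit intensities. Pixels > threshold are foreground.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    counts = np.cumsum(hist)                    # background pixel count for each threshold
    moments = np.cumsum(hist * np.arange(256))  # background intensity sum for each threshold
    total, moment_total = counts[-1], moments[-1]
    wb = counts[:-1]                            # background weight, thresholds 0..254
    wf = total - wb                             # foreground weight
    valid = (wb > 0) & (wf > 0)
    mb = moments[:-1] / np.where(wb > 0, wb, 1)                  # background mean
    mf = (moment_total - moments[:-1]) / np.where(wf > 0, wf, 1)  # foreground mean
    var_between = wb * wf * (mb - mf) ** 2
    var_between[~valid] = -1.0
    return int(np.argmax(var_between))

def circle_centroid(roi):
    """Binarize one ROI with Otsu's threshold and return the white-pixel centroid (u, v)."""
    t = otsu_threshold(roi)
    ys, xs = np.nonzero(roi > t)  # coordinates of pixels assigned the value one
    return xs.mean(), ys.mean()
```

Applied once per ROI and per frame, this yields the four image coordinates used in the homography step.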

Figure 2.4 Extraction process of image coordinate in an ROI

2.2.2. Metric displacement calculation

Metric displacement is calculated from the image coordinates using the planar homography mapping algorithm. Planar homography is a transformation between two planes in the same projective space [6]. The marker plane, as Figure 2.5 shows, is projected onto the image sensor, i.e., the image plane; thus the marker plane and the image plane are in the same projective space, and a homography relationship between them is established.

Figure 2.5 Planar homography relationship between image plane and marker plane

This planar homography relationship is written as

    s i_j = H w_j    (2.1)

where s is a scaling factor, i_j = [u_j v_j 1]^T is the image coordinate of the j-th circle in the image plane, w_j = [x_j y_j 1]^T is the world coordinate of the j-th circle in the marker plane, and H is the 3-by-3 planar homography matrix. Once H is computed, metric coordinates in the marker plane can be obtained by the inverse homography transform of the image coordinates. To compute the eight degrees of freedom of H, the eight-point (direct linear transformation) algorithm [7] is used. This algorithm requires at least four point correspondences (eight equations); the world coordinates of the four circles in the marker plane and the corresponding image coordinates in the first captured image are used. Since s i_j and H w_j are parallel vectors, equation (2.1) can be rewritten as

    i_j x H w_j = 0    (2.2)
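This constraint yields linear equations in the nine entries of H, which the rest of this subsection develops. As a compact preview, the sketch below estimates H from the four circle correspondences and maps an image point back to the marker plane. It is an illustrative NumPy re-implementation, not the authors' MATLAB code: the function names are assumptions, and the null vector is found by a single SVD of the stacked system rather than the decomposition used in the derivation.

```python
import numpy as np

def homography_dlt(world_pts, image_pts):
    """Estimate the 3x3 homography H with s*i_j = H*w_j from >= 4 correspondences.

    `world_pts` and `image_pts` are sequences of (x, y) and (u, v) pairs.
    Two linear equations are stacked per point; the solution is the right
    singular vector associated with the smallest singular value.
    """
    rows = []
    for (x, y), (u, v) in zip(world_pts, image_pts):
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
    A = np.array(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)  # 9-vector h rearranged into H, up to scale

def image_to_world(H, uv):
    """Recover a marker-plane point from an image point via the inverse homography."""
    p = np.linalg.solve(H, np.array([uv[0], uv[1], 1.0]))
    return p[:2] / p[2]  # normalize so the third homogeneous entry is 1
```

Because H is recovered only up to scale, the normalization by the third homogeneous entry in `image_to_world` is what fixes the metric coordinates.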

Expanding the cross product gives

    [  0^T        -w_j^T      v_j w_j^T ] [ h^1 ]
    [  w_j^T       0^T       -u_j w_j^T ] [ h^2 ] = 0    (2.3)
    [ -v_j w_j^T   u_j w_j^T   0^T      ] [ h^3 ]

where h^{kT} is the k-th row of H. The matrix in equation (2.3) has rank 2, so for simplicity only the first two rows are kept:

    [ 0^T      -w_j^T    v_j w_j^T ] [ h^1 ]
    [ w_j^T     0^T     -u_j w_j^T ] [ h^2 ] = 0    (2.4)
                                     [ h^3 ]

Equation (2.4) is a homogeneous linear equation of the form A_j h_{9x1} = 0, where h_{9x1} is the 9-by-1 rearrangement of H. Stacking the equations for the four points,

    [A_1 A_2 A_3 A_4]^T h = L^T h = 0    (2.5)

Thus h is a vector in the null space satisfying L^T h = 0. In practice, equation (2.1) does not hold exactly because of noise; in that case, the h that minimizes the norm of L^T h should be taken. Taking the singular value decomposition of L L^T, the left singular vector corresponding to the minimum singular value is the optimal h. Finally, since s in equation (2.1) equals the third entry of H w_j, i.e., s = h^{3T} w_j, the world coordinate of the j-th circle can be calculated as

    w_j = (1 / (h^{3T} w_j)) H^{-1} i_j    (2.6)

(equivalently, H^{-1} i_j is normalized by its third entry so that the third entry of w_j is 1). No matter where the camera is located, the homography relationship holds as long as the camera can see the marker. Thus, using planar homography, the user can place the camera at any convenient point regardless of the angle between the marker and the camera.

3. LAB-SCALE EXPERIMENT

To verify the performance of the proposed VDMS, a lab-scale dynamic experiment was conducted using a shaking table. As Figure 3.1 shows, a marker is fixed on top of the shaking table and the camera is repositioned by changing its distance and angle from the marker. The camera's optical zoom is set so that the marker appears as large as possible in the image. The real-time displacement measurement software is implemented in MATLAB 2015a. For comparison, an LDV with sub-µm resolution (RSV-150) is also installed to measure the reference displacement of the marker. In the experiment, the shaking table is excited harmonically at 1 Hz, and the camera position is varied in distance from 4 m to 20 m in 4 m intervals and in angle from 0° to 60° in 15° intervals, as shown in Figure 3.2.

Figure 3.1 Lab-scale dynamic test

Figure 3.2 Top view of the experiment

Figure 3.3 shows two experimental results obtained at different angles and distances. Both graphs show close agreement between the proposed system and the LDV, even though the camera location differs between the two experiments. Thus, robustness to the angle and distance between camera and marker is achieved by means of the planar homography and the camera's optical zoom.

Figure 3.3 Two experimental results

Figure 3.4 shows the standard deviation error of all experimental results, computed using Eq. (3.1):

    Error = std(x_LDV - x_proposed) / std(x_LDV)    (3.1)

To compensate for the sampling-rate difference between the LDV (6 kHz) and the camera (29.97 Hz), the two time series are synchronized by taking the union of the two time vectors and linearly interpolating each displacement vector at its missing points. The graph shows that the error is unrelated to the change of angle and distance; in other words, the proposed system is robust to the angle and distance between marker and camera.

Figure 3.4 Overall standard deviation result
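The error metric of Eq. (3.1), including the resampling step, can be sketched as below. This illustrative version simply resamples the camera signal onto the denser LDV time base by linear interpolation, a simplification of the union-of-time-vectors scheme described above; the function name is an assumption.

```python
import numpy as np

def displacement_error(t_ldv, x_ldv, t_cam, x_cam):
    """Normalized standard-deviation error of Eq. (3.1).

    The camera displacement (29.97 Hz) is linearly interpolated onto the
    LDV time base (6 kHz) before the residual is computed.
    """
    x_cam_on_ldv = np.interp(t_ldv, t_cam, x_cam)  # linear interpolation at LDV times
    return float(np.std(x_ldv - x_cam_on_ldv) / np.std(x_ldv))
```

For the 1 Hz harmonic excitation used in the test, the interpolation error of a ~30 Hz sampled sinusoid is well below 1%, so this metric is dominated by actual measurement error rather than resampling error.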

4. CONCLUDING REMARKS

This study proposes a real-time monocular vision-based displacement measurement system that places no constraint on the camera installation point. Fast extraction of accurate image coordinates is achieved by means of Otsu's thresholding method, and the unconstrained camera placement is attributed to the planar homography that maps between two planes in 3D space. To verify this capability, a lab-scale experiment was conducted on a shaking table: the displacement of the shaking table was measured by the proposed system and compared with a sub-µm-resolution LDV. The results can be summarized as follows: (1) The angle and distance between marker and camera have no significant effect on the measurement error; thus, the proposed system can measure the displacement of civil structures accurately, in real time, and with a simple installation procedure. (2) The camera used in this study is a commercial-grade camera with 640x480 resolution at 29.97 fps; using a higher-performance camera would further enhance the displacement measurement. (3) The system remains to be verified in various field tests to demonstrate its practical performance.

ACKNOWLEDGEMENT
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2012-R1A1A1-042867).

REFERENCES
1. Ji, Y. F., & Chang, C. C. (2008). Nontarget stereo vision technique for spatiotemporal response measurement of line-like structures. Journal of Engineering Mechanics, 134:6, 466-474.
2. Lee, J. J., & Shinozuka, M. (2006). A vision-based system for remote sensing of bridge displacement. NDT & E International, 39:5, 425-431.
3. Chang, C. C., & Xiao, X. H. (2009). Three-dimensional structural translation and rotation measurement using monocular videogrammetry. Journal of Engineering Mechanics, 136:7, 840-848.
4. Fukuda, Y., Feng, M. Q., Narita, Y., Kaneko, S., & Tanaka, T. (2013). Vision-based displacement sensor for monitoring dynamic response using robust object search algorithm. IEEE Sensors Journal, 13:12, 4725-4732.
5. Sezgin, M., & Sankur, B. (2004). Survey over image thresholding techniques and quantitative performance evaluation. Journal of Electronic Imaging, 13:1, 146-168.
6. Hartley, R., & Zisserman, A. (2003). Multiple View Geometry in Computer Vision. Cambridge University Press.
7. Agarwal, A., Jawahar, C. V., & Narayanan, P. J. (2005). A survey of planar homography estimation techniques. Centre for Visual Information Technology.