Tracking Trajectories of Migrating Birds Around a Skyscraper


By Brian Crombie and Matt Zivney
Project Advisors: Dr. Huggins, Dr. Stewart

Abstract

In this project, the trajectories of birds flying near tall structures are tracked so that wildlife biologists can study bird kills caused by these structures. The system uses two video cameras, a high-performance frame grabber, a boom system, and a computer to calculate and display the trajectories of the birds flying around these structures. The project has two major software subsystems: the preprocessor and the trajectory calculator. At the current stage of development, the system can track multiple moving objects with reasonable accuracy in a laboratory setting.

Table of Contents

1. Introduction
2. System Description
3. Modes of Operation
4. Block Diagrams
5. Software Flowcharts
6. Design Equations
7. Computer Simulations
8. Software
9. Analysis of Results
10. Data Sheet
11. Suggestions for Future Work
12. Standards
13. Patents
14. References

1. Introduction

The goal of this project is to develop a system to calculate, store, and display the trajectories of birds flying in the system's field of view, within 100 meters of a tower. Birds are identified and their positions found through stereoscopic imaging. The system displays bird trajectories in three dimensions.

2. System Description

As shown in Figure 1, the system consists of a pair of cameras used to capture two simultaneous views of a scene. The images captured by the cameras are processed to remove any static components, leaving only the moving objects against a black background. The objects identified by each camera are correlated for each frame, so that objects captured in the upper camera are matched with objects captured in the lower camera. The objects are then correlated between frames to obtain their trajectories.

[Figure 1: System Level Block Diagram]

3. Modes of Operation

Setup: Setup mode occurs when the system is initialized. It verifies that data can be retrieved from the cameras and allows system parameters to be adjusted by the user.

Data Acquisition: Data Acquisition mode obtains images of birds and then calculates and stores the birds' trajectories.

Data Display: Data Display mode retrieves trajectory data from storage and displays the 3-D trajectories of the tracked birds.

4. Block Diagrams

Camera Subsystem

The two cameras shown in Figure 2 convert photons into usable images for the computer. The user sets the cameras some distance apart and parallel to each other. Using two cameras allows objects to be located in three dimensions, as discussed in section 6, Design Equations.

[Figure 2: Camera Subsystem Block Diagram]

The Image output of each camera is an analog video frame sent to a frame grabber interfaced with the computer. The cameras are synchronized to an external source using a technique such as line lock or genlock.

Operation by Mode

Setup: In setup mode the cameras capture images to verify a connection with the PC. In addition, the cameras are calibrated using a test pattern to determine the coefficients that characterize the radial distortion introduced by the camera lens.

Data Acquisition: In data acquisition mode the cameras capture images, which are processed by the PC.

Data Display: The cameras are not used in data display mode.

PC Subsystem

The PC subsystem uses custom image processing software to find moving birds, identify their positions, and calculate their trajectories.

[Figure 3: PC Subsystem Block Diagram]

Inputs

Camera Images: The camera images are the primary inputs to the object identification, locating, and tracking algorithms.

Outputs

Camera Displays: The camera displays show real-time images from Camera 1 and Camera 2.

Data Display: The data display shows calculated locations and trajectories in 3-D.

Raw Data File: The raw data file stores all calculated values for later display.

Operation by Mode

Setup: In setup mode the PC requests input from the user and may turn the boom to a position determined from that input. The camera images may also be displayed so the user can verify that the cameras are operating correctly and are focused on the desired volume of space.

Data Acquisition: In data acquisition mode the PC acquires images from both cameras and processes them to calculate trajectories. The moving objects in each image are correlated with the moving objects in the other image. From the location of each object in the two images, the object's position is determined in three dimensions. Finally, the moving objects are correlated with tracking information from past frames to determine the trajectory of each object.

Data Display: In data display mode the user is first prompted to specify, by the date and time the trajectories were calculated, which stored trajectories to view. This information is used to open the files that contain the requested data. Next, the user is given the option to display the trajectories with respect to time; if they choose this option, they must enter the start and end times for the trajectories. The user may then change the viewing angle of the 3-D plot before the trajectories are displayed with respect to time. If the user does not wish to view the trajectories with respect to time, the trajectories are all displayed at once, and after they are displayed the user can alter the viewing angle.

5. Software Flowcharts

The following are software flowcharts for the three modes of operation: Setup, Data Acquisition, and Data Display.

Setup Mode Flowchart: the system obtains initialization parameters from the user, including the camera position; if the input is invalid the user is prompted again, and once valid input is received the system proceeds to Data Acquisition mode.

[Figure 4: Setup Mode Software Flowchart]

Data Acquisition Mode Flowchart: each pass obtains images from the cameras and runs two stages. The preprocessing stage adds the images to the running averages, subtracts the averages from the images, low-pass filters the images, and thresholds them (a sketch of this stage follows below). The correlation and trajectory calculation stage correlates objects between the two images, calculates the objects' positions, correlates objects from the previous pair of images with the current pair, and updates the trajectories; the data is then compressed and stored.

[Figure 5: Data Acquisition Mode Software Flowchart]
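The preprocessing stage of this flowchart can be illustrated with a short sketch. This is not the project's actual code: the frame layout, the floating-point average (the real system uses fixed-point arithmetic, see section 8.1), and the parameter values kAlpha and kThreshold are illustrative assumptions, and the Gaussian low-pass step is shown separately in section 6.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Hypothetical frame type: 8-bit grayscale pixels in row-major order.
using Frame = std::vector<std::uint8_t>;

constexpr double kAlpha = 0.05;    // illustrative running-average weight
constexpr int    kThreshold = 30;  // illustrative motion threshold (gray levels)

// One preprocessing pass: fold the current frame into the per-pixel
// running average of the background, subtract the average from the
// frame, and threshold the difference so that moving objects become
// white (255) on a black (0) background. The caller seeds `average`
// with the first frame. The low-pass filter step of the flowchart is
// omitted here; see the Gaussian sketch in section 6.
Frame preprocess(const Frame& frame, std::vector<double>& average) {
    Frame binary(frame.size());
    for (std::size_t i = 0; i < frame.size(); ++i) {
        average[i] += kAlpha * (frame[i] - average[i]);
        int diff = std::abs(static_cast<int>(frame[i]) -
                            static_cast<int>(average[i]));
        binary[i] = (diff > kThreshold) ? 255 : 0;
    }
    return binary;
}
```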

Data Display Mode Flowchart: the user is prompted for the day and time of the trajectory data, which is then read in and decompressed. If the user chooses to view the trajectories with respect to time, they are prompted for start and end times and the 3-D trajectories are displayed with respect to time; otherwise all 3-D trajectories are displayed at once. In either case the user may change the 3-D viewing angle, and the mode exits when no further display is requested.

[Figure 6: Data Display Mode Software Flowchart]

6. Design Equations

This section describes the equations used in the project: the equations that calculate the 3-D location of an object given its location in two camera images, as well as the equations used to correct lens distortion and remove high-frequency noise from the captured images.

The equations to calculate the 3-D position of an object in Cartesian coordinates from the locations of the object in two camera images are shown below. This technique is known as stereoscopic imaging. Figure 7 shows the setup of the cameras and the coordinate system required for these equations to be valid. The two cameras are placed so that there is an upper and a lower camera, with the lower camera centered on the x-y axes. The x-axis is assumed to be vertical, the z-axis is horizontal and parallel to the line of sight from the center of the cameras, and the y-axis is horizontal and perpendicular to that line of sight. The positive z-axis points toward the objects to be viewed.

X = X_D * d / (X_D - X_U)    (Eq. 1)

Y = Y_D * d / (X_D - X_U)    (Eq. 2)

Z = d * f / (X_D - X_U)      (Eq. 3)

[Figure 7: Setup of system to ensure validity of design equations, showing the sign conventions for X_U, X_D, and Y_D]

In the above equations, d is the distance between the centers of the cameras, which is set at 12 cm, and f is the focal length of the cameras, which is 8 mm according to the cameras' data sheet. X_D is the distance along the x-axis between the object in the lower camera's image and the center of that image; the distance is positive if the object is above the image center and negative if it is below. X_U is the same quantity for the upper camera. The y-axis distance from the object to the image center is the same for both cameras, so Y_D = Y_U. The y-value is positive if the object is to the right of the image center (facing the back of the camera) and negative if it is to the left. The sign conventions for X_D, X_U, and Y_D are shown in Figure 7 above. Equations 1 and 3 were found at the web site listed in the References; the equation for Y was derived using Eq. 3 and the equation for a line in the y-z plane from the origin to an object in 3-D coordinates.

A camera lens exhibits significant radial distortion. According to page 6-4 of the Open Source Computer Vision Library Reference Manual, the distortion is characterized by the following equations:

x̂ = x + x[k_1 r^2 + k_2 r^4] + [2 p_1 x y + p_2 (r^2 + 2x^2)]    (Eq. 4)

ŷ = y + y[k_1 r^2 + k_2 r^4] + [p_1 (r^2 + 2y^2) + 2 p_2 x y]    (Eq. 5)

where r^2 = x^2 + y^2; x̂, ŷ are the coordinates of the corrected point; x, y are the coordinates of the distorted point; and the parameters k_1, k_2, p_1, p_2 characterize the center and magnitude of the distortion. These coefficients are calculated through a calibration routine in which a known pattern is viewed in several orientations.

The captured images are filtered using a low-pass Gaussian filter to remove small objects and high-frequency noise. The Gaussian filter is used because, as equations 6 and 7 show, it is linearly separable, so it can be implemented efficiently as a series of one-dimensional convolutions rather than a single two-dimensional convolution (Digital Image Processing, Rafael Gonzalez and Richard Woods):

H(u, v) = (1/(2πσ^2)) e^(-(u^2 + v^2)/(2σ^2)) = [1/(√(2π) σ)] e^(-u^2/(2σ^2)) * [1/(√(2π) σ)] e^(-v^2/(2σ^2))    (Eq. 6)

G(u, v) = H(u, v) F(u, v)  <=>  g(x, y) = h(x, y) * f(x, y) = h(x) * h(y) * f(x, y)    (Eq. 7)

where * in Eq. 7 denotes convolution.
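As a concrete illustration of Eqs. 1 through 3, the following minimal C++ sketch triangulates one matched object. It assumes X_D, X_U, and Y_D have already been converted from pixel offsets to metric distances on the image plane; that conversion, and the function and type names, are assumptions of this sketch, not the project's code.

```cpp
#include <optional>

struct Point3D { double x, y, z; };

// Stereoscopic triangulation (Eqs. 1-3). d is the camera separation and
// f the focal length, both in meters (0.12 m and 0.008 m in this
// project). xD, xU, and yD are image-plane offsets of the object from
// the image centers of the lower (D) and upper (U) cameras, assumed
// already converted from pixels to meters on the sensor.
std::optional<Point3D> triangulate(double d, double f,
                                   double xD, double xU, double yD) {
    double disparity = xD - xU;
    if (disparity <= 0.0) {
        // Z would be zero, negative, or infinite -- no physically valid
        // position (see the simulation discussion in section 7).
        return std::nullopt;
    }
    return Point3D{xD * d / disparity,   // Eq. 1
                   yD * d / disparity,   // Eq. 2
                   d * f / disparity};   // Eq. 3
}
```

For example, with d = 0.12 m and f = 0.008 m, a disparity of X_D - X_U = 0.001 m on the sensor places the object at Z = 0.96 m.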
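The distortion model of Eqs. 4 and 5 translates directly into code. The sketch below is a plain transcription of those equations, assuming centered image coordinates; the coefficient values would come from the calibration routine run in Setup mode.

```cpp
struct Point2D { double x, y; };

// Lens distortion correction (Eqs. 4-5). The coefficients k1, k2, p1,
// and p2 are the values produced by the calibration routine; pt holds
// the coordinates of the distorted point and the return value is the
// corrected point.
Point2D undistort(Point2D pt, double k1, double k2, double p1, double p2) {
    double x = pt.x, y = pt.y;
    double r2 = x * x + y * y;                 // r^2 = x^2 + y^2
    double radial = k1 * r2 + k2 * r2 * r2;    // k1*r^2 + k2*r^4
    return Point2D{
        x + x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x),  // Eq. 4
        y + y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y   // Eq. 5
    };
}
```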
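Equations 6 and 7 justify implementing the Gaussian low-pass filter as two one-dimensional convolutions. The sketch below shows that separable form; the image layout, the border clamping, and the 3σ kernel radius are conventional choices assumed here, not taken from the project's code.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

using Image = std::vector<double>;  // row-major, width * height pixels

// Build a normalized 1-D Gaussian kernel of the given radius.
std::vector<double> gaussianKernel(double sigma, int radius) {
    std::vector<double> k(2 * radius + 1);
    double sum = 0.0;
    for (int i = -radius; i <= radius; ++i) {
        k[i + radius] = std::exp(-(i * i) / (2.0 * sigma * sigma));
        sum += k[i + radius];
    }
    for (double& v : k) v /= sum;   // normalize so the weights sum to 1
    return k;
}

// Separable Gaussian blur (Eqs. 6-7): a horizontal 1-D pass followed by
// a vertical 1-D pass, equivalent to one 2-D Gaussian convolution.
// Borders are handled by clamping coordinates to the image edge.
Image gaussianBlur(const Image& in, int width, int height, double sigma) {
    int radius = static_cast<int>(std::ceil(3.0 * sigma));
    std::vector<double> k = gaussianKernel(sigma, radius);
    Image tmp(in.size()), out(in.size());
    for (int y = 0; y < height; ++y)            // horizontal pass
        for (int x = 0; x < width; ++x) {
            double acc = 0.0;
            for (int i = -radius; i <= radius; ++i) {
                int xi = std::clamp(x + i, 0, width - 1);
                acc += k[i + radius] * in[y * width + xi];
            }
            tmp[y * width + x] = acc;
        }
    for (int y = 0; y < height; ++y)            // vertical pass
        for (int x = 0; x < width; ++x) {
            double acc = 0.0;
            for (int i = -radius; i <= radius; ++i) {
                int yi = std::clamp(y + i, 0, height - 1);
                acc += k[i + radius] * tmp[yi * width + x];
            }
            out[y * width + x] = acc;
        }
    return out;
}
```

For a kernel of radius r, the separable form costs about 2(2r+1) multiplies per pixel instead of (2r+1)^2 for the equivalent two-dimensional convolution, which is why the report favors it.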

7. Computer Simulations

This section describes a computer simulation performed to verify that the location of an object in two cameras can be used to calculate the object's 3-D position. The simulation graphically shows two camera images, as can be seen in Figure 8, with highlighted pixels corresponding to the locations of a moving object; the unit of measurement in the figure is pixels. The locations of the object in the two images are used, along with equations 1 through 3 above, to calculate the 3-D positions of the object as it moves along its path. The simulation verified that the 3-D position of an object can be calculated from its location in two cameras. It also showed that an object cannot be in the top half of the upper image and the lower half of the bottom image: with X_U positive and X_D negative, the denominator X_D - X_U in Eq. 3 is negative, giving a negative z value, which is physically impossible. Thus, when correlating an object in the upper camera to an object in the lower camera as described below, the first test is to confirm that the object is not in the top half of the upper image and the lower half of the bottom image.

[Figure 8: MATLAB program output that shows the location of an object in 3-D given its location in two camera images]

8. Software

8.1 Image Preprocessing

The image capture and preprocessing code is written in C++. The goal of the image preprocessing is to efficiently capture images and convert them to binary images in which the moving objects in the scene are set to white and the remainder of the scene is set to black. The image processing is done in a thread separate from the GUI so the user can interact with the program in parallel with the image processing. In addition to a class framework providing the user interface, the following objects are used.

CImage: Encapsulates image data and many of the OpenCV library functions. CImage also adds functionality to perform a variety of operations on the image.

matrix: A template class that allows a one-dimensional array of data to be accessed as a two-dimensional array. In addition to the array data, the matrix class stores the height and width of the matrix. It is implemented as a template so it can be used with any datatype, including 8-bit image data and 32-bit UFixedPoint data.

RunningAverage: Encapsulates the data and functions required to keep a running average of every pixel in the images captured from the cameras. It uses UFixedPoint data (see below) to track the fractional component of the average without using floating-point math.

StereoViewApp: The main application object. Its ProcessFrame member function is invoked as a separate thread to get input from the cameras and apply each of the image processing functions.

UFixedPoint: A specialized fixed-point class. The restrictions on this datatype are that it must always be unsigned and always less than 256. It provides 8 bits of integer data and 24 bits of fractional data, which allows floating-point-style math without resorting to slow floating-point instructions. The 32-bit datatype is efficient because it is the word size of the Pentium and can be read directly from the cache or main memory without the additional shifting required for 8- and 16-bit datatypes. A sketch of this idea appears at the end of this subsection.

VideoSource: An abstract base class used to define the interface to a video source. The class DTCameras is derived from VideoSource to access the Data Translation capture card. Other classes may be derived from VideoSource to process AVI or other movie files, or to interface with other cameras; this allows the program to be adapted to a number of situations, including offline processing and the use of hardware not supported by the Data Translation proprietary API.

Once images have been processed into the binary image of moving objects, they are saved to disk where MATLAB can access them.
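The following is a minimal sketch of the 8.24 fixed-point idea behind UFixedPoint and RunningAverage. The class name, interface, and power-of-two update rule are illustrative assumptions, not the project's actual API.

```cpp
#include <cstdint>

// Minimal sketch of an 8.24 fixed-point value: 8 integer bits and 24
// fractional bits in one unsigned 32-bit word, so values lie in
// [0, 256), matching the restrictions described for UFixedPoint.
class FixedPoint8_24 {
public:
    explicit FixedPoint8_24(std::uint8_t v = 0)
        : raw_(static_cast<std::uint32_t>(v) << 24) {}

    // Running-average update: avg += (sample - avg) / 2^k, done entirely
    // with integer shifts and adds -- the kind of arithmetic the
    // RunningAverage class needs without floating-point instructions.
    void accumulate(std::uint8_t sample, unsigned k) {
        std::uint32_t s = static_cast<std::uint32_t>(sample) << 24;
        if (s >= raw_) raw_ += (s - raw_) >> k;
        else           raw_ -= (raw_ - s) >> k;
    }

    std::uint8_t integerPart() const {
        return static_cast<std::uint8_t>(raw_ >> 24);
    }

private:
    std::uint32_t raw_;  // 8 bits of integer, 24 bits of fraction
};
```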

8.2 Trajectory Calculations and Display

Tests were performed as the trajectory calculation and display software was being written to ensure that it produced the expected results. One test measured the accuracy of the 3-D coordinates produced for an object using equations 1 through 3 above. A tennis ball was placed within the field of view of the cameras, and its position relative to the cameras was measured in the coordinate system shown in Figure 7. Images of the tennis ball were then captured, and the program correl, described below, was run in MATLAB to calculate the ball's 3-D coordinates. The error was found to be less than 5% for all coordinates, which is an acceptable error range.

When this test was first performed, the position estimates were incorrect by more than a meter. The reason was that the boresights of the cameras were not perpendicular in the x-y plane. Once the boresights were set perpendicular in the x-y plane, the position estimates were accurate to within 5% of the actual object location.

As shown in the Data Acquisition Mode Flowchart in Figure 5, the first correlation function matches objects in the two cameras for a given frame. When the software implementing this function, called correl, was written, images of a tennis ball swinging as a pendulum were passed to it to produce the ball's 3-D positions. Figure 9 below shows the plotted output of the first correlation function for these input images; the plotted points trace the shape of a pendulum swing.

[Figure 9: Output of first correlation function, showing the 3-D locations of a tennis ball as it swings in a pendulum motion]

In the function correl, the 3-D location of an object is calculated and displayed for each frame, but there is no correlation between an object in one frame and an object in the next. Thus, if more than one object is being tracked, the trajectories of the multiple objects cannot be distinguished by inspecting the output data. The separate trajectories can be seen by plotting the output data graphically, but the data is not separated into distinct trajectories until the second correlation function is performed. The second correlation function correlates objects between time frames so that if multiple objects are tracked, their trajectories are distinguished from one another. After the software implementing this function, called correl_2, was written, it was used to calculate the trajectory of a single tennis ball in pendulum motion. The output of this simulation is shown in Figure 10; since all the points are known to belong to the same trajectory, the points are connected in the plot.

After the trajectory of one tennis ball was calculated, images were captured of two tennis balls in pendulum motion, and their trajectories were calculated and displayed. The plot of the two trajectories is shown in Figure 11, which demonstrates that multiple objects can be tracked in simulation.

[Figure 10: Output of second correlation function, showing the trajectory of one tennis ball in a pendulum motion]

[Figure 11: Output of second correlation function, showing the trajectories of two tennis balls in a pendulum motion]

Descriptions of the programs used to calculate and display the trajectories of objects, given a set of images of the objects from two cameras, follow.

Main

The program main.m, written in MATLAB, reads in images that have been captured from the cameras and calls the first and second correlation functions for each pair of images read in, so that the trajectories of the objects shown in the images are calculated. After all the images have been processed, the trajectory data and the time information are compressed and stored on the hard drive. Separate files store the x, y, and z coordinates as well as the time information, and the files are named according to the time at which MATLAB began processing the images. At present no timestamp information is passed to the MATLAB programs, so the stored timestamps are relative to the frame numbers (the first frame processed is at time 1, the second at time 2, and so on). If timestamp information were passed to the MATLAB program, it would be used instead of the relative timing format, and it would also be used in naming the files that contain the trajectory information.

Correl

The two images read in by main.m are passed to the MATLAB function correl, which calculates the number of objects in the images along with the centroid and area of each object. An object is correlated between the two images if it is not in the top half of the upper image and the lower half of the bottom image, as described above, and if its y-value, Y_D, is approximately the same in both images. The y-values should match within a margin of error because, as discussed in section 6, an object must be at the same y-axis location in both cameras. If there are multiple objects with the same y-value, the objects with the most similar areas are correlated. After the first correlation is performed, the 3-D position of each correlated object is passed back to main.m.

Correl_2

The MATLAB function correl_2 correlates objects between time frames based on an expected distance traveled. The expected speed of the objects, which is a constant, is multiplied by the time between the last frame and the current frame to get the expected distance traveled in one time frame. For each object correlated by the correl function, the distance traveled between time frames is calculated using the object's 3-D coordinates and compared to the expected distance. If the object's distance traveled equals the expected distance within a margin of error, the object has been correlated between frames. Since the objects tracked in simulation are tennis balls in a pendulum swing, their speed can deviate significantly from the average, so the error term used for correlating objects between frames is large. When birds are tracked, their speed will not deviate as much from the average, and the error term can be smaller. The information returned from correl_2 is the trajectories of all objects tracked. A sketch of both correlation rules appears below.
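The two correlation rules can be restated as a compact C++ sketch. The project implements them in MATLAB; the names, tolerances, and data structures here are illustrative assumptions, not the project's code.

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// A blob found in one preprocessed image: centered image-plane
// coordinates (positive x above center, positive y right of center, as
// in Figure 7) plus pixel area.
struct Blob { double x, y, area; };

struct Position3D { double x, y, z; };  // as in the triangulation sketch

// Cross-camera test from correl: reject pairs that would give a
// negative Z (object in the top half of the upper image and the bottom
// half of the lower image), require the y coordinates to agree within a
// tolerance, and among the remaining candidates pick the most similar
// area. Returns the index of the matched upper-camera blob, or -1.
int matchAcrossCameras(const Blob& lower, const std::vector<Blob>& upper,
                       double yTolerance) {
    int best = -1;
    double bestAreaDiff = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < upper.size(); ++i) {
        if (upper[i].x > 0.0 && lower.x < 0.0) continue;  // Z would be < 0
        if (std::abs(upper[i].y - lower.y) > yTolerance) continue;
        double areaDiff = std::abs(upper[i].area - lower.area);
        if (areaDiff < bestAreaDiff) {
            bestAreaDiff = areaDiff;
            best = static_cast<int>(i);
        }
    }
    return best;
}

// Frame-to-frame gate from correl_2: an object continues a trajectory
// if its 3-D displacement since the previous frame is close to
// expectedSpeed * dt, within the margin discussed in the text.
bool continuesTrajectory(const Position3D& prev, const Position3D& cur,
                         double expectedSpeed, double dt, double margin) {
    double dx = cur.x - prev.x, dy = cur.y - prev.y, dz = cur.z - prev.z;
    double traveled = std::sqrt(dx * dx + dy * dy + dz * dz);
    return std::abs(traveled - expectedSpeed * dt) <= margin;
}
```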
Disp_traj

The MATLAB function disp_traj first prompts the user for the day and time at which the trajectories they would like to view were processed in MATLAB. This information is used to load the compressed trajectory information and decompress it. If the user wishes to view the trajectories with respect to time, they are asked to enter the desired start and end times. The trajectories are first displayed on the screen all at once, and the user may change the viewing angle; the trajectories are then displayed with respect to time, so that as time progresses, the location of each object at every time interval can be seen. After the data has been displayed, the user may change the viewing angle again and view the trajectories from the new angle. If the user does not wish to view the trajectories with respect to time, the trajectories are displayed all at once, and the user can change the viewing angle until they end the program.

9. Analysis of Results

Good results were obtained in the image preprocessing: the static background was successfully removed from the images, as was extraneous noise. Preprocessing runs in real time (30 frames per second) on the current PC for images up to 320x240; this does not account for the image correlation and other MATLAB code. The goal of the system was to process full-resolution 640x480 images in real time, but this proved impractical, even with the hardware-dependent optimizations available on the Pentium 4. The best average frame rate obtained at full resolution was 10 frames per second, where one frame consists of two images, one from each camera. While it may be possible to improve this frame rate slightly, it is unlikely that 30 frames per second can be reached with the current hardware.

The processing done in MATLAB allows multiple objects to be tracked. It was shown in simulation that two objects can be tracked and their trajectories calculated with less than 5% position error at any moment in time. The graphical display of the trajectory data allows the trajectories to be observed easily so that conclusions can be drawn about them. The compressed trajectory data for 50 frames of camera images occupies approximately 4 KB, so this insignificant amount of storage allows many trajectories to be stored.

10. Data Sheet

Resolution in space (dependent on the distance of the object from the cameras; the formula for the resolution at a given distance is shown for each axis):
- X resolution: 1.2*Z m/pixel
- Y resolution: 1.03*Z m/pixel
- Z resolution: .00619*d m/pixel - 400d m/pixel (dependent on X)

Frame rate:
- 30 fps at 320x240
- 10 fps at 640x480

Total volume of space observed: 2867d^3 (m^3)

11. Suggestions for Future Work

Although the project was successful in tracking multiple objects, a number of enhancements are possible. Some of these improvements are:

- Enhance the graphical user interface to allow user-friendly operation.
- Integrate the image processing and trajectory calculation code into one application.
- Convert the MATLAB code to C++ to attempt tracking in real time.
- Create a camera calibration routine so the cameras do not have to be centered in the x and y directions.
- Generate a confidence indicator for the calculated trajectories.
- Add support for color cameras.
- Add support for offline processing.

12. Standards

There are no overarching standards that apply to bird tracking, but several standards are used to interface the cameras to the PC.

NTSC: The cameras selected produce NTSC-compatible signals, the standard in North America. The frame grabber converts NTSC inputs to digital images.

DirectX: DirectX is a de facto standard for Microsoft Windows that includes a programming interface to video capture devices such as frame grabbers. DirectX was chosen over proprietary APIs to maintain the maximum amount of hardware independence.

13. Patents

- 6,366,691: Stereoscopic image processing apparatus and method
- 6,028,954: Method and apparatus for three-dimensional position measurement
- 6,035,067: Apparatus for tracking objects in video sequences and methods therefor
- 5,812,269: Triangulation-based 3-D imaging and processing method and system

14. References

- Pinhole camera model, image processing reference.

- Equations relating focal length to zoom.
- Stereoscopic imaging equations.
- Light levels for various time-of-day and weather conditions.
- Estimating position when synchronized cameras are not available.
- Using line lock cameras.
- Equation relating focal length to target object size, distance, and CCD width.
- Measurements for various CCD sizes.
