Tracking Under Low-light Conditions Using Background Subtraction

Matthew Bennink
Clemson University
Clemson, South Carolina

Abstract

A low-light tracking system was developed using background subtraction. Results are given for a full uniform light source, a single light source, and no light source. They show that although the system performs well under full uniform light, it is easily confused by shadows under a single light source, and it is almost useless with no light at all, even though low-light cameras are used. The results are discussed along with the methods used to obtain them, and some possible solutions are presented for improving the tracking system under low-light conditions.

1 Introduction

Tracking has a variety of applications. For example, a security team may want to track suspicious persons, or a manufacturing plant may want to follow a product through the assembly process. One common method in video tracking is background subtraction. Abbott and Williams used background subtraction with connected components analysis to segment video [1], Davis and Sharma used background subtraction with thermal cameras for tracking [2], and Hoover used it with regular video feeds, also for tracking [3]. In this paper, we follow an algorithm very similar to Hoover's, but we use low-light cameras in place of regular cameras. In doing so, we hope to track objects with little or no light present. These cameras carry a small number of LEDs around the lens that provide ambient illumination without producing any light visible to the naked eye.

2 Methods

Before any code is written, it is necessary to set up the tracking area and the cameras. In our case, we used masking tape to mark out a rectangle approximately 4 m long by 3 m wide. The cameras were positioned above the tracking area, facing its center. Once the initial setup is complete, the cameras are calibrated. Using the calibration matrices and background subtraction, pixels are highlighted wherever the tracking system believes an object exists. We first discuss camera calibration and background subtraction, then present the algorithm.
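The paper gives no capture code; as a minimal sketch of the capture side (in Python with OpenCV, our choice of illustration, not the authors' implementation), the following grabs one greyscale frame per camera. The device indices are hypothetical.

import cv2

CAMERA_IDS = [0, 1]  # hypothetical device indices for the low-light cameras
captures = [cv2.VideoCapture(cam_id) for cam_id in CAMERA_IDS]

def grab_grey_frames():
    """Return one greyscale frame per camera (None where a grab fails)."""
    frames = []
    for cap in captures:
        ok, frame = cap.read()
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if ok else None)
    return frames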

2.1 Camera Calibration

Camera calibration is necessary to map image coordinates to real-world coordinates. A brief overview of the calibration matrices is presented first, followed by a discussion of the calibration tool we used. Calibration requires two sets of information: the intrinsic values specific to the camera, and the extrinsic values dependent on the world geometry. The intrinsic values include the focal length of the camera, the aspect ratio, the principal point, and the skew. Rotation and translation make up the extrinsic values. The reduced equation mapping world coordinates to image coordinates is given below, where f is the focal length and (u_0, v_0) is the principal point. We assume that the skew is zero and the aspect ratio is 1:1.

\[
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
=
\begin{bmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\left(
\begin{bmatrix} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
+
\begin{bmatrix} T_x \\ T_y \\ T_z \end{bmatrix}
\right)
\]

Camera calibration is not trivial, but several tools are available for it. We chose a third-party Matlab toolbox [4]. The toolbox, developed by Jean-Yves Bouguet, computes both the intrinsic and extrinsic values of the camera. Calibration requires a calibration image, usually a black-and-white chessboard of some sort. We constructed a 3 x 3 chessboard using black foam board and white 11" x 8.5" printer paper. Images were captured with the board tilted at various angles, and the calibration software uses these images to produce the intrinsic values. The board was then placed at the origin of our tracking area. The origin may be placed anywhere, but for ease of computation we chose a corner, allowing only positive world coordinates. A single image was captured and used to determine the extrinsic values, rotation and translation. Since rotation and translation are extrinsic, these matrices must be updated periodically: they change as the room is used, for example when a camera is shifted slightly by accident.
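To make the mapping concrete, the sketch below applies the equation above in Python with NumPy. It is our illustration, not code from the paper; f, (u0, v0), R, and T are assumed to come from the calibration toolbox, and the final division removes the homogeneous scale factor implicit in the equation.

import numpy as np

def project(world_point, f, u0, v0, R, T):
    """Map a world point (X, Y, Z) to image coordinates (x, y).

    R is a 3x3 rotation matrix and T a length-3 translation vector.
    """
    K = np.array([[f,   0.0, u0],
                  [0.0, f,   v0],
                  [0.0, 0.0, 1.0]])  # intrinsics: zero skew, 1:1 aspect ratio
    camera_point = R @ np.asarray(world_point, dtype=float) + T  # extrinsics
    x, y, w = K @ camera_point       # homogeneous image coordinates
    return x / w, y / w              # divide out the scale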

2.2 Background Subtraction

Background subtraction, by contrast, is straightforward. With grey-level images, a difference map is produced by computing the absolute difference between corresponding pixels of the current image and the background image. This difference image is then thresholded to remove any difference values below some fixed threshold. Background subtraction is only effective when the foreground objects differ from the background; black objects tracked on a black surface, for example, will not show up in a difference image because their grey-level values are too similar. To produce good results, it is recommended that the images be pre-processed to remove noise and to stretch the range of grey-level intensities.

2.3 Algorithm

With that background in place, the algorithm is as follows. First, we calibrate the cameras. Second, we create a lookup table mapping image coordinates to world coordinates, which speeds up the computation tremendously. Background images of the empty tracking area are stored, and mask images are produced so that only the tracking area itself is tracked. We then loop over time. The occupancy map pixels are set to 1, indicating that none of the floor can be seen. Then, for each camera, a difference image is computed between the current image and the background image. Wherever the difference is less than some threshold, the floor can be seen in that area, and a 0 is placed in the occupancy map. After looping through all the cameras, the occupancy map is displayed: a value of 0 indicates that at least one camera can see the floor, while a value of 1 indicates that no camera can see the floor.

Pseudocode:

Calibrate the cameras
Create a lookup table of image coordinates to world coordinates
Capture background images of empty tracking area
Create mask image (1 is trackable, 0 is untrackable)
Loop over time
    Set Occupancy Map to 1 for all pixels
    For each camera
        Compute difference of current image with background image
        If the difference is below the desired threshold
            Set Occupancy Map to 0 at that location
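The sketch below renders this pseudocode in Python with OpenCV and NumPy. It is our illustration rather than the original implementation: the backgrounds, masks, current frames, and per-camera lookup tables (row and column indices into the occupancy map, as described above) are assumed to have been built in the earlier steps, and THRESHOLD is a tuning parameter. The absolute difference plus threshold is exactly the background subtraction of Section 2.2.

import cv2
import numpy as np

THRESHOLD = 25  # grey-level difference below which the floor counts as visible

def occupancy_map(frames, backgrounds, masks, lookup, map_shape):
    """Fuse per-camera difference images into one floor occupancy map.

    lookup[c] = (rows, cols): precomputed arrays, the same shape as camera
    c's image, giving each pixel's cell in the occupancy map (Section 2.3).
    """
    occ = np.ones(map_shape, dtype=np.uint8)            # 1 = floor not seen
    for c, frame in enumerate(frames):
        diff = cv2.absdiff(frame, backgrounds[c])       # background subtraction
        visible = (diff < THRESHOLD) & (masks[c] == 1)  # floor visible to camera c
        rows, cols = lookup[c]
        occ[rows[visible], cols[visible]] = 0           # some camera sees the floor
    return occ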

3 Experimental Results

The results varied across the test cases. With full uniform light, we achieved fairly good results. With a single light source, the results were poor: the shadow of the object created just as much intensity change as the object itself. With no light, the noise from the cameras combined with the reduced overall intensity made it nearly impossible to track any object, since system noise could not be distinguished from an object within the tracking area. Images are given below to show how well an object was tracked.

3.1 Full Uniform Light Source

[Figures: Background Images, Tracking Images, Occupancy Map]

3.2 Single Light Source

[Figures: Background Images, Tracking Images, Occupancy Map]

3.3 No Light Source

[Figures: Background Images, Tracking Images, Occupancy Map]

4 Conclusion

As our results show, background subtraction is not a viable option for tracking under low-light conditions, even with low-light cameras. However, simply changing sensors could dramatically improve the results. Using LADAR, for example, one could apply a very similar algorithm to track a person within the room, and infrared cameras produce a much better intensity histogram. Some image processing could also be applied before the images are subtracted, such as smoothing and filtering. There are still many possibilities for tracking in the dark; however, we can now confidently say that background subtraction with low-light cameras is not the optimal solution.

References

[1] R. G. Abbott and L. R. Williams. Multiple target tracking with lazy background subtraction and connected components analysis. 3rd Computer Science Univ. New Mexico Student Conference, 2007.

[2] J. W. Davis and V. Sharma. Background-subtraction in thermal imagery using contour saliency. International Journal of Computer Vision, pages 161-181, 2007.

[3] A. Hoover and B. D. Olsen. Real-time occupancy map from multiple video streams. Proc. IEEE Intl. Conf. Robotics and Automation, pages 2261-2266, 1999.

[4] J.-Y. Bouguet. Camera Calibration Toolbox for Matlab. http://www.vision.caltech.edu/bouguetj/calib_doc/, accessed 10 December 2007.