Auto-focusing Technique in a Projector-Camera System

2008 10th Intl. Conf. on Control, Automation, Robotics and Vision (ICARCV 2008), Hanoi, Vietnam, 17-20 December 2008

Lam Bui Quang, Daesik Kim and Sukhan Lee
School of Information and Communication Engineering, Intelligent Systems Research Center, Sungkyunkwan University, Suwon, Korea
E-mail: {quanglam, daesik80, lsh}@ece.skku.ac.kr

Abstract: Projector-camera systems have been researched for years and have become increasingly popular in 3D vision and many other applications. However, with a fixed, pre-determined position of the projector's lens, projector-based methods cannot deal with scenes containing objects at different distances from the projector. In this paper, we present a projector auto-focusing technique based on local blur information in the camera image that overcomes this limitation. The algorithm is implemented on a projector-camera system in order to focus the projected pattern on all objects in the scene sequentially. The proposed algorithm first obtains a blur-map of the scene from the image using a robust local blur estimator, and then decides the region of interest by thresholding the obtained blur-map. Since the main light source is the projector itself, the proposed auto-focusing algorithm performs well under different lighting conditions.

Keywords: auto-focusing, projector-camera, blur estimation, blur-map.

I. INTRODUCTION

Recently, projector-camera systems, which use one or multiple projectors and cameras, have been widely used in research applications such as structured-light-based 3D reconstruction [1][2], self-calibration [3][4], ubiquitous display systems and augmented reality (AR) [5][6]. A crucial problem of projector-related applications is that the focusing range (depth of field) of the projector is short. Focal blur [7] (or out-of-focus blur) of the projector's pattern occurs on objects that are placed outside the depth of field of the projector. Even though several camera auto-focusing techniques have been proposed [8][9], relatively few methods for projector auto-focusing have been introduced. A commercially available auto-focusing projector uses an infra-red sensor to measure the distance between the scene and the projector. This approach assumes that the scene is relatively flat and perpendicular to the projector, like a wall. Its limitation is that it can only focus on the specific area the infra-red sensor measures, and it assumes that the distance between the scene and the projector is almost constant. However, in many applications where projector-camera systems are used, the scene is neither flat nor perpendicular to the projector. Consequently, additional equipment such as a camera is required to focus the projector on a specific area, since a projector alone cannot see the scene.

Figure 1. The ideal step edge f(x), the blurred edge b(x) and its two re-blurred versions b_a(x), b_b(x).

In this paper, we present a projector auto-focusing technique with low computational complexity, aiming at real-time implementation. The proposed method adopts a simple blur estimation algorithm from Hu's work [10]. The blur estimator uses an isotropic Gaussian PSF model, and the difference between digitally re-blurred versions of a captured image is used to estimate the blur radius without edge detection.
The major contributions of the paper are:
1. We propose and implement a control algorithm for the projector's lens.
2. We implement automatic selection of the region of interest (ROI), where the ROI is the object we currently want to focus on.
3. We implement focusing of the projected pattern on the ROI.

The remainder of the paper is organised as follows. In Section II, we review the blur estimation algorithm [10]. Our auto-focusing algorithm is described in Section III. A simple projector lens control algorithm is proposed in Section IV. In Section V we show the setup of the projector-camera system. Experimental results are provided in Section VI and, finally, Section VII concludes the paper.

II. LOCAL BLUR ESTIMATION

We analyse the edge in one dimension (1D). The edge f(x) is modelled as a step function with amplitude A and offset B, as shown in Figure 1:

    f(x) = { A + B,  x >= 0
           { B,      x < 0                                              (1)

The focal blur kernel is modelled by the normalised Gaussian function

    g(n, σ) = (1 / (sqrt(2π) σ)) exp(-n² / (2σ²)),   n ∈ Z              (2)

where σ is the unknown blur radius that we want to estimate and g(n, σ) is normalised. The blurred edge is the result of the convolution between the step edge and the focal blur kernel:

    b(x) = Σ_n f(x - n) g(n, σ)
         = { (A/2) (1 + Σ_{n=-x}^{x} g(n, σ)) + B,        x >= 0
           { (A/2) (1 - Σ_{n=x+1}^{-x-1} g(n, σ)) + B,    x < 0         (3)

Re-blurring the blurred edge with Gaussian focal blur kernels of radii σ_a and σ_b (σ_a < σ_b) gives two re-blurred versions b_a(x) and b_b(x), which for re-blur radii well above σ can be written approximately as

    b_a(x) = { (A/2) (1 + Σ_{n=-x}^{x} g(n, σ_a)) + B,        x >= 0
             { (A/2) (1 - Σ_{n=x+1}^{-x-1} g(n, σ_a)) + B,    x < 0     (4)

    b_b(x) = { (A/2) (1 + Σ_{n=-x}^{x} g(n, σ_b)) + B,        x >= 0
             { (A/2) (1 - Σ_{n=x+1}^{-x-1} g(n, σ_b)) + B,    x < 0     (5)

with x ∈ Z. We take the ratio r(x) of the differences between the original blurred edge and its two re-blurred versions to eliminate the dependence of the blur estimate on the offset and amplitude:

    r(x) = (b(x) - b_a(x)) / (b_a(x) - b_b(x))                          (6)

As indicated in [10], the ratio peaks at the edge positions x = -1 and x = 0, as shown in Figure 2, and the blur radius can be approximately estimated from the peak value r_max of r(x):

    σ = σ_a σ_b / ((σ_b - σ_a) r_max + σ_b),    σ_b > σ_a               (7)

Equation (7) shows that the estimated blur radius is independent of the edge offset B and amplitude A.

Figure 2. Difference ratio around the edge.
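For illustration, the estimator above can be sketched in a few lines of Python with OpenCV. This is a minimal sketch, not the implementation used in the paper: it re-blurs the captured stripe image with two Gaussian kernels and applies Eq. (6) and Eq. (7) at every pixel; the function name, the per-pixel application and the small eps guard against division by zero are assumptions of the sketch, and the resulting values are meaningful only near the stripe edges, where r(x) peaks.

    import cv2
    import numpy as np

    def estimate_blur_map(gray, sigma_a=7.0, sigma_b=27.0, eps=1e-6):
        # Blur-radius estimate via the re-blur ratio of Eq. (6)-(7).
        # gray: single-channel image of the projected stripe pattern.
        b = np.asarray(gray, dtype=np.float64)
        # Digitally re-blur the captured image with two Gaussian kernels.
        b_a = cv2.GaussianBlur(b, (0, 0), sigma_a)
        b_b = cv2.GaussianBlur(b, (0, 0), sigma_b)
        # Ratio of differences, Eq. (6); absolute values keep the ratio positive
        # on both sides of an edge, eps avoids division by zero in flat regions.
        r = np.abs(b - b_a) / (np.abs(b_a - b_b) + eps)
        # Blur radius from Eq. (7); valid where r approaches its peak value r_max,
        # i.e. near the stripe edges of the projected pattern.
        return (sigma_a * sigma_b) / ((sigma_b - sigma_a) * r + sigma_b)

For example, with σ_a = 7 and σ_b = 27 (the values used in Section VI), a peak ratio r_max = 10 measured at a stripe edge gives σ = 7·27 / (20·10 + 27) ≈ 0.83 pixels, i.e. an almost perfectly focused stripe.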

III. PROJECTOR AUTO-FOCUSING ALGORITHM

A. Obtaining the ROI

We assume that the focal length of the camera is short enough that all observed objects are located within the depth of field of the camera, and that the blur of the stripes on the objects is Gaussian. The projector projects a pattern of horizontal stripes, as shown in Figure 3. All stripes have the same width, which is decided experimentally; in our experiments the width equals 50 pixels.

Figure 3. Stripe pattern, 1024x768 pixels.

Since the depth of field of the projector is short and the objects are located at different distances in front of the projector, the stripes on them are blurred with different blur radii. By estimating the blur radii of the stripes on all objects using the method described in Section II, we obtain the blur-map of the scene. We assume that the distance between objects is large enough that the estimated blur radii of the stripes on one object differ from those on the other objects. The histogram of the blur-map then reflects the ordering of the objects by estimated blur radius: there are n maxima corresponding to n objects.

To obtain the ROI, we first segment the blur-map by thresholding [11], using thresholds computed by the searching algorithm described in Figure 4, and then apply the morphological operations dilation and erosion [12] to remove noise. After obtaining the area of each object by segmentation, we compute the mean blur value on each object, denoted μ(obj_i), and finally select the object with the minimum μ as the ROI.

To find the threshold between two maxima, we employ the following searching algorithm:
1. Smooth the histogram and dilate it with a morphological dilation to eliminate spurious local peaks; the main local maxima should not change.
2. Starting from the leftmost maximum, traverse the histogram, computing the slope within a window of a given size, until the slope becomes larger than a certain positive threshold P.
3. When the slope exceeds the threshold P, the final threshold T is the mean position of the window at that instant.

Figure 4. Threshold searching algorithm.
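To make the ROI selection concrete, the following Python/OpenCV sketch implements a simplified version of the procedure above for the two-object case of Section VI, using a single threshold returned by the valley search of Figure 4. The window size, the slope threshold P, the structuring-element size and the function names are illustrative assumptions; the paper does not specify these values.

    import cv2
    import numpy as np

    def find_threshold(hist, start_bin, window=5, slope_p=2.0):
        # Step 1: smooth the histogram, then grey-dilate it (sliding-window maximum)
        # to eliminate spurious local peaks while keeping the main maxima.
        smoothed = np.convolve(hist.astype(np.float64), np.ones(5) / 5.0, mode="same")
        padded = np.pad(smoothed, 1, mode="edge")
        dilated = np.maximum(np.maximum(padded[:-2], padded[1:-1]), padded[2:])
        # Step 2: from the leftmost maximum, walk right measuring the slope in a window.
        for i in range(start_bin, len(dilated) - window):
            slope = (dilated[i + window] - dilated[i]) / window
            if slope > slope_p:
                # Step 3: the threshold T is the mean position of the window.
                return i + window // 2
        return len(dilated) - 1  # fallback if no rising slope is found

    def select_roi(blur_map, threshold):
        # Keep the pixels whose estimated blur is below the threshold (sharper side),
        # then clean the mask with morphological dilation and erosion.
        mask = ((blur_map > 0) & (blur_map < threshold)).astype(np.uint8)
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.erode(cv2.dilate(mask, kernel), kernel)
        # Label the remaining blobs and pick the object with the minimum mean blur;
        # assumes at least one blob survives the morphological clean-up.
        num_labels, labels = cv2.connectedComponents(mask)
        means = [(blur_map[labels == k].mean(), k) for k in range(1, num_labels)]
        _, roi_label = min(means)
        return labels == roi_label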

B. Projector Auto-focusing Algorithm

The proposed algorithm proceeds as follows:
1. The projector projects the stripe pattern.
2. Capture the image I and compute the blur-map of I.
3. Decide the ROI as described in Section III-A. Let μ_0(ROI) be the mean blur value on the ROI, and choose an arbitrary initial direction for the rotation of the projector's lens.
4. Control the motor to rotate the projector's lens by one step.
5. Capture the image I and compute μ_k(ROI) in I, where the integer k > 0 denotes the k-th iteration.
6. If μ_1(ROI) > μ_0(ROI), reverse the direction of rotation and go to step 4.
7. If μ_k(ROI) > μ_{k-1}(ROI) with k >= 2 (i.e., the estimated blur radius has passed its minimum), rotate the projector's lens one step back; otherwise go to step 4.
8. Remove this object from the blur-map and go to step 3, until all objects have been focused in turn.

Figure 5. Projector auto-focusing algorithm.
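The focusing loop can be sketched as the hill-climbing routine below. It reuses estimate_blur_map from the sketch in Section II; capture_image and rotate_lens are hypothetical callables standing in for the camera grab and the one-step motor command of Section IV, and the max_steps bound is a safety limit not present in the algorithm above.

    def focus_on_roi(capture_image, rotate_lens, roi_mask,
                     sigma_a=7.0, sigma_b=27.0, max_steps=200):
        def mean_roi_blur():
            frame = capture_image()
            blur_map = estimate_blur_map(frame, sigma_a, sigma_b)  # sketch from Section II
            return float(blur_map[roi_mask].mean())

        direction = +1                       # step 3: arbitrary initial direction
        prev = mean_roi_blur()               # mu_0(ROI)
        rotate_lens(direction)               # step 4: rotate the lens one step
        cur = mean_roi_blur()                # step 5: mu_1(ROI)
        if cur > prev:                       # step 6: blur increased, wrong direction
            direction = -direction
        for _ in range(max_steps):           # safety bound, not in the paper
            prev = cur
            rotate_lens(direction)           # step 4 again
            cur = mean_roi_blur()            # step 5: mu_k(ROI), k >= 2
            if cur > prev:                   # step 7: the minimum has been passed
                rotate_lens(-direction)      # rotate one step back and stop
                break

Step 8 then amounts to masking out the focused object in the blur-map and calling the routine again for the next ROI.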

IV. PROJECTOR'S LENS CONTROL ALGORITHM

The movement of the projector's lens is controlled by a DC motor. For each step, the lens rotates by 1.4 degrees. The control circuit regulates the speed of the DC motor by pulse-width modulation (PWM) [13]. Moreover, the accuracy of the system depends largely on the accuracy of the lens movement, so the lens rotation must be synchronised with the algorithm running on the computer. The flowchart of the simple control algorithm is illustrated in Figure 6.

Figure 6. Projector's lens control algorithm.

Figure 7. Diagram of the projector-camera system (top), implemented system (bottom left) and gear module (bottom right).
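The paper does not specify the motor interface itself, so the following is only a hypothetical sketch of one synchronised lens step: the routine drives the PWM output until the encoder reports the pulse count corresponding to one 1.4-degree rotation of the lens. The callables set_direction, set_pwm_duty and read_encoder, the duty-cycle value and the counts_per_step calibration are all assumptions.

    def step_lens(set_direction, set_pwm_duty, read_encoder,
                  direction, counts_per_step, duty=0.6):
        # Hypothetical one-step routine: direction is +1 or -1; counts_per_step is the
        # encoder pulse count calibrated to one 1.4-degree lens rotation (Section V
        # describes the 90/134-tooth gear pair and the 256-slot encoder used here).
        start = read_encoder()
        set_direction(direction)
        set_pwm_duty(duty)                              # run the motor at a fixed duty cycle
        while abs(read_encoder() - start) < counts_per_step:
            pass                                        # wait on the encoder feedback
        set_pwm_duty(0.0)                               # stop the motor: step complete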

V. SETUP OF THE PROJECTOR-CAMERA SYSTEM

The projector-camera system performs the light projection, the rotation of the projector's lens and the image acquisition. It consists of three main parts: a camera, a projector and a control part. Figure 7 shows the schematic of the projector-camera system and the actual implemented prototype. A Flea CCD camera with 1024x768 resolution and an Optoma EP 729 projector are used in our research; the projector resolution is also set to 1024x768. A 90-tooth gear is mounted on the projector's lens in order to transfer the rotation of the DC motor to the lens. The DC motor has a 134-tooth cogwheel on its shaft and an optical encoder with 256 slots. The cogwheel meshes with the gear on the projector's lens, and the gear ratio is 1:1.489. The control circuit regulates the motor speed by PWM, receives the feedback signal from the encoder and communicates with the computer over a USB interface.

VI. EXPERIMENTAL RESULTS

In this section we discuss the experimental results of our algorithm. We performed the experiment with two objects, a statue and a board, placed at different distances from the projector. In the blur estimation procedure, the blur radii of the re-blurring kernels are decided experimentally; in our experiment, σ_a = 7 and σ_b = 27. The estimated blur radius is computed using Equation (7). Figure 8 shows the image at the initial position of the projector's lens. After applying the blur estimation algorithm, we obtain the blur-map shown in Figure 9, where lighter areas indicate a larger blur radius and darker areas a smaller one. The histogram of the blur-map is illustrated in Figure 10; the threshold for segmenting the blur-map is found by the searching algorithm described in Figure 4, as shown in Figure 11. After thresholding the blur-map based on the histogram, the statue is selected as the ROI, as illustrated in Figure 12. The auto-focusing algorithm first focuses the stripe pattern on the statue, as shown in Figure 13. Then, when the board is selected as the ROI (Figure 14), the algorithm changes focus to the board (Figure 15). Figure 16 shows the evolution of the estimated blur radius while the algorithm is running: when the estimated blur radius reaches its minimum, the stripe pattern is well focused on the current object.

Figure 8. Initial position of the projector's lens.
Figure 9. Blur-map of the scene.
Figure 10. Histogram of the blur-map (with the zero value, which is noise, removed).
Figure 11. Smoothed and dilated histogram, and the decided threshold.
Figure 12. Segmented blur-map; the statue's area is obtained (white is 1, black is 0).
Figure 13. The stripes are focused on the statue.
Figure 14. Segmented blur-map; the board's area is obtained (white is 1, black is 0).
Figure 15. The stripes are focused on the board.
Figure 16. Evolution of the estimated blur radius during the focusing process.

VII. CONCLUSION

In this paper, we have presented a projector auto-focusing algorithm for projector-camera systems, which are widely used in 3D vision and many other applications. The proposed algorithm is based on a robust blur estimator and histogram processing. All the objects in the scene are focused in turn. However, when the stripes are highly blurred, the system cannot work correctly due to the limitations of the local blur estimator. This drawback opens a new issue for our research in the near future.

ACKNOWLEDGMENT

This research was performed for the Intelligent Robotics Development Program, one of the 21st Century Frontier R&D Programs funded by the Ministry of Commerce, Industry and Energy of Korea. This work was supported by the Korea Science and Engineering Foundation (KOSEF) grant funded by the Korea government (MOST) (No. R01-2006-000-11297-0). The work was also supported by the Science and Technology Program of Gyeonggi province, as well as in part by Sungkyunkwan University.

REFERENCES

[1] F. Blais, "Review of 20 years of range sensor development," Journal of Electronic Imaging, vol. 13, no. 1, pp. 231-240, 2004.
[2] J. Salvi, J. Pagès and J. Batlle, "Pattern codification strategies in structured light systems," Pattern Recognition, vol. 37, no. 4, pp. 827-849, 2004.
[3] D. Fofi, J. Salvi and E. Mouaddib, "Uncalibrated reconstruction: an adaptation to structured light vision," Pattern Recognition, vol. 36, no. 7, pp. 1631-1644, 2003.
[4] Y.F. Li and S.Y. Chen, "Automatic recalibration of an active structured light vision system," IEEE Transactions on Robotics and Automation, vol. 19, no. 2, pp. 259-268, 2003.
[5] D. Molyneaux and G. Kortuem, "Ubiquitous displays in dynamic environments: issues and opportunities," Workshop on Ubiquitous Display Environments, International Conference on Ubiquitous Computing (UbiComp), Nottingham, 2004.
[6] C. Pinhanez, "The Everywhere Displays projector: a device to create ubiquitous graphical interfaces," Lecture Notes in Computer Science, vol. 2201, pp. 315-331, 2001.
[7] J.H. Elder and S.W. Zucker, "Local scale control for edge detection and blur estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, pp. 699-716, 1998.
[8] E. Wong, "A new method for creating a depth map for camera auto focus using an all in focus picture and 2D scale space matching," IEEE International Conference on Acoustics, Speech and Signal Processing, Toulouse, 2006, vol. 3, pp. III-1184-1187.
[9] J. He, R. Zhou and Z. Hong, "Modified fast climbing search auto-focus algorithm with adaptive step size searching technique for digital camera," IEEE Transactions on Consumer Electronics, vol. 49, no. 2, pp. 257-262, 2003.
[10] H. Hu and G. de Haan, "Low cost robust blur estimator," Proceedings of the IEEE International Conference on Image Processing, Atlanta, 2006, pp. 617-620.
[11] R.C. Gonzalez and R.E. Woods, "Thresholding," in Digital Image Processing, 2nd ed., Prentice Hall, New Jersey, 2002, pp. 595-612.
[12] R.C. Gonzalez and R.E. Woods, "Dilation and erosion," in Digital Image Processing, 2nd ed., Prentice Hall, New Jersey, 2002, pp. 523-527.
[13] P. Bowler, "Pulse width modulation for induction motor drives," Proceedings of the First IEEE International Conference on Devices, Circuits and Systems, Caracas, 1995, pp. 103-112.