Multichannel Camera Calibration


Wei Li and Julie Klein
Institute of Imaging and Computer Vision, RWTH Aachen University, D-52056 Aachen, Germany

ABSTRACT

For the latest computer vision applications, it becomes more and more popular to take advantage of multichannel cameras (RGB cameras, etc.) to obtain not only gray values but also color information for each pixel. The currently most common approach to multichannel camera calibration is the straightforward application of methods developed for the calibration of single channel cameras. These conventional calibration methods may perform quite poorly, producing color fringes and displacements of features, especially for high-resolution multichannel cameras. In order to suppress these undesired effects, a novel multichannel camera calibration approach is introduced and evaluated in this paper. This approach considers each single channel individually and involves different transversal chromatic aberration models. In comparison to the standard approach, the proposed approach provides more accurate calibration results in most cases and should subsequently lead to more reliable estimation results for computer vision tasks. Moreover, besides the existing transversal chromatic aberration (TCA) model, further TCA models and correction methods are introduced which are superior to the existing ones. Since the proposed approach is based on the most popular calibration routine, only minimal modifications have to be made to existing implementations to obtain the improved calibration quality.

Keywords: Camera Calibration, Multichannel Camera, Chromatic Correction, Chromatic Aberration

1. INTRODUCTION

Panorama generation, 3D reconstruction, and pose and motion estimation using stereo cameras or single cameras with structured patterns are frequently encountered computer vision tasks. The very first step of applications addressing such tasks is the calibration of the cameras used for image acquisition.
This field has been intensively investigated and a large number of publications are available on the topic. Heikkilä and Silvén [1] reviewed the distortion parameters suggested in the literature [2,3,4,5] and derived a comprehensive camera model by combining the fundamental pinhole model with radial and tangential distortion components. This resulted in a four-step calibration procedure extending the two-step method developed by Tsai [6]. Zhang [7] introduced a flexible calibration technique with a closed-form solution; it is easy to use, and expensive calibration objects are no longer required. Datta et al. [8] developed an accurate calibration algorithm based on iterative projection of images into a canonical fronto-parallel plane and refinement of control points. For single channel cameras, satisfying results can be obtained using these models and approaches.

For the latest computer vision applications, it is of increasing interest to take advantage of multichannel cameras (RGB cameras, etc.) in order to obtain not only gray values but also color information for each pixel. If the calibration methods developed for single channel cameras are applied directly to the calibration of such multichannel cameras, they may perform quite poorly due to uncorrected chromatic aberrations: color fringes appear in the image, and features are shifted from one channel to another. This is especially the case for high-resolution multichannel cameras. Concerning chromatic aberrations alone, as discussed in many papers, the undesired effects can be partly or totally removed by applying calibration-based or image analysis-based approaches. Mallon and Whelan [9] illustrated how transversal aberration can be compensated through polynomial-based color plane realignment. Significant improvement can be obtained even for charge-coupled device (CCD) cameras with a color filter array (CFA). Klein et al.
[10,11] investigated the wavelength dependency of transversal chromatic aberration in multispectral imaging. Using measurements of multiple narrowband color channels, the parameters of bivariate, polynomial-based transversal aberration models for both space and wavelength dependency were estimated. Without any calibration-based preprocessing, Chung et al. [12] introduced an image processing method eliminating both axial and transversal aberrations simultaneously for single images.

So far, most chromatic aberration corrections aim only at obtaining superior quality of color images, without consideration of other issues, for instance exact feature positions, which are also important for most computer vision applications. In this paper, a novel, non-unique camera model-based calibration approach for multichannel cameras is proposed to represent the color and the spatial information simultaneously correctly. It outperforms the standard calibration approach in most cases. As fundamentals, the standard calibration approach is briefly introduced in Sec. 2, and the problems that come along with it are illustrated and analyzed. In Sec. 3, the novel calibration approach based on different transversal chromatic aberration models is described. Evaluation results are presented in Sec. 4, and conclusions are drawn and summarized in Sec. 5. The main focus throughout this paper is on RGB cameras since they are the most widely used multichannel cameras.

(Further author information: send correspondence to Wei Li, e-mail: wei.li@lfb.rwth-aachen.de, telephone: +49 (241) 80 27864.)

2. STANDARD CALIBRATION

2.1 Approach

Most current camera calibration approaches are based on the models and methods introduced by Heikkilä and Silvén [1] and Zhang [7]. Using one unique pinhole camera model for all channels of a camera, a 3D object point with homogeneous coordinates X = [X, Y, Z, 1]^T in the camera frame is projected into a point in the ideal image plane with pixel coordinates p = [u, v, 1]^T according to

    p = K x = [ F_x   α F_x   C_x ]
              [ 0     F_y     C_y ] x                          (1)
              [ 0     0       1   ]

where K is the invertible camera intrinsic matrix and x denotes the normalized coordinates of X with x = [X/Z, Y/Z, 1]^T. F_x, F_y are the focal lengths along the two image axes, [C_x, C_y]^T denotes the principal point coordinates, and α is the skew coefficient related to the angle between the two image axes.
This pinhole camera model gives only a simplified 3D-to-2D projection and is not suitable if high accuracy of the camera calibration is required. To deal with this problem, the model is further extended with the distortion of the lens system. Without loss of generality, the distortion from a point with normalized coordinates x to a point with distorted normalized coordinates x̃ is described by

    x̃ = f(x).                                                 (2)

Typical distortion components are radial and tangential, as introduced by Brown [13]. x̃ is then projected into the image plane with the distorted pixel coordinates p̃ = [ũ, ṽ, 1]^T following

    p̃ = K x̃.                                                 (3)

Combining Eqs. (1), (2) and (3), p can be restored according to

    p = K f^(-1)(K^(-1) p̃)                                    (4)

with f^(-1)(·) the inverse function of f(·), if K and f(·) are known.

Using a well-designed calibration pattern (checkerboard, planar circle or ring pattern), feature points are extracted in images of this pattern captured at different positions and orientations. With the determined 2D positions of the planar feature points, K and f(·) can be solved employing diverse numerical approaches for the chosen distortion model. All relevant parameters of K and f(·) are estimated to minimize the overall reprojection error of the feature points in the original images (typically in the least-squares sense). Thereby, the extrinsic parameters of the camera, i.e., the rotation matrix R and the translation vector t of the camera frame with respect to each pattern frame of the calibration images, are solved simultaneously.
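The projection and restoration chain of Eqs. (1)-(4) can be sketched as follows. The intrinsics and the Brown distortion coefficients below are illustrative placeholders, not calibrated values, and the fixed-point iteration is one common way to realize f^(-1)(·), which has no closed form:

```python
import numpy as np

# Hypothetical intrinsics (zero skew) and Brown distortion coefficients
# (k1, k2 radial; p1, p2 tangential) -- illustrative values only.
K = np.array([[1600.0,    0.0, 1224.0],
              [   0.0, 1600.0, 1025.0],
              [   0.0,    0.0,    1.0]])
k1, k2, p1, p2 = -0.25, 0.08, 1e-4, -2e-4

def distort(x):
    """f(.): Brown radial/tangential distortion of normalized coordinates."""
    xs, ys = x
    r2 = xs * xs + ys * ys
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2
    dx = 2.0 * p1 * xs * ys + p2 * (r2 + 2.0 * xs * xs)
    dy = p1 * (r2 + 2.0 * ys * ys) + 2.0 * p2 * xs * ys
    return np.array([xs * radial + dx, ys * radial + dy])

def undistort(xd, iters=20):
    """f^-1(.): fixed-point iteration, since f has no closed-form inverse."""
    xd = np.asarray(xd, dtype=float)
    x = xd.copy()
    for _ in range(iters):
        xs, ys = x
        r2 = xs * xs + ys * ys
        radial = 1.0 + k1 * r2 + k2 * r2 ** 2
        dx = 2.0 * p1 * xs * ys + p2 * (r2 + 2.0 * xs * xs)
        dy = p1 * (r2 + 2.0 * ys * ys) + 2.0 * p2 * xs * ys
        x = (xd - np.array([dx, dy])) / radial
    return x

def project(X):
    """Eqs. (1)-(3): 3D camera-frame point -> distorted pixel coordinates."""
    x = X[:2] / X[2]                        # normalized coordinates
    return K @ np.append(distort(x), 1.0)   # homogeneous pixel coordinates

def restore(p_dist):
    """Eq. (4): distorted pixel -> ideal (distortion-free) pixel."""
    x_dist = np.linalg.solve(K, p_dist)[:2]
    return K @ np.append(undistort(x_dist), 1.0)
```

The fixed-point inversion converges quickly here because the distortion terms are small compared to the identity part of f(·).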

Figure 1. (a): Chromatic aberrations of a lens system for red and green light; red and green light rays are indicated by red dashed lines and green solid lines, respectively. The sketch shows the polychromatic light, the lens system, the optical axis and the sensor plane, together with the transversal and the axial chromatic aberration. (b): The corner position is first determined in the RGB image corrected using the calibration data and marked with a red plus sign. Then, the corner position is determined in each channel R, G and B separately; these results are shown with a green cross in each image. The distance between the plus signs and the crosses indicates that the color fringes and the displacement of feature points caused by TCA remain uncorrected after the standard camera calibration. All corners are determined using gray values.

2.2 Problems

In most computer vision scenarios, the light sources used for illumination are polychromatic. Due to the different refractive indices of the involved lens system for different wavelengths, chromatic aberrations are unavoidable. They can be divided into axial chromatic aberration (ACA), the variation of the focal length, and transversal chromatic aberration (TCA), the displacement of the projections of the same object point in the sensor plane of a camera. In Fig. 1(a), the chromatic aberrations are illustrated for red and green light. In this work, our aim is to analyse the effects of TCA on the camera calibration and to compensate for the aberrations.

Some effects caused by TCA in a color image are illustrated in Fig. 1(b). Color fringes are clearly visible in the generated RGB image and give incorrect color information, especially in regions with high contrast such as the edges of a checkerboard pattern. These color fringes are due to the fact that one object point is not projected to the same pixel position in the R, G and B channels.
Besides, we compared the positions where feature points (here: corners of the checkerboard) were detected in the RGB image with the positions where these feature points were detected in the single channels. The feature positions in the RGB image (red plus signs) do not coincide with those in the single channels (green crosses), and the displacement is non-negligible: 1.2 pixels for the R channel, 0.4 pixels for the G channel and 1.0 pixel for the B channel. This subsequently leads to an inaccurate estimation of spatial information. The standard calibration approach, using only one unique camera model for all channels, is unable to remove these undesired effects, since it utilizes the RGB data, which is corrupted by TCA, and does not consider the color channels separately.

3. CALIBRATION WITH TRANSVERSAL CHROMATIC CORRECTION

As shown in Fig. 1(a), the optical parameters are distinct for each wavelength, and different camera models should thus be considered. Due to the limitation of physical devices, at most n camera models can be set up for an n-channel camera. In consideration of this, our approach treats each single channel of a multichannel camera individually, while the standard approach uses one unique camera model for all channels. Using the models estimated through calibration, pixels of the different channels are remapped onto a reference plane after undistortion and correction of TCA. In the following subsections, for a better understanding of the proposed approach, the diverse correction methods and the general calibration approach are introduced sequentially.

3.1 Polynomial-based Correction

As described by Kingslake [14], TCA can be further divided into two different aberrations: the chromatic variation of distortion, and the transversal color distortion caused by the refractive index of the lens elements changing with wavelength. Mallon and Whelan [9] followed this idea and employed a polynomial-based model in their approach for the removal of TCA in color images. For each color plane, i.e., for R, G and B in the case of RGB cameras, and with respect to a common reference channel, for instance B, the chromatic variation of distortion is approximated using a polynomial up to the fourth order, while the transversal color distortion is modeled as a linear term. With this model, they were able to significantly reduce the pixel misalignments between the color planes R, G and B and thus improved the quality of the images. This approach can also be applied to any images captured by an n-channel camera. For illustration, the realignment of the pixels in the image plane of channel 2 with reference to the image plane of channel 1 is carried out based on a polynomial-based correction function c(θ_{2→1}, ·), where θ_{2→1} is a parameter vector with seven coefficients for the polynomial model of the aberrations. The distorted pixel coordinates p̃_1 of a point of interest in the image plane of channel 1 can be determined from the distorted pixel coordinates p̃_2 of the same point in the image plane of channel 2:

    p̃_1 = c(θ_{2→1}, p̃_2).                                   (5)

Despite their analysis of the TCA model, Mallon and Whelan did not extend the polynomial-based model to multichannel camera calibration for general computer vision tasks. To utilize this model in the context of our work, we give here the straightforward extension

    p_1 = K_1 f_1^(-1)(K_1^(-1) c(θ_{2→1}, p̃_2)),             (6)

which is immediately available after substituting Eq. (5) into Eq. (4) for channel 1.
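As a concrete illustration of the correction function in Eq. (5), the following is a hedged sketch of one plausible seven-coefficient parameterization: a radial polynomial (orders 2-4 about an assumed center c0) for the chromatic variation of distortion, plus a per-axis linear term (scales bx, by and shifts tx, ty) for the transversal color distortion. The exact parameterization used by Mallon and Whelan may differ; c0 and the radius normalization are assumptions:

```python
import numpy as np

c0 = np.array([1224.0, 1025.0])   # assumed correction center (image center)

def c_correct(theta, p2):
    """Realign a distorted pixel p2 of channel 2 onto the plane of channel 1."""
    a1, a2, a3, bx, by, tx, ty = theta
    d = p2 - c0
    rn = np.linalg.norm(d) / 1000.0          # normalized radius (conditioning)
    u = d / max(np.linalg.norm(d), 1e-12)    # unit radial direction
    radial = a1 * rn**2 + a2 * rn**3 + a3 * rn**4
    return p2 + u * radial + np.array([bx * d[0] + tx, by * d[1] + ty])

def fit_theta(P2, P1):
    """Least-squares fit of theta from known feature correspondences.
    This sketch is linear in theta, so one direct solve suffices."""
    rows, rhs = [], []
    for p2, p1 in zip(P2, P1):
        d = p2 - c0
        rn = np.linalg.norm(d) / 1000.0
        u = d / max(np.linalg.norm(d), 1e-12)
        rows.append([u[0]*rn**2, u[0]*rn**3, u[0]*rn**4, d[0], 0.0, 1.0, 0.0])
        rows.append([u[1]*rn**2, u[1]*rn**3, u[1]*rn**4, 0.0, d[1], 0.0, 1.0])
        rhs.extend([p1[0] - p2[0], p1[1] - p2[1]])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return theta
```

In the paper the parameters are estimated robustly and iteratively (Eq. (16) in Sec. 3.4); the plain least-squares solve above is only the simplest instance.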
Equation (6) gives the remapping of distorted pixels from the image plane of channel 2 into the ideal image plane of channel 1. In practice, to improve the computational efficiency, the inverted form of Eq. (6),

    p̃_2 = c(θ_{1→2}, K_1 f_1(K_1^(-1) p_1)),                  (7)

is most commonly used, since it relies on the forward distortion function f_1(·). For convenience, the correction of TCA using this polynomial-based model is denoted by P-based correction.

3.2 Homography-based Correction

The polynomial-based correction of TCA was introduced merely to improve the visual quality of color images; the principal aim of this model was not an exact estimation of each feature position. This could lead to inaccuracy of the estimated aberration model. Instead of an approximately modeled transversal color distortion with a realignment of distorted pixel coordinates, a camera model and homography-based TCA correction realigning undistorted pixel coordinates is proposed. Formally, there are n projected image points of a given 3D object point in the n distinct ideal image planes of the n channels. Taking the camera frames of channels 1 and 2 for instance, the transformation between the homogeneous coordinates X_1 and X_2 of a same object point in the two camera frames is written as

    X_1 = [ ^1R_2   ^1t_2 ] X_2 = ^1T_2 X_2                   (8)
          [ 0       1     ]

where ^1R_2 and ^1t_2 denote respectively the rotation matrix and the translation vector for the coordinate transformation from frame 2 to frame 1, and ^1T_2 is the corresponding homogeneous transformation matrix. The normalized coordinates x_1 of X_1 and x_2 of X_2 in the two frames are associated as follows:

    x_1 = (Z_2 / Z_1) ^1R_2 x_2 + (1 / Z_1) ^1t_2.            (9)

Since there are only slight translations and rotations between the camera frames of the different channels, the approximations Z_2/Z_1 ≈ 1 and ||^1t_2||_p / Z_1 ≈ 0 are considered valid for most applications where Z is much greater than the corresponding focal length; ||·||_p denotes the p-norm of a vector. Therefore, it is reasonable to assume that there is always a homography ^1H_2 satisfying

    x_1 × (^1H_2 x_2) ≈ 0                                     (10)

for arbitrary object points, where × denotes the vector cross product operator. Regarding the camera model described in Sec. 2 and Eqs. (1), (4) and (10), the transformation of the distorted pixel coordinates p̃_2 of channel 2 to the distortion-free pixel coordinates p_1 of channel 1 is written as

    p_1 ≈ λ K_1 ^1H_2 f_2^(-1)(K_2^(-1) p̃_2)                  (11)

where λ is a non-zero scale factor. In practice, the inverted form of Eq. (11),

    p̃_2 ≈ K_2 f_2((1/λ) ^2H_1 K_1^(-1) p_1),                  (12)

is most commonly used. For convenience, the correction of TCA using this homography-based model is denoted by H-based correction. Until now, there is always an unknown scale factor λ in Eqs. (11) and (12). This factor can be omitted by rescaling the term obtained after the coordinate transformation with the homography ^1H_2 or ^2H_1, so that the rescaled term is in the form of normalized coordinates x = [X/Z, Y/Z, 1]^T with X, Y arbitrary and Z ≠ 0.

3.3 Rotation Matrix-based Correction

As discussed in the last subsection, the normalized coordinates x_1 and x_2 in the two different camera frames of channels 1 and 2 can be associated with each other using Eq. (9). Recalling the approximations Z_2/Z_1 ≈ 1 and ||^1t_2||_p / Z_1 ≈ 0, Eq. (9) simplifies to

    x_1 ≈ ^1R_2 x_2.                                          (13)

In contrast to the former model based on a homography, the rotation matrix ^1R_2 is employed here to correct the TCA between different channels:

    p_1 ≈ K_1 ^1R_2 f_2^(-1)(K_2^(-1) p̃_2).                   (14)

In practice, similar to Eq. (12), the inverted form of Eq. (14),

    p̃_2 ≈ K_2 f_2(^2R_1 K_1^(-1) p_1),                        (15)

is most commonly used.
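The backward maps of Eqs. (12) and (15) can be sketched as follows: for each ideal pixel p1 of the reference channel 1, they yield the distorted pixel of channel 2 at which the image must be sampled. K1, K2, H21 (for ^2H_1), R21 (for ^2R_1) and the forward distortion f2 are placeholders to be filled from a calibration:

```python
import numpy as np

def remap_H(p1, K1, K2, H21, f2):
    """Eq. (12): homography-based remapping of an ideal pixel of channel 1."""
    x = H21 @ np.linalg.solve(K1, p1)   # ^2H_1 K1^-1 p1
    x = x / x[2]                        # rescaling removes the factor lambda
    return K2 @ np.append(f2(x[:2]), 1.0)

def remap_R(p1, K1, K2, R21, f2):
    """Eq. (15): rotation matrix-based remapping."""
    x = R21 @ np.linalg.solve(K1, p1)
    x = x / x[2]                        # renormalize to [X/Z, Y/Z, 1]^T
    return K2 @ np.append(f2(x[:2]), 1.0)
```

With identical intrinsics, identity transformation and no distortion, both maps reduce to the identity, which makes a convenient sanity check.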
For convenience, the correction of TCA using this rotation matrix-based model is denoted by R-based correction.

3.4 Approach

A great advantage of the proposed approach is its implementation on top of the most widely used calibration routine, which is extended in this paper with some pre- and postprocessing procedures. Therefore, only minimal modifications have to be made to the existing approaches to obtain the improved calibration quality. As already stated, n camera models are considered for an n-channel camera to be calibrated. Using the same principle as described in Sec. 2, images of a planar pattern are taken for calibration purposes. Then, these multichannel images are decomposed into gray images which are grouped in such a way that each group contains all the calibration images corresponding to a single channel. Subsequently, the standard calibration approach is applied for each channel separately. Up to this step, all camera intrinsic matrices K_i and distortion functions f_i(·) with i ∈ {1, …, n} for the n channels are available. To compensate the TCA between the different channels and thus improve the accuracy of the calibration results, pixels from all channels are remapped into the ideal image plane of a reference channel using the TCA corrections introduced in Secs. 3.1-3.3, following Eqs. (6), (11) and (14) respectively. (For single-chip RGB cameras with the color pattern RGGB, for instance, the B channel is chosen as the reference channel.) Without loss of generality, channel 1 is chosen here as the reference. The remapping of pixels from channel 2 to channel 1 is used to illustrate the proposed calibration approach.

Before applying any remapping of pixels, the corresponding parameters must be determined for each correction: the vector θ_{2→1} of the polynomial c(θ_{2→1}, ·) for the P-based correction, the homography ^1H_2 for the H-based correction and the rotation matrix ^1R_2 for the R-based correction. Since all relevant intrinsic and extrinsic parameters are simultaneously estimated by the standard calibration approach, the positions of the used feature points are also available in the camera frames and in the image planes.

For the P-based correction, as suggested by Mallon and Whelan [9], the 2D positions of the feature points determined in the decomposed gray images are used, and the known correspondences between feature points in the image planes of channels 1 and 2 are considered. The parameter vector θ_{2→1} is then solved iteratively in a Gauss-Newton scheme using robust least-squares techniques [15] according to

    θ̂_{2→1}^(k+1) = θ̂_{2→1}^k − η [ (∂e^T/∂θ_{2→1}) Q (∂e/∂θ_{2→1}^T) ]^(-1) (∂e^T/∂θ_{2→1}) Q e(θ̂_{2→1}^k)    (16)

where θ̂_{2→1}^k and θ̂_{2→1}^(k+1) are the estimates of θ_{2→1} in the k-th and (k+1)-th iteration respectively, e(θ̂_{2→1}) is the remapping error with respect to the current estimate of θ_{2→1}, the derivatives are evaluated at θ̂_{2→1}^k, and η < 1 is a constant ensuring for each iteration a decrease in the cost J = e^T Q e with the estimated covariance matrix Q (normally assumed to be an identity matrix).

For the H-based correction, the normalized 3D coordinates x of each feature point in the camera frames of channels 1 and 2 are used to solve for ^1H_2. Note that, regarding Eq. (10), x_1 and ^1H_2 x_2 may differ from one another by a scale factor λ, but λ always has a value close to 1. In consideration of this, ^1H_2 can be determined applying the direct linear transformation (DLT) algorithm [16,17].
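A standard DLT estimate of ^1H_2 from correspondences satisfying Eq. (10) can be sketched as follows; this is the textbook algorithm, and the paper's actual implementation (e.g., with the coordinate normalization of [18]) may differ in detail:

```python
import numpy as np

def dlt_homography(x2, x1):
    """x1, x2: (N, 3) arrays of homogeneous correspondences, N >= 4.
    Returns H (up to scale) minimizing the algebraic error of x1 x (H x2) = 0."""
    A = []
    for (u2, v2, w2), (u1, v1, w1) in zip(x2, x1):
        # two independent rows of the cross-product constraint per point
        A.append([0, 0, 0, -w1*u2, -w1*v2, -w1*w2, v1*u2, v1*v2, v1*w2])
        A.append([w1*u2, w1*v2, w1*w2, 0, 0, 0, -u1*u2, -u1*v2, -u1*w2])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)   # right singular vector of smallest value
```

The text's observation that λ is close to 1 is consistent with this scheme: the recovered H is only defined up to scale and is fixed afterwards by the rescaling described above.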
For the R-based correction, according to Eq. (8), the rotation matrix ^1R_2 is available once the transformation matrix ^1T_2 is known. It is assumed that altogether M feature points in 3D are used for the calibration, with M > 3. With X_{I,k} and X_{II,k} the homogeneous coordinates of the k-th feature point in the camera frame of the first channel and in the camera frame of the second channel respectively, XX_I and XX_II denote the coordinate matrices [X_{I,1}, X_{I,2}, …, X_{I,M}] and [X_{II,1}, X_{II,2}, …, X_{II,M}]. XX_I and XX_II are then associated using Eq. (8):

    XX_I = ^1T_2 XX_II.                                       (17)

In the least-squares sense, ^1T_2 can be obtained by rewriting Eq. (17) as

    ^1T_2 = XX_I XX_II^T (XX_II XX_II^T)^(-1).                (18)

For better numerical stability and improved computational efficiency, a normalization [18] of the coordinates XX_II and a Cholesky decomposition of XX_II XX_II^T could be considered.

4. PERFORMANCES

In Sec. 3, we described a new multichannel camera calibration approach using different models for the correction of TCA. To evaluate the performance of the proposed approach with respect to the standard calibration, two complementary experimental setups were considered. The system to be calibrated consisted of a Bayer-pattern RGB camera with a resolution of 2448 × 2050 pixels and a lens with a focal length of 16 mm. After the calibrations, all acquired images were remapped using Eqs. (2), (3), (6) and (7).

In the first experiment, we analyzed the effect of the object colors on the aberrations remaining after the calibration of the target system. We first calibrated the system with the standard approach and with the approach proposed in this paper. Then, these calibrations were used to remap the acquired images. For the evaluation, a given object available in several different colors was imaged, so that we could follow how the feature points of this object were displaced in the remapped images for the different object colors.
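The closed-form least-squares solution of Eq. (18), used above to obtain ^1R_2 for the R-based correction, can be sketched as follows; the 4 × M homogeneous coordinate matrices are assumed as described in Sec. 3.4:

```python
import numpy as np

def estimate_T(XX_I, XX_II):
    """Eq. (18): ^1T_2 = XX_I XX_II^T (XX_II XX_II^T)^-1 for 4 x M matrices
    of homogeneous camera-frame coordinates with M > 3 (so that the Gram
    matrix is invertible). The rotation ^1R_2 is the upper-left 3x3 block."""
    G = XX_II @ XX_II.T
    return XX_I @ XX_II.T @ np.linalg.inv(G)
```

As the text suggests, normalizing the coordinates and replacing the explicit inverse by a Cholesky solve of the Gram matrix would improve the numerical behavior of this plain sketch.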
For each calibration approach and for each feature, we then calculated the standard deviation of the feature pixel positions over the different object colors.

Figure 2. Effects of (a) the object color and (b) the light source on the displacement of feature points in remapped images, using the standard calibration approach (black triangles; RMS 0.99 and 0.93 pixels), our approach with P-based (green crosses; RMS 0.50 and 0.52), H-based (red circles; RMS 0.45 and 0.51) and R-based (blue points; RMS 0.42 and 0.48) corrections. In each plot, the vertical axis shows the standard deviation (in pixels) and the horizontal axis the distance (in pixels, 0 to 900) to the principal point; all feature points are sorted by their distance to the principal point of the images. Lower standard deviations of the feature pixel positions indicate that the corresponding calibration is more robust against chromatic aberrations. The root mean square values (RMS) of the standard deviations are given for comparison.

The positions of the features were determined in the gray images converted from the corresponding RGB images after remapping. Our aim was to measure the displacements when objects have different reflectance spectra but are still illuminated by the same light source as utilized for the calibration, namely a reflector lamp. We simulated this by acquiring one object illuminated by the light source, which we limited spectrally using color filters with central wavelengths of 450, 500, 550, 600 and 650 nm and bandwidths of about 40 nm. This ensured that the 3D positions of the feature points remained the same for all the object colors considered. For each feature point, the standard deviation of the positions found for the different object colors is plotted in Fig. 2(a). The horizontal axis gives the distances of the feature points to the principal point of the images.
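The evaluation metric of Fig. 2 can be sketched as follows; the (C, N, 2) array layout and the 2D definition of the per-feature deviation are assumptions of this sketch, not stated in the paper:

```python
import numpy as np

def feature_std_and_rms(positions):
    """positions: (C, N, 2) pixel positions of N features detected under C
    object colors (or light sources) after remapping. Returns the per-feature
    standard deviation (the plotted quantity) and its RMS (the legend value)."""
    mean = positions.mean(axis=0)                    # (N, 2) mean position
    dist = np.linalg.norm(positions - mean, axis=2)  # (C, N) distances to mean
    std = np.sqrt((dist ** 2).mean(axis=0))          # (N,) per-feature std
    rms = np.sqrt((std ** 2).mean())                 # scalar RMS over features
    return std, rms
```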
The calibration approach we proposed outperforms the standard approach, which does not take TCA into account: it yields lower standard deviations, i.e., fewer remaining aberrations. The H-based and the R-based corrections are less sensitive to the variation of the object colors than the P-based correction.

In the second experiment, we analyzed the robustness of the calibrations against the variation of light sources. First, the camera was calibrated under sunlight, using the standard and the proposed approaches. Then, images of the checkerboard were captured under different light sources: sunlight, white LEDs, blue LEDs and red LEDs. Finally, the calibration results obtained under sunlight were utilized for the remapping of these acquired images. As explained before, we calculated the standard deviation of the feature pixel positions over the different light sources. The results shown in Fig. 2(b) are similar to those of the first experiment: the proposed approach leads to calibration results that are more robust against the variation of illumination.

To obtain a more comprehensive evaluation of the standard and the proposed calibration approaches, further experiments were carried out for different optical systems. We utilized two RGB cameras with different resolutions and two lenses with different focal lengths. The evaluation results for all optical systems are summarized in Tab. 1, together with the results from the previously presented optical system. The two experiments, i.e., concerning the object color and the illumination respectively, are denoted experiment 1 and experiment 2. The standard deviation for each optical system is calculated as previously explained and is given in the last two columns of the table. The proposed approach using the P-based, the H-based or the R-based correction outperforms the standard approach without any correction of TCA (denoted as "none" in the table).
The improvement obtained with our correction is most evident for the optical system analyzed previously (resolution of 2448 x 2050 pixels and focal length of 16 mm), where the correction almost halves the standard deviation. For the other optical systems, the improvement is also significant. Even for a camera with a relatively low resolution, our approach with TCA correction leads to a lower standard deviation.
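One way a channel-wise correction could be realized — purely as an illustrative sketch, since the paper's P-, H- and R-based models are not reproduced here — is to warp the feature positions detected in the red and blue channels onto those of the green channel with a least-squares homography (in the spirit of a homography-based correction, which we assume the H-based variant resembles) before running the usual single-camera calibration:

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares homography mapping src -> dst via the DLT, e.g. for
    aligning red/blue channel corner positions onto the green channel to
    reduce transversal chromatic aberration.
    src, dst: arrays of shape (n, 2) with n >= 4 correspondences."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Apply a 3x3 homography to (n, 2) points in homogeneous form."""
    pts = np.asarray(pts, dtype=float)
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]
```

With exact correspondences the recovered homography reproduces the inter-channel displacement; in practice, the checkerboard corners detected separately in each color channel would serve as `src` and `dst`.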

Table 1. Experiment results for the different optical systems calibrated using the standard and the proposed calibration approaches. Each calibrated system was tested with the two complementary experiment setups; the standard deviations are given for the standard calibration (without correction, denoted as "none") and for the calibration with TCA correction using the P-, H- and R-based methods. The experiment concerning the color of the acquired object is named experiment 1 and the experiment concerning the illumination is named experiment 2.

  sensor CFA    resolution     lens focal   correction   deviations in pixel
                               length                    exp. 1     exp. 2
  Bayer: RGGB   2448 x 2050    16 mm        none         0.99       0.93
                                            P-based      0.50       0.52
                                            H-based      0.45       0.51
                                            R-based      0.42       0.48
  Bayer: RGGB   2448 x 2050    25 mm        none         0.29       0.35
                                            P-based      0.22       0.25
                                            H-based      0.19       0.19
                                            R-based      0.18       0.21
  Bayer: RGGB   1280 x 960     25 mm        none         0.20       0.23
                                            P-based      0.19       0.21
                                            H-based      0.16       0.17
                                            R-based      0.18       0.19

5. CONCLUSIONS

A novel multichannel camera calibration approach is proposed in this paper; it considers each color channel individually and is based on different transversal chromatic aberration models. Compared to the standard approach, the new approach provides more accurate calibration results in most cases (in all our tests) and should consequently lead to more reliable estimation results in computer vision applications. Moreover, besides the existing TCA models, further models and correction methods that outperform the existing ones are introduced. Since the proposed approach builds on the most widely used calibration routine, only minimal modifications to existing implementations are needed to obtain the improved calibration quality.